Business Karaoke Podcast with Brittany Arthur
S3 E10 | The Lamont Lens: Society, Trust and the AI Age
Dr. Christopher Lamont is Professor of International Relations at Tokyo International University and Deputy Head of Program for AI and Global Governance at the Global Governance Institute in Brussels. His research career has been built around transitional justice, international criminal law, and how societies rebuild after institutional collapse. In this conversation, Brittany brings that experience to the AI age.
This is a conversation about trust. What destroys it, how societies rebuild it, and why it may be the most consequential word in the AI era that nobody is taking seriously enough.
IN THIS CONVERSATION
→ Whether societal change really has a before and after, and what that means for how we approach AI transformation
→ What digital sovereignty actually means in 2026, and why it is often being pursued with the wrong tools
→ Why which foundation model your company adopts is a values decision
→ What thriving post-crisis societies have in common, and how that applies to building AI-ready organisations
→ The accountability question that policymakers and business leaders are not asking
FIND US
→ Instagram: @businesskaraokepodcast
What we see in societies that have experienced significant traumatic events or dislocations, such as conflict, is the complete breakdown of trust. Not just trust between a person and the state, or some governing authority that might have abused that trust by committing serious human rights abuses or doing other things, but also at a very interpersonal level as well.
SPEAKER_01: Today we're joined by Dr. Christopher Lamont. Dr. Lamont is Professor of International Relations at Tokyo International University, Deputy Head of Program for AI and Global Governance at the Global Governance Institute in Brussels, and, in his free time, the co-host of the Age of AI podcast. Chris is also my friend, so I did my best to call him Dr. Lamont, but I think I failed miserably and called him Chris most of the time. His research career began around transitional justice, international criminal law, and how societies rebuild after institutional collapse. This conversation really does center around trust: what destroys it, how societies rebuild it, and why it may be the most consequential word in our AI era. I always so enjoy reading your comments and your thoughts, so please reach out with your thoughts about this episode. Okay, everyone, let's jump in. So, Christopher Lamont, welcome to the Business Karaoke Podcast for our Human Civic AI Club series. Welcome. Thank you.
SPEAKER_00: It's a pleasure to be here.
SPEAKER_01: So I wanted to share a little bit about why I was really interested in your specific views and your lens on the world. As I was preparing for our interview, my AI and I decided to call you the Lamont Lens. This was the Lamont Lens.
SPEAKER_00: Interesting. Oh wow. Okay. Well, hopefully I live up to that lens, or the lens is one that makes things clearer, I suppose.
SPEAKER_01: So, the Lamont Lens. Let me share a little bit of what I was saying about why I was really interested for Chris to come on. For those of you wondering why I'm not calling Dr. Chris Lamont "Dr. Lamont": Chris and I are also friends, so I'm allowed to call you Chris.
SPEAKER_00: Yes, I prefer to be called Chris also.
SPEAKER_01: I'm really interested when it comes to AI. As you know, at DCJ we're human-centered AI, which means: don't begin with the technology, begin with the human. And I'm going to share a story that was told to me recently, and then how it informs our thinking. I was chatting with one of the professors from GRIPS, the National Graduate Institute for Policy Studies here in Japan, and we were talking about AI and cybersecurity. And he said, "I don't know why everyone is jumping up and down about security now." And I thought, that's really interesting, because I think AI and security is probably a really big issue. So I said, "Do you want to expand on that? What do you mean?" And he said: a hundred years ago, when you wanted to send a letter, you would send somebody, and they would jump on a boat and go somewhere, but this person could equally be assassinated or murdered or kidnapped. And not that any of this is funny, but what it highlighted was that yes, we are in a new time with the AI age, but the issues of security, the issues of unforeseen circumstances, have always been with us. And I'd really like to begin with how that's connected to you: I've always found it so interesting how you have seen societies go through immense change. Immense change. So we can start nice and light and talk about how you've studied war crimes in the Balkans and things like that. I mean, we're not three minutes into the episode, but we can talk about war crimes.
Before we jump into the technology and the people, and even before jumping to AI, I'd really love to hear your thoughts: when a society is going through immense change, what are some of the things that usually lead up to those kinds of changes? Is there an absolute switch, a before and after this moment? And then the rebuilding. With that, I thought there was no one better than you to talk us through how societies evolve, and then to compare that with where we are with AI. That is my thinking for our episode today.
SPEAKER_00: Okay, well, lots of questions to respond to coming out of that. I'll start with what you ended with: the question of transformation as a switch, the before and after. In most change processes, where societies have experienced some sort of massive rupture, it's actually difficult to identify that one point in time, right? In countries that have experienced civil conflict, for example, even a question like "when did that conflict start?" is hard. Was it when the shooting started, or when there was a political breakdown, or when there was some incident preceding that? And when does it end? Is it when a peace agreement is signed, even though there are still high levels of societal suspicion, distrust, walls separating communities, for example? That fluidity in how we experience change cuts against how we talk about change, because we tend to assume there's a beginning and an end to any major historical episode. And then apply that to technology, and to what you mentioned early on: what makes AI different from other security challenges, or other governance challenges that we've encountered in the past? Think about AI and surveillance. States have long had the capacity to surveil; decades ago, you would use people to do it. But there are a couple of differences that have been creeping up on us, which we might not see without perfect hindsight. One, and this gets to the broader governance conversation, is just how much data is out there about all of us. We are not necessarily live-streaming our lives as such, but as a figure of speech, in a way we are.
All this data is out there about us. Then you aggregate it. What you can do with that type of information, what it enables in terms of governance, not just in the private sector but also the public sector, for good and bad, is something previous generations did not have access to when thinking about governance or society. And I think that's an important difference. So consider this question of security: what type of security vulnerabilities do we have? Well, if an individual carrying a message in the past was somehow compromised, to go back to the analogy, then once you identified that, you could limit your exposure. But if your data systems are compromised and you don't know it, or even if you do know it but you don't know how the data is being used, or where, or what to do about it, the scale of the challenge and the potential harm it can bring about has grown significantly. So this seemingly sudden "everybody is now talking about security when it comes to AI," and about some of the cybersecurity implications, I think that's natural, particularly with developments like agentic AI. And yes, it's helpful to look to the past for parallels and see how we have addressed similar challenges, but we have to be careful not to apply the tools of the past to the challenges and problems we are facing today. Because the society we're seeking to govern has changed how it communicates. I'll use a simple example, going back 100 years. I also teach on an international program at a university.
And one of the things that's interesting to observe is that a hundred years ago, you could not have international programs the way you do today, with people from all these different continents coming together. If you were going to travel intercontinentally, it would probably be once in a lifetime, maybe twice, or, if you were super wealthy, a few times. Those journeys took a lot of time. And once you went someplace else, you needed to learn the language and adapt your life. It was a lifetime commitment. But in the 2020s, for young people in a lot of different parts of the world, that's natural: you think about which country you want to study in. So on the one hand, mobility has increased. But also think about the world in which they live. If you come to a country like Japan, you remain connected on social media with all your friends back home. You can make new friends, virtually, who are on different continents. And you can remain in your own media environment from back home; you can keep watching your favorite series. So the experiences people have today are much more interconnected than they were in the past. And if you think about that in terms of AI and governance, it's also true there, because what we're seeing at a very individual level, your own personal experiences, how you experience the world, has also changed. The same goes for business: think about supply chains and all the other elements of interconnectivity.
And so it's an interesting time to be talking about this in 2026, of course, because it seems that a lot of the institutions that facilitated that interconnectedness are under increasing stress. But there's no way right now to pull back, to turn back the clock on these societal and individual-level changes and on how technology has changed our lives. So the policy responses should reflect the world we live in today, not 100 years ago.
SPEAKER_01: That's so interesting. The point I wanted to dive into a little more is when you mentioned that the tools of yesterday can certainly inform, but shouldn't necessarily form, our approach to tomorrow: use them in an informative rather than a formative way. And I think this is an opportunity to break down, or even define, the idea of sovereignty and digital sovereignty. So, I'm from an island; even though I have a very odd accent, it's still an Australian accent to some extent. I remember my grandmother telling me that during the Second World War, "they," they being the government of Australia, basically told everybody in the papers: don't worry, the war will not reach our borders, because we are an island and the Japanese have bicycles. How are they going to get from Japan to here? Honestly, there was a front-page cover with a Japanese person on a bicycle and the Australians on land looking safe and laughing.
SPEAKER_00: Right.
SPEAKER_01: And then I said, "Well, how did that end up?" She goes, "Well, Darwin was bombed, you know."
SPEAKER_00: Right. Aviation.
SPEAKER_01: Aviation. And so the idea that sovereignty used to mean "this is my land, you're in your land," that's very much not what sovereignty means anymore.
SPEAKER_00: Yes. So this is a very good question about sovereignty itself. I'd like to focus, though, on digital sovereignty, which you mentioned, because that's something that is increasingly talked about and that a lot of states have begun to embrace. There's sovereignty as we knew it in the past century, and how we see the world is actually very much informed by sovereignty. If you were to imagine a world map, you would imagine a world map with state borders. That's what you see when you close your eyes: flags in all the countries. Of course, the reality of geography is different, but nonetheless, that's what informed how we see sovereignty. It's territorial, it's linked to a population and to political authority, and all these things were nicely aligned. And in the world I described, with this deep interconnectedness, fluidity, and mobility, what sovereignty means in practice has begun to shift. It's shifted so much that policymakers are now looking for tools. Think about other strategic industries in the 20th century, like the steel industry. This was something closely associated with national power, something you could subsidize and prop up, and even the smallest of countries could have their own steel mill and locally produced steel. And that mindset is informing, unfortunately, how some people think about digital sovereignty.
So the thinking goes: if we just build the data centers, or if we just have a national champion, a foundation model developed domestically within our country, we'll have our own digital ecosystem that we can stick our flag on and feel better about ourselves. But as somebody who works in this field knows, it's actually much more complex than that. Think about where tech companies get funding: this often crosses borders. And think about data centers themselves, and the question of who owns the data center. For data centers built in different parts of the world as part of these initiatives to increase a state's data sovereignty, most of the know-how, the operators, and the rules governing them come from outside. So you might, on the one hand, have increased a particular capacity, but that doesn't really fit your understanding of sovereignty; it's just being used as an indicator to measure sovereignty. In this context, the traditional tools for achieving sovereignty were honed in an era when you were governing something less interconnected and less complex. And when I say interconnectedness, I'm not just talking about interconnectedness across borders, but interconnectedness within the AI ecosystem itself. You have these different regulatory instruments governing different parts of it: you might be thinking about the services aspect, or the design aspect through the EU AI Act, or platforms.
All of these are interconnected, meaning that if you regulate one particular aspect, you might begin to feel like you're playing whack-a-mole in terms of governance. And, as we mentioned, the change taking place seems obvious when you talk about it in a conversation like this, but in terms of groupthink, when you have people in a room thinking about what can be done to achieve digital sovereignty, they tend to fall back on some of the same indicators I mentioned before.
SPEAKER_01: And if we think about digital sovereignty, maybe we don't even need the word "digital"; maybe it's just sovereignty in 2026. If we think about sovereignty traditionally and what it means, it's deeply emotive to people. If you were to take somebody's flag and burn it versus fly it, people have deeply emotional reactions to that. But if you were to ask somebody how they feel about a data center, or about their own data, I have not seen such a natural human response. And I'm wondering about the symbols of flag and nation, who you are and who you are not. Many people who listen to our podcast are similar to us: either global professionals in Japan or Japanese professionals globally. And we always get the question, "Where are you from? Where are you from from?" Now, as you mentioned, there's this question of when change happened, when things became borderless. For example, my husband is Mexican; I have an Australian-Mexican child. So in many circumstances the borders are now, in a way, surpassed. And I think it's quite interesting if we go back to the symbolism of a flag. Will we get to a point where people feel as deeply passionate about their digital sovereignty as they do about national sovereignty? And if they did, what would that look like?
SPEAKER_00: This is interesting. I think this gets to one of the reasons why this conversation is beginning to have such an outsized presence: it resonates with people's sense of identity. If you look at digital sovereignty conversations in different contexts, they mean different things. Take some American understandings of this term, of which there are many, and we can get into that a little later. One would see it as: digital sovereignty means that American tech firms should not be subjected to foreign regulatory oversight. And that's very emotive, right? These are US companies and these are foreign regulators, which creates a very zero-sum image of digital sovereignty and the digital space. For others, it's a developmental goal: our country is facing developmental challenges, and here is this remarkable new technology. Does it have the potential to level the playing field? Can countries skip certain stages of development? And therefore we won't ask a lot of questions about, well, what the data centers are actually doing, or what the individual data protections are at that level. And I want to put a caveat on what I said earlier: while our experiences are much more interconnected, and while the technological infrastructure is much more interconnected, we still live in a world of nation-states. Perhaps the most important identifying marker people possess is their passport, the country they're from, as you also mentioned. It's a very emotive identity as well.
And you do have models of digital sovereignty where countries have sought to essentially develop their own ecosystems domestically, China being the biggest example. But of course, China has a large population and very top-down governance, where national leaders establish very ambitious targets, and did so even before AI, for other digital goals. It's also a country seeking to export its technologies through mechanisms like the Digital Silk Road; the flag is very visible in that sense. But most countries in the world are not the US or China, obviously. How do those countries see digital sovereignty? If you think about interconnectedness, one of the places where this is most visible, where conversations about governance, but also the challenges associated with governance, are coming to a head, is Europe. That's not too surprising, because the European Union was an early mover in attempting to regulate artificial intelligence. But in 2026 you are seeing a significant distance between the ambitions that originally surrounded early discussions of the EU AI Act and now the EU omnibus, which is arguably a step back from some of the larger ambitions the European Union once had for governance in this realm. In large part, that's because different EU member states now see digital sovereignty differently. Look at France: France is one that's seeking to create, and has created, national champions. But these national champions also exist within a deeply interdependent network of funding and of bringing their products to market, which means it's not like a French-owned steel mill.
In terms of actually being an indicator of sovereignty, or being a sovereign entity, in the way that, paradoxically, some of their Chinese counterparts might be, within a system where a top-down state picks national champions in the technology sector. Immense challenges, right? And I think one of the interesting aspects of this conversation is that it's very easy to move to almost caricature extremes of sovereignty and interdependence, this kind of world where nation-states matter less. I would argue that very few people in 2026 are seriously claiming that nation-states matter less today than they did in the past, given the level of geopolitical tension you are seeing in the world. But the combination of technological change and societal change means that how we address problems of government-to-government relations in the 2020s is going to change. And I think this is one of the big questions about where this technological space will lead. Are we going to be in a world where it's possible to have many different digital sovereigns? Or a world where there are maybe two digital sovereigns, and you're in one of the two camps? So when you ask what digital sovereignty means in different countries, as I said, it means different things. Some countries might see digital sovereignty as saying, okay, let's distance ourselves from the US tech ecosystem, whereas others might have the opposite view. And some are also rethinking, as you mentioned at the very beginning, how we think of sovereignty in a world with a few very large big tech players. And there are interesting historical parallels here as well.
unknown: Yeah.
SPEAKER_01: That is also definitely terribly interesting. I'd like to unpack a few things there. The first is, as you mentioned, there seem to be two players: the US or China. I reflect a lot on this, not being from the US nor from China, and for a couple of reasons. One is: does that just mean there are AI haves and AI have-nots, and that's it? Is that really all we have? Another interesting angle is that, as you said, there's the US approach, which is also quite culturally informed. For example, take the foundation model Grok. Grok's flag in the ground is truth: the number one factor is that Grok should be truth-seeking. And on the other hand you have, for example, China's many foundation models, one of which is DeepSeek, where they've taken a different approach and made it an open model. And Malaysia, as of February 2026, and things are always changing, has adopted DeepSeek as their national model, which I think is really interesting. And then, in this moment, what would the theatrics of an American delegation to Malaysia be? Coming and saying, yes, let's have our good relationships, and [inaudible], and all that kind of stuff. That's one thing. But if a country is using a foundation model from another country with which it doesn't see eye to eye on many things, that is then informing not only the way the entire country is operating, but also how it's thinking and creating.
The simple adoption of a foundation model would mean more than a thousand delegation visits to a country.
SPEAKER_00: Yeah, exactly, and what you know about a country. If you're thinking about this in the context of public services, and about the need to digitize all the records a state might have so as to make these models useful, then you have to think about who has access to that data. You're in a world where, potentially, the tech players from either side know far more about not just what's happening in your country, but in other countries as well, through access to that data. And this is where, if you're thinking about the adoption and diffusion of technology, China has a slightly different perspective. If you look at the rationale behind DeepSeek and making it open, the bet is that China can diffuse its models faster and more easily than the US can, because the US tech ecosystem is embedded in a highly monetized system that countries need to buy into, and you don't want those models to be open, because they're proprietary and somebody has invested billions of dollars in them. And in some cases, that could be disproportionately harmful to US tech, as there is a perception that states are leveraging the kinds of vulnerabilities you described in order to achieve different policy outcomes. So there is significant nervousness about this. And this is where Oskar Gstrein, who I had a conversation with on digital sovereignty, likes to use the term digital autonomy.
SPEAKER_02: Right.
SPEAKER_00: As opposed to sovereignty. Because people say "digital sovereignty," but when you push them on it, they don't actually mean sovereignty; they're just using that word because it's one everybody understands. The actual meaning of sovereignty, as we talked about earlier, is very difficult to apply in the context of the technologies we are discussing, and of this type of technological competition. If you think about autonomy as a state, that's actually what you want: the ability to act autonomously. That's the feature of sovereignty you're really seeking, not necessarily putting your flag on everything. And then you start asking a different set of questions. As you mentioned, there are differences in culture between the US and China, but there are also differences of culture internally, within states and their foundation models. Even between Grok and OpenAI or Anthropic, you notice differences in the ways these models make sense of, respond to, and interact with users. So if you're thinking about autonomy, you might be thinking about value alignment. Of course, there are limitations to that, and those limitations are most visible when you begin to discuss sanctions. These companies cannot divorce themselves from the country they are based in, and from whatever that country's foreign policy goals are, whether you're talking about Beijing or Washington.
And in that sense, I think a lot of the conversations about digital sovereignty are actually a response to that nervousness: if we are reliant on a company flagged in another state to provide very basic things that we need and cannot do without, what's to prevent that country from leveraging that dependency to get us to do things we otherwise would never do? Again, what you're looking at is this question of autonomy. And autonomy might actually come from an interdependent network, but of a different constellation of countries, actors, and companies, one not as narrow as this idea of absolute sovereignty. So you could have countries, or companies in countries, with similar sets of values, data privacy being one, aligning to work together to ensure a basic level of data privacy protection across the board and to allow for easier interoperability across borders. Conversely, what this interdependence means is not that countries matter less, but that we need a greater awareness and understanding of who we are working with to provide us with something we need, and of what that means for our autonomy going forward, whether you're a government or a private firm.
SPEAKER_01That's very interesting. As someone who's Australian, I can speak to fake autonomy. Fake autonomy is what we have in Australia: yes, Australia, you're a country, but if there's ever a war, all of a sudden we're the Commonwealth. No, Australia, you're your own country, so you need a visa to work in the UK, but royal faces are all over your money. A very odd situation. But just to understand how deeply valuable this is: Eric Schmidt, the ex-CEO of Google, is deeply invested in this idea of AI autonomy. He mentioned in a recent keynote that under no circumstances can we risk losing the AI race over a few billion dollars.
SPEAKER_02Right.
SPEAKER_01And so this is where you're saying, as you mentioned, the AI systems of the US are deeply embedded in financial models: who's making money from this, how we're making money from this, what the business model is. But he was saying, just to understand how valuable this is: who cares about a few billion dollars if we lose the AI race? And I thought this was super interesting, because I'm seeing a bit of a curve. We go from the individual, then the middle is grey with lots of organizations, and then there's the end user. Let me explain what I mean. I find, and perhaps this has always been the case, that politics feels incredibly individual at the moment. It's not the US, it's Trump; it's not Japan, it's Takaichi; it's not France, it's Macron. They're individual people. Then there's this grey mass in the middle, Washington or Beijing or committees that sit somewhere, and then there's the end person, me. So how do you connect what's going on at that individual country-leadership level? How do you navigate the grey? And what does that then mean for someone who's getting up every day and just trying to run their business?
SPEAKER_00Oh, this is a great question, because you're touching on a couple of different aspects here that I can go into. First, you mentioned the AI race and this framing. Traditionally, if you want to mobilize investment in something, you need to present a business case that investors will invest in. With the framing of the AI race, it's being framed instead as existential for the country. Yes. Right. And this is interesting because it gets back to American understandings of digital sovereignty, and sovereignty itself: if somehow, in this framing of the race, the US is not out front and does not win it, then that's it for the US. Therefore you need an all-of-society response, where the government mobilizes private capital to make sure the US comes out in front. And this is interesting, because you mentioned that people might associate this language with the tech leaders who surrounded Trump and attended his inauguration, right? This is their messaging.
SPEAKER_01Yes, exactly.
SPEAKER_00The messaging, yes. But it was the Biden administration that came out with an AI strategy, and interestingly, the foreword framed the AI race using the exact same language, and some of the exact same people you're talking about were involved in crafting the report that came out of a Biden administration presidential commission. So there has been a significant amount of continuity in this framing of the AI race. But the Trump administration is doing a lot of things differently, not necessarily here, but in terms of AI governance: how it understands AI risks, AI safety institutes, and also international cooperation when it comes to digital technologies. One of the things the previous US administration supported was, for example, the UN Global Digital Compact, which was a high watermark for international cooperation on AI and sustainable development, along with some of the more ambitious goals that framed AI as a catalyst in a win-win technological race that could decrease global inequality rather than increase it. In terms of AI governance more broadly, the previous administration had a very different approach to risk and safety. It was embedding the US into a growing network of AI safety institutes, and it was part of conversations that went beyond civil governance of AI to military applications of AI and the like. The Trump administration has taken a very different approach; US policy has completely reversed on these issues. And policy is less predictable: the policy process of the previous administration has not been replicated.
There used to be a long-standing policy process; now US policy can be one thing one day, and then, I don't know the right verb, the president Truths something out, and something is radically changed. This is interesting in the sense that it's not just the Trump administration in the US; there is an increasingly personalistic approach to governance among world leaders today. As mentioned, with Takaichi's election in Japan there's a lot of focus on the prime minister as a person, forgetting that there's a cabinet office, a political party, and ministries that are all influential in these processes. And of course Xi is very different in China, but there are all these constraints regardless of which leader we are talking about. That's the grey area of the process you were describing. So for people looking at this from the outside, what you'd want to do is ask: what is being identified as the policy problem, or what are leaders talking about as the problem? That's a good indicator that will tell you a lot about what the subsequent policy process will look like. And that's why I think conversations about digital sovereignty are very important, because they sit at that stage: what's the problem? What is it we are trying to achieve? There are different visions of that, and some of them might actually be harmful, in the sense that the things that could have made you more sovereign in the past are actually going to increase your vulnerabilities and dependencies in the present. And what does this mean for the average person? Well, I think this is very important.
I think this is something people don't think about enough. When it comes to issues we might think are very important, like data privacy or cybersecurity, individual users might think, well, what's the harm? To use a parallel: think about OpenAI and ChatGPT's subscription models. People upload all sorts of personally identifying information and potentially proprietary information for all sorts of reasons, and they don't think much about what that means for them. In fact, for most people there probably won't be huge individual repercussions. But at an aggregate level, this provides one particular firm with a whole lot of data it otherwise would never have had access to. And you mentioned Malaysia earlier; other countries are thinking, well, if all that culturally specific knowledge about our society goes into training something we have no insight into or control over, something that one day, if relations break down between our countries, could be used in ways we're not thinking about, then that is a concern. For those following these conversations, there are different levels of individual stakes at play. This is much more important than some other policy debates you might or might not be following, because of its knock-on effects. And when I say knock-on effects: if you look at policy responses, they might actually make it more difficult to work with collaborators in different countries.
In a lot of these co-creation spaces on the internet, you don't really know much about the other people helping to refine something. So you might find yourself in a situation where you're just somebody in the US or Australia working on something in a collaborative space, but you've been working with somebody in China, and you might then find that you've fallen foul of some domestic legislation aimed at enhancing your country's own digital sovereignty. What would the effect be? Probably rather chilling on all of those creative spaces, on people's ability to engage in them. People will simply say, okay, I don't want to risk this, and tune out, but the spaces will still exist. So for those involved at the cutting edge of this sector, these are really important conversations, because they might affect your ability to go about creating what you have been creating, in a way you might not have thought about, and it might creep up on you out of the blue: oh my gosh, I didn't realize this was even something somebody should not be doing. That's why these broader conversations are incredibly valuable. So in terms of that grey space, what to look out for is how the problem is being formulated, how it's being articulated. Of course, you won't get insight into the back room, but that will tell you a lot about the direction of travel. So look at how different countries talk about digital sovereignty. Yeah. Right.
SPEAKER_01And if we think about AI sovereignty, or even just a country's approach to AI, it requires one fundamental thing: it requires the society to get behind it. Yes. It requires every single person in the society to say, yes, this is for us, we're going to use it. It's not enough to develop it; people have to adopt it. The actual adoption of the technology is also key. And as we've mentioned in this podcast a few times, the past can certainly inform, but it can't form, where we're going tomorrow. I think the informative thing we can take from the past about getting people behind systems is: why did people believe in their country? Let's take America. Americans love their flag, they love the American dream, because they saw the results. They saw: if I do these things, I then get there. It was a very linear path. They achieved it, people around them achieved it, people on the other side of the country achieved it. It became: this is a thing we can do together, and I trust that if I get involved in this system, it'll take me there. So what would the digital American dream look like? And you can replace America with whichever country you like. How are you going to get your society to adopt the technology? And then the results factor is also quite interesting, because the Americans have that rags-to-riches story they love, and they've seen it multiple times. I don't think people have seen that yet with AI. They've seen big companies get even richer through investment from other big companies.
But I don't think we've yet seen the rags-to-riches story that helps people believe and trust in a system, the way they have in the past.
SPEAKER_00Right. This is interesting. There are a couple of elements to this question. One relates to the future of work in AI, as in, will anybody like me have a job, and the anxieties that creates. As I mentioned, there's a large push to center AI in the broader conversation of US technological competition, like the space race, where a tremendous amount of government money was put in to make sure that while the US wasn't the first in space, it would be the first on the moon. You have a similar mobilization here. I think the question of adoption is less of a challenge for a country like the US, in part because the US dominates the platforms that bring things to market. Meaning, for example, if you were a small country pushing your own foundation model and you had adversarial relations with the US, you're not going to be on the App Store.
unknownRight.
SPEAKER_02Right, right.
SPEAKER_00Right. A Cuban foundation model, for example, if one existed, would not end up there. And because that broader platform ecosystem is right now so US-dominated, that's less of a challenge. But the second part, which you mentioned, is going to be a problem. People perceive that there's this AI technological race, and that AI is creating fabulous amounts of wealth for the big players in the industry. But from the perspective of a university computer science graduate in 2026, there are no jobs that would have existed just five years ago. And from the perspective of those already working in these fields, companies are getting smaller in terms of workforce, not larger. I'd have to go back and double-check this, but even these large data centers that consume vast amounts of electricity and process vast amounts of data, visiting one is not like visiting a factory; very few human beings need to work there. You're using a large amount of land and a large amount of resources, but comparatively you're not really creating jobs by bringing data centers to your state, if you're in the US. You're bringing a different kind of investment, but not necessarily creating jobs and wealth at the lower end. And in a country with significantly greater inequality than Japan, this is going to be a challenge for how we move forward. Of course, this is not the first time the US has experienced this kind of economic dislocation. We were talking about steel earlier: go to the former steel-producing parts of the US today, and a lot of them are long-term economically depressed communities.
What is AI going to do for the average person, in the context of that broad American dream you described so eloquently earlier? That really remains to be seen. In order to continue to mobilize this kind of consistent support, one would need to address it. And here's what's interesting. Despite the fact that, as you mentioned, we tend to focus on leaders today, and despite the Trump administration's reflexive distrust of regulation in this sector, at the state level in the US you have a whole host of initiatives to mitigate risk, or to do things as simple as requiring that if AI is interacting with a customer, it should be disclosed that you are interacting with AI. At that level, there seems to be a growing consensus that this type of governance is necessary. And interestingly, when there was an attempt to put this into another piece of legislation, there were opponents from Donald Trump's own party who said, wait a second, this is something that would help people. So there is a growing conversation about this, and there are only going to be more people talking about it as the real effects of AI begin to be felt more broadly in the economy.
SPEAKER_01We're big on AI literacy, right? But minimum viable literacy: what's the minimum viable literacy you need in AI in order to be a contributing member of society, or to thrive? This is something we often ask.
SPEAKER_00Okay. One thing that sometimes surprises me in conversations, and this gets back a little to the beginning of our conversation about societal breakdowns and societal transformation, is that for users with very little literacy, meaning this is just something else they've downloaded on their phone, just another app, there isn't an understanding of how the data they're sharing is being used, or of how the model thinks. Right.
SPEAKER_01Well, at either end, right? There's a lack of understanding of how good it can be and of how risky it can be. It's both ends.
SPEAKER_00Yes, exactly. You're using it for relatively simple and straightforward tasks. You don't understand its potential, but you're putting disproportionate trust into something you don't really understand, and you're not really using it the right way.
SPEAKER_02Either, yeah.
SPEAKER_00Right? Because you don't understand that it's not going to be able to help you if it hasn't been trained on any data relevant to whatever you're talking about, or to whatever conversation you're seeking to elicit from it. So I think it's about lifting the veil a little, without getting too technical, on how LLMs work, for example. And this is actually another element of a minimum level of AI literacy: when you say AI, people tend to immediately think of large language models.
SPEAKER_01Right. Only.
SPEAKER_00Only, exclusively.
SPEAKER_01Yeah.
SPEAKER_00Without understanding the larger ecosystem: think about computer vision, or all the other elements. And then people say, oh well, I don't really need to worry about AI, because I asked ChatGPT to create a map of my country and it put the cities in the wrong place. This is so stupid. So why are we even talking about security, or having all these other conversations about it? That in itself is a marker of somebody who's just being exposed to the technology, trying to learn, not really understanding it, but then coming to this broad, sweeping conclusion that we don't need to worry about it.
SPEAKER_01Yeah, it's like going back to the late 90s or early 2000s and saying this internet thing is so stupid, it takes so long to dial up, and not seeing the potential. What I'd like to start closing with is reflecting on where you have come from, which is international relations and watching societies rebuild. Sometimes there are obviously horrific things a society can go through, but we've seen time and time again in humanity that even after a fundamental shift, societies build back and thrive. And we've even seen some societies skip entire blocks of what we'd call linear development. For example, many countries in Africa, and even many in Southeast Asia, didn't do dial-up internet. They have no memories of going to bed and waiting for two songs to download; that was never their life. They went from nothing to running an entire business through WhatsApp. Super interesting. So not everyone has to have the same journey. So I'd love to think about what you've seen societies go through as they've rebuilt: what does a real thriving society have in common? They've gone through massive change and have now set themselves up for a real element of success, whatever that might look like. What are some lessons we might learn from that and apply to the changes society is really going through now?
SPEAKER_00Okay. So broadly speaking, one thing you do see in common in societies that have experienced significant traumatic events or dislocations, such as conflict, is the complete breakdown of trust. Not just trust between the person and the state, or some governing authority that might have abused that trust by committing serious human rights abuses or doing other things, but also at a very interpersonal level: not trusting your neighbors.
SPEAKER_01So, I lived in East Germany for a while. Well, I lived in Berlin for about ten years.
SPEAKER_00I didn't know. Yes, right, exactly. So this will resonate.
SPEAKER_01You could still feel it, even in the staircases.
SPEAKER_00Yes, exactly. It's: who is this person? Why are they talking to me? What do they want from me? How are they going to leverage this? All of these are symptoms of a breakdown in trust. And one thing that sometimes gets overlooked when thinking about transitions is that that element of trust needs to be rebuilt first. You can create a governmental commission or a public inquiry, you can do all of these things, but if nobody trusts the state, nobody wants to talk to the state, and that's going to limit the effectiveness of whatever you're trying to do at a macro policy level. And getting back to these changes: you mentioned that different countries have different experiences, and this also highlights different levels of trust in technology. You lived in Germany. Why are Germans so distrustful of technology? Why are conversations about data privacy, or cameras in public places, so much more visceral in a German context than they would be in Australia or anywhere in Southeast Asia, where people might just say, well, it's a public space, so why not? There are different equilibrium points of what cultures and societies will tolerate, and it's not one-size-fits-all.
So in the conversation on AI governance and AI futures: while the world is interconnected and these ecosystems are interdependent, what AI governance looks like in one place can, and very likely will, differ from another place. There may be greater tolerance in some places; some countries might see data centers, even fully privately owned and foreign-owned ones, as a marker of the linear development you mentioned, or as a way to skip developmental milestones we tend to associate with it. So we are in a world where lots of conversations are focused on digital sovereignty, there are lots of different visions of digital sovereignty, and we are very interconnected, but that landscape is constantly shifting beneath our feet. That means we need a better awareness of it, and that awareness will help us better understand policy shifts as they happen, and hopefully allow changes, when they do happen, not to come totally out of the blue and erode our trust in these processes. Although, of course, there probably will be some developments that will, and should, lead us to question our trust.
SPEAKER_01I think trust is perhaps the word connecting everything. For example, reflecting on the American dream: one of the reasons you'd ask, shall I participate in this new system or this new economy, is that people trust their participation is going to mean something, that it's going to result in something. So thinking about an AI economy, and what it takes for an AI economy to thrive: at DTJ we're always talking about literacy, but I think we're going to have to look at trust now, perhaps a little more deeply. I think it's always been there, but I don't know if we've put a stamp on it, put words to it, the way you have today.
SPEAKER_00It can be interesting, also, just in conversations: are there models that people intrinsically, for no good reason, trust more than others? Then ask why that is, and also, at what level would your trust stop?
SPEAKER_01So like um when is okay, when is not okay. Yeah.
SPEAKER_00Right, exactly. I recently had a conversation with a commercial airline pilot who does research on the certification of digital flight instruments. Of course, you can't have autonomous instruments right now in civil aviation. But the question really boils down to: will people trust getting on a plane one day without human pilots? Probably not anytime soon, when you think about all the social interactions that happen during a flight, and all the things that might happen, for example a medical emergency or a disruptive passenger, where today you have a human being as the captain who makes the decision on whether to land the plane or not.
SPEAKER_01That's interesting. Without opening up a whole other can of worms, I think the way different societies approach this is really interesting. For example, I remember we ran a hypothetical scenario with a few policymakers, not just Japanese policymakers but also some from abroad. Interestingly, most of the participants from Southeast Asia or Africa said they wouldn't use AI for invoices. And I thought, that's interesting, why would you not? Their reasoning was: if something goes wrong, who's responsible? And that had never crossed my own value stack. For me it was: yes, invoices, AI, yes, thank you.
SPEAKER_00Yes, but they were like, no: who's going to apologize? That was the thing, who's going to say sorry? And I thought, wow, so interesting. So in that sense, until you're able to think in those terms, right, and also, as I mentioned, there are different cultural perspectives on what trust means, how trust is achieved, how trust can be lost; all of these vary. And I think this is one of the challenges: a technology that tends to learn from data at an aggregate level might miss some of these nuances, and so you might find something that's very well suited to an American or Australian consumer but is deeply distrusted in a different part of the world, even though nothing is different in how it operates.
SPEAKER_01Chris, thank you so much for joining. I feel like this has to be a part one; I can feel a part two. Thank you so much. My reflections from today: certainly we can use the past to inform, but definitely not to form, tomorrow; we have to think very seriously about where we are. And then, as a society, we've done this all along: just try to be better than we were yesterday, and take that into the future. But also think very deeply about your own systems of trust, what that looks like, and how to apply it in your own teams and your own business. Because, as you mentioned, if we look at societies that are thriving, they trust their neighbors. So what steps can you take in your own environment to literally trust your neighbor?
SPEAKER_00Absolutely. This is definitely a conversation we should continue at some point, hopefully very soon. Thank you so much.
SPEAKER_01The Business Karaoke Podcast is brought to you by DTJ, Design Thinking Japan. In a world where you can build anything with AI now, knowing what to build is really the work. There are two ways you can come and work with us at DTJ. One: join our academy and build a real level of executive fluency. Two: work with us on our AI builds, where in 12 weeks we can have a custom AI solution up and running for your business. But the number one thing: everyone, go and download our 2026 Signals Report. It has some of the most meaningful trends in AI that you'll be seeing in 2026.