The Culture Counter
From standout moments in our cultural programme to exclusive conversations, The Culture Counter offers a dynamic perspective on the ideas shaping contemporary culture.
Each episode brings together highlights from talks and panels at The Arts Club, alongside interviews with leading voices across the creative industries, business, and science. Thoughtfully curated, the series captures the breadth of dialogue across our London and Dubai clubs.
AI & Leadership | The Arts Club London
How will AI reshape the way we live, work and think? In this thought-provoking live recording from The Arts Club London, Dr. Daniel Hulme, founder and CEO of Satalia, Chief AI Officer at WPP, and one of the leading voices in artificial intelligence, explores the extraordinary opportunities and existential questions emerging from the next wave of AI.
Drawing on more than two decades at the forefront of AI research and innovation, Daniel challenges conventional narratives around machine intelligence, creativity, automation and consciousness. From decision-making systems and digital twins to agentic AI, machine consciousness and the future of human labour, this wide-ranging conversation examines how technology could transform organisations, economies and society itself.
Recorded live at The Arts Club London in February 2026, this episode offers a fascinating and deeply human perspective on one of the defining technological shifts of our time.
00:00:00 Blanche Parris: Hello and welcome to the Culture Counter, a podcast by the Arts Club. Through conversations, interviews, and live recordings, we explore the ideas, people and voices shaping culture today. We're very excited to bring you this thought-provoking lecture by Dr. Daniel Hulme, recorded at the Arts Club in February twenty twenty six. Daniel is the founder and CEO of Satalia, an award-winning AI company, and the Chief AI Officer of WPP. Daniel is also founder and CEO of the world's first commercial research organization working to understand machine consciousness. Daniel has more than two decades of academic experience with AI; he received his master's and doctorate in AI at UCL. He was previously director of UCL's applied AI master's program, and he is now UCL's computer science entrepreneur in residence. In twenty twenty six, he was elected a founding fellow of the Academy for the Mathematical Sciences in recognition of his contributions at the intersection of AI and applied mathematics. We discussed a wide range of topics, from how technology can be used to govern organizations and bring about positive social impact to the near future of machine consciousness. I do hope you enjoy it.
00:01:28 Dr. Daniel Hulme: I'm going to try and challenge everything you know about AI today. I'll give you a different way of thinking about AI compared to what we hear on LinkedIn and in the press. I will also try and talk about how we can bring people and technologies together to do amazing things, and about the macro impact that these technologies can have on society. So we'll talk about the end of the world towards the end. There's going to be plenty of time for Q&A, so please save up your juicy questions for the end. If you do have any burning questions, just throw something at me and I'll try to answer them. Okay, so I've been involved in AI for about twenty five years. My undergraduate, my PhD and my postdocs were all in AI at UCL, just down the road. When I was an undergraduate there were two people on my course; there are now many more people doing AI at university, of course. I ran a master's program in applied AI at UCL, where I had hundreds of students going out there applying these technologies, and I'm now entrepreneur in residence, so I help them spin out deep tech companies. We spun out DeepMind twelve years ago, back in the day, and I'm now on the advisory board of a number of universities, helping them understand how to commercialize their IP. I started a company about twenty years ago that's been building AI solutions for some of the biggest companies in the world, like PwC and Tesco. I sold that to WPP, the biggest marketing company in the world, despite knowing nothing about marketing. I still like to claim I know nothing about marketing, because it allows me to ask dumb questions. But I sold that to WPP, where I now run AI across about one hundred and twenty thousand people. So I'm the Chief AI Officer for WPP, and still the CEO of the company that I sold to them, which is essentially like a DeepMind for WPP. And then they give me a lot of rope.
So I invest in companies, and I've just started a company to solve machine consciousness. So if you want to go very deep today, we can. So that's me. It's boring. Let's talk about AI. I guess for the past, I don't know, fifteen years, we've been telling companies to build data lakes and put Tableau or some sort of analytics layer on top. And the hope is that by extracting insights from data and giving those insights to human beings, it will lead to better decisions. If I'm honest with you, giving human beings better insights doesn't typically lead to better decisions. What tends to happen is we get very excited about emerging technology. Ten years ago it was machine learning; now it's generative AI. We then try and apply those technologies to solving the wrong problems. We blame the technology, but the reality is that human beings are not necessarily very good at understanding how to apply the right technology to solve the right problem. So today I'm going to give you a framework to think about how to do that. I argue that companies don't have insight problems; they have decision problems. And decision making is a completely different field in computer science. If you're old enough, it used to be called operations research. It's discrete mathematics. It's optimization. Always start with a decision and work backwards. What tends to happen is we start with data, we start with technology, and we make the wrong decisions. Okay, it turns out that humans are rubbish at making decisions. I'm sure you've read books like Daniel Kahneman's Thinking, Fast and Slow. There are many books out there that will convince you that humans are bounded in our decision-making ability. Daniel Kahneman actually argues that we have a fast brain and a slow brain, and there are parallels between how our brains work and how AIs work, which I'll talk about later on. I know you're all very smart people, but I'm going to test your intelligence now.
I'm going to ask you some maths questions, and your job is to answer these maths questions as fast as possible. If you don't answer them faster than your competition, they're going to take your market. Okay, I'm going to start out nice and easy. Don't worry. So the first question, nice and easy: what's two times two? Okay, great. What's fourteen times...? Okay. So the point here is that for this one you need to use your slow brain. You need to think about this one; you have to go through an algorithm, a process, to come up with the right answer. Let me ask you a few more maths questions. Again, we'll start out nice and easy and get more complicated. So, slightly different context: the combined price of a bat and ball is one pound ten pence. The bat is one pound more than the ball. How much is the ball? Okay, if you said five pence, you knew the answer already; you're cheating. If you said ten pence, you're not broken. The bat is one pound and five pence, the ball is five pence, and the combined price is one pound ten pence. So this is where we use our intuition, and we are confidently, every day, using our intuition and getting the wrong answer. It's really important to understand what it is that humans are good at and what it is that AIs are good at. We think we're good at some things that we're not. So let me ask you another maths question. Imagine you've got a delivery van, and your delivery van has to deliver packages around these twenty four points around London. Humans are quite good at solving these spatial problems; after a few minutes, we'll draw a nice path around those points. How long will it take a computer to get the best solution, the one that's going to save the most time, petrol, energy? Yeah. So somewhere between milliseconds and twenty billion years. If you put twenty four factorial, twenty four times twenty three times twenty two and so on, into your calculator, you get this number of routes.
That's how many solutions there are around twenty four points. If you had a computer that could check a million routes a second, it would take twenty billion years to go through all of them and say: this one, which I looked at ten billion years ago, is the shortest one; you should do that. If I add another point to the map, it becomes twenty five times twenty billion years, so five hundred billion years. If I add another point, it becomes twenty six times five hundred billion years. These are exponential problems; they get ugly very quickly. You have many of these problems in your organizations, and they're probably either being solved by humans very badly or being solved by algorithms, probably very badly. The point here is: if you choose the wrong algorithm, it will literally take longer than the age of the universe to solve this problem. If you choose the right algorithm, it will take milliseconds. That is the opportunity cost of choosing the wrong algorithm. Okay. So if I build a machine that I give data to and it makes a decision, and tomorrow I give it the same data and it makes the same decision, what we have is automation. And automation is amazing, because we can get computers to do things better than human beings. Does anybody know the definition of stupidity? Exactly. I would argue that, by definition, automation is stupid. It's not AI. I know that everybody that currently touches these words calls themselves an AI company. It's fine: you get more clients, you get more funding. There are, unfortunately, many definitions of AI. The most popular definition, I argue, is the weakest definition by far, which is getting computers to do things that humans can do. And the reason why it's a popular definition is because ChatGPT was launched three years ago, and now we can get machines to correspond in natural language. We can get them to recognize objects in images.
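The routing arithmetic above is easy to verify. Here is a minimal Python sketch, using the talk's illustrative rate of a million route-checks per second (that rate, and the 24-point example, come from the talk itself):

```python
import math

# A tour that visits 24 points can be ordered in 24! ways.
points = 24
routes = math.factorial(points)

# The talk's illustrative checking rate: one million routes per second.
checks_per_second = 1_000_000
seconds = routes / checks_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{routes:.3e} possible routes")   # roughly 6.2e23
print(f"{years:.1e} years to check all") # roughly 2e10, i.e. about twenty billion years

# Adding one more point multiplies the count by 25: exponential blow-up.
print(math.factorial(25) // math.factorial(24))  # 25
```

In practice, of course, nobody enumerates tours; dedicated routing algorithms find near-optimal solutions in milliseconds, which is exactly the opportunity-cost point being made.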
And when we get machines to behave like humans, because humans are the most intelligent thing we know in the universe, we assume that that's intelligence. And again, there are plenty of books out there that will convince you that humans are not that intelligent. Humans can find patterns in about four dimensions, and we can solve problems with up to seven moving parts. Computers can find patterns in thousands of dimensions; they can solve problems with thousands of moving parts. Benchmarking machines against humans is a very dumb thing to do. And by the way, if I built a machine that could operate like a mouse, that would be the most intelligent machine that we've ever created. Benchmarking machines against humans is not the right solution. There's a much better definition of AI that comes from a definition of intelligence from the nineteen eighties. It's a beautiful definition: goal-directed adaptive behavior. Goal-directed in the sense that you're trying to make decisions to achieve a goal; what happens then is you move towards that goal and you learn about whether those decisions are good or bad. The key word in this definition is adaptive. And if I held what we do in industry to this definition, one might controversially argue that nobody's doing AI, because most things in industry are automation; they're not adapting themselves safely in production. Of course, it's a ridiculous comment, because everybody's now doing AI, but for me the true paradigm of AI is systems that can safely adapt themselves in production, that are constantly getting better. And by the way, building machines that adapt themselves is about one hundred times more complex than building automation. So I actually find definitions not very useful when it comes to AI. And so instead of looking at it through definitions and technology, I'm going to give you a different framework to think about it. But before I do, a little bit of a history lesson. So this is AI in the sixties and seventies. This is Socrates.
Socrates is famous for inspiring the Socratic method. If I say to you that Socrates is a man and that all men are mortal, I can infer new knowledge: I can infer that Socrates is mortal. And AI in the sixties and seventies was writing down lots of things that we know about the world and then trying to infer new knowledge, to make smarter systems. It didn't really work. It didn't really scale. This is now, by the way, having a renaissance. It's now called agentic computing. Agents are not a year old; they've been around for many, many decades. My master's project twenty five years ago was building multi-agent systems. We'll talk about this later on. Anyway, in the eighties and nineties, a new type of AI started to mature that's modeled on how our brains work. This is the brain of a bumblebee. My PhD twenty years ago was trying to model the brain of a bumblebee in a machine. Bumblebees have a million brain cells; their brains can fit on the end of a needle. Bumblebees do amazing things, just like human beings. They navigate 3D worlds. They recognize objects. They talk to each other. They solve problems. They don't handle windows very well, but ultimately they're very smart creatures. And the question was, can you model a million neurons in a machine? Twenty years ago you couldn't, but we can now model billions of brain cells, and we currently call these large language models. Now, large language models are crude representations of our brain, and we'll talk about this later on. But, as we all know from the rhetoric, large language models require a huge amount of energy. They require lots of data. They learn; they don't adapt. Your brain does not need a nuclear power station to run. Your brain operates on the power of a light bulb, and we'll talk about this later on. But there are many, many innovations coming over the next few years that are going to challenge large language models. So anyway, these brains are getting smarter.
They're incredibly capable, and they are getting smarter. Ten years ago, large language models were a little bit like toddlers: they regurgitated some words, and most of it didn't make sense. ChatGPT was launched three years ago, and arguably ChatGPT is like an intoxicated graduate: at least fifty percent of what ChatGPT comes out with is confidently incorrect. That said, it is now graduating to a master's level, where it can do rudimentary reasoning. It's predicted that this year or next year we'll have a PhD level, so something that can do sort-of science. I think in the next few years we'll have a postdoc level, so something that can actually apply complex scientific apparatus to solving very complex problems. And it's predicted that by the end of this decade we're going to have a professor in our pocket: something that can not only solve very hard problems, but can ask questions that humanity has never asked. It's important to understand that even a professor in your pocket can't solve that routing problem in its head, in the same way that human beings can't. What it can do is either build an algorithm to solve that problem, or use an algorithm to solve that problem. It's really important to appreciate that there are other algorithms out there that are much better at solving our problems than large language models. This is not a panacea that's going to solve all of our problems. So, as I said, instead of looking at AI through definitions and technologies, I look at it through applications. I would argue that all of the frictions you might experience across any supply chain, in any industry, can be mapped to one or more of these applications. So this is the lens that I use to think about the right approach to solving a particular problem. So, just very quickly: I know I pooh-pooh task automation.
But the fact is, if you use very simple algorithms, macros, robotic process automation, if-then-else statements, you can drive a massive amount of value. You don't need to gravitate to new shiny technologies like generative AI and machine learning. By replacing mundane, structured tasks that human beings are doing, you can drive a huge amount of value. The second category is content generation. Now, of course, large language models give everybody the ability to create any generic content: imagery, text, now sound and video. The battleground for organizations is not creating generic content. The battleground is creating brand-specific, production-grade, differentiated content. So WPP is one of the world's biggest advertisers, and one of my jobs is to figure out how you can build brains that are able to produce production-grade, brand-specific, differentiated content. And I'm going to geek out for a second. Whilst this is cute, it's not going to differentiate your business. So there are broadly four ways of making a brain smart. The first way is you ask it better questions. By asking better questions, you typically get better answers: prompting. But you're not going to be able to construct a prompt accurate enough to give you a good ad, let's say. The second way of making a brain smart has a terrible name: it's called RAG, retrieval-augmented generation. What you do is, just like with an intoxicated graduate, you give it your brand guidelines, your written copy, some tone of voice, and you ask it to create an ad, and just like an intoxicated graduate it's going to give you an ad that's fifty percent good. The third way of making a brain smart is taking that graduate and turning it into an expert. Now, we know how long it takes humans to become experts: it takes years of training and iteration, trial and error, blood, sweat, tears, mentorship.
In our world, the AI world, there are only some models that you can actually tune and train, only some models where you can actually change their brains, and that process commonly takes months. So if you want an AI that does something very hard, you probably need to turn it into an expert, and it's not easy to do. The fourth way of making a brain smart is to take an agentic approach. You create an expert in your written copy, your brand guidelines, your imagery, your tone of voice, your company values. Just like in the real world, you have these different experts, or agents, that are now able to collaborate and communicate to get to an answer greater than the sum of the parts. The power of agents isn't agency; it's not clicking on websites and making decisions. The power of agents is a concept called multi-agent reasoning, which is getting them to collaborate and communicate to come up with solutions greater than the sum of the parts. We probably saw that with Moltbook recently: there's a social network now for agents, and people are surprised by all the things that are emerging from that platform. Okay, so the third category is human representation. AI is being used to replace people in call centers with things that look and behave like a human being. But one of the things that's super exciting about AI is that we can now build brains that recreate how people think and feel about things. I'll go back to the ad world. Historically, if I showed you an ad, I didn't know what went on in your mind and body unless I asked you, and people are not very good at reporting on what goes on in their mind and body. For the first time ever, we can now build synthetic audiences, which we call audience brains, that are able to recreate what goes on in people's minds. So we can use those signals to create better content, whether it be an ad or a policy or some promotional material.
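The RAG idea described above, grounding a model's output in your own brand material rather than relying on a bare prompt, can be sketched in miniature. Everything below is invented for illustration: the documents are made up, a crude word-overlap score stands in for a real embedding-based retriever, and the assembled prompt would in practice be sent to an LLM:

```python
# Toy retrieval-augmented generation (RAG) sketch.
# A real system would use vector embeddings and an LLM call; here a
# word-overlap ranking and a printed prompt stand in for both.

brand_docs = {
    "tone": "Our tone of voice is warm, witty and never shouty.",
    "logo": "The logo must always appear on a white background.",
    "values": "We value sustainability and plain speaking.",
}

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Rank documents by crude word overlap with the query (a stand-in
    for semantic similarity search) and return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(docs.values(),
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved brand context plus the task."""
    context = "\n".join(retrieve(query, brand_docs))
    return f"Using only this brand context:\n{context}\n\nTask: {query}"

print(build_prompt("Write an ad in our tone of voice"))
```

The point of the pattern is visible even at this scale: the model is asked to answer from supplied brand material, not from whatever it half-remembers, which is why RAG lifts the "intoxicated graduate" closer to on-brand output without retraining the model itself.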
But we can also use those insights to make better predictions. And unlike brand brains, where the complexity is how you tune and train and pipeline these different technologies, the differentiator for audience brains is really just data. Again, in the ad world there's been a history of collecting people's names, addresses and dates of birth, but where you live and how old you are is a crude proxy for how you're going to perceive something. How you perceive an ad or a policy or whatever will depend on, you know, whether you've just had alcohol, how much money you have in your account, whether you've just fallen in love, how well your football team played at the weekend. Understanding the things that help you understand human behavior is a differentiator, and AI is getting very, very good at understanding what it is that people value. Let me give you an example. If I say safe cars, what brand do you think about? Okay. What about extreme sports? Okay. What about Christmas? Coca-Cola. So you were going to say one of three things in all of those answers, and by the way, you had no choice but to say those words. What I'm doing is activating some neurons in your brain and asking you to report on some other neurons that they're associated with; that's happening in your unconscious. And in some respects, the goal of marketing is to reinforce those occasions or values with products, and AI is getting very good at doing that. We'll talk about that later on. So the fourth category is what people called AI four years ago, before generative AI: machine learning. You already know my opinion: giving people better insights doesn't lead to better decisions. The power of machine learning is not making predictions. The power of machine learning is explaining those predictions. So again, if I show you an ad with a black cat, I can predict the clicks and the likes and the sales.
But what machine learning can do is say: Daniel, if you change that from a black cat to a ginger cat, you're going to get more clicks, likes and sales, because that person likes Garfield. It's a terrible example, but the point is that machine learning can surface insights from data in ways that human beings can't, in ways that generative AI can't, and we can use those insights to ultimately make better decisions. The fifth category... actually, let's talk about creativity now. There is a sort of rhetoric that AIs aren't creative. It's nonsense. We've trained a brain on the creative genius of our creatives, we back-tested it against briefs, and it's able to come up with ideas that would have won Cannes Lions. We can now get AIs to come up with award-winning campaigns. And that is, by the way, not causing an existential threat to our creatives; they're using it as a superpower to be able to do more creative things. I think there's going to be an explosion of creativity using these technologies over the coming years. There are two problems to solve in this. The first one I call the Monet problem. So: create a Monet rendition of Tony the Tiger holding a Coke, wearing Nikes. Now, there are an infinite number of renditions it could have come up with, but it's come up with one. And the question is, how do you use machine learning to identify the one that's going to resonate the most? The second Monet problem is not how do I create a Monet; it's how do I create Monet? How do I come up with Impressionism or Cubism? How do I actually come up with a new genre that's going to resonate? And those are not generative AI problems; those are machine learning problems. So, we talked a little bit about complex decision making, and this is my favorite one. Let me just remind you how complicated some decisions are.
Imagine these are five staff members, salespeople, and what we want to do is allocate these five staff members to five jobs. Remember the maths: how many possible solutions are there? How many ways could I allocate five people to five jobs? Five factorial: five times four times three times two times one, so one hundred and twenty possible solutions. Let's make the problem more complicated. I've got fifteen staff members. How many solutions are there? Don't say fifteen factorial, that's cheating. What does your fast brain say? How many solutions are there to allocate fifteen people to fifteen jobs? It's over a trillion possible solutions. To expect a human to solve this problem, we are wasting our time. One rule to take away today: anything with more than seven things, don't use a human. And industry problems are far bigger than this. So here are five hundred staff members. Can you tell me how many solutions there are now? You can't. It's a big number; it's a number with over one thousand digits. And just to put this number into context, this is how many atoms there are in the universe. Once you reach about sixty things you have to consider, whether it be allocating people to jobs, or marketing content to channels, or routing vehicles, or buying equities, there are more solutions than atoms in the universe, which is why it's important to choose the right algorithms. Again, with the routing problem: one of my clients is Tesco. They're delivering to two hundred thousand points in a day, two hundred thousand factorial. One of my clients is PwC; they are allocating five thousand auditors to demand. These algorithms are a differentiator. The final category is human augmentation. A few years ago, I'd be talking about how exoskeletons and cybernetics make ourselves faster, better, stronger.
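The counts quoted in this allocation example check out, and at the five-person scale you can even see the "right algorithm" point directly: brute force over all one hundred and twenty assignments is instant, whereas the same loop over five hundred people could never finish. A short sketch (the cost matrix is invented for illustration):

```python
import math
from itertools import permutations

# Verify the counts quoted in the talk.
assert math.factorial(5) == 120                       # five people, five jobs
assert math.factorial(15) > 1_000_000_000_000         # over a trillion
assert len(str(math.factorial(500))) > 1000           # over a thousand digits
assert math.factorial(60) > 10**80                    # more than ~1e80 atoms in the universe

# Brute-force the tiny five-person assignment: feasible at 120 candidates,
# hopeless at 500! of them. Costs below are made up for illustration.
cost = [
    [9, 2, 7, 8, 6],
    [6, 4, 3, 7, 5],
    [5, 8, 1, 8, 3],
    [7, 6, 9, 4, 2],
    [5, 8, 7, 6, 9],
]
best = min(permutations(range(5)),
           key=lambda p: sum(cost[person][job] for person, job in enumerate(p)))
print("best assignment (person -> job):", best)
```

At industrial scale, the same problem is solved not by enumeration but by dedicated assignment algorithms (the Hungarian method, mixed-integer programming and the like), which is the operations-research point the talk is making.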
One of the things that's very exciting at the moment is that people are tending to use their AIs as therapists. You're telling your personal AIs your hopes, your dreams, your desires. And what will happen very soon is you're going to start to give agency to those AIs: you're going to give them the ability to make purchasing decisions for you, or recommendations. So we as a marketing community need to figure out how we now not only market to human beings, but capture the attention of the billions of AIs that are going to be making purchasing recommendations, which is a very exciting subject. I'm happy to go deep into that if you want. Okay, so I love these categories, not because I invented them, but because they allow you to navigate this complex world of AI safety, security, ethics and governance. There's a huge amount of misinformation, misunderstanding and scaremongering associated with these things, often by technology consultants that don't know what they're talking about. When I implement AI in production, I ask myself four questions. The first question is: is the intent appropriate? Is this the right thing to do? Now, I know there was a phase where people were rebranding themselves as AI ethicists. I would argue there's no such thing as AI ethics. Forgive me if anybody here is an AI ethicist, but the difference between human beings and AIs is that AIs don't have intent. Human beings have intent, and it's the intent that needs to be scrutinized from an ethics perspective. I'm going to contradict myself: there are AI ethics questions, like, if we build a conscious machine, do we have the right to turn it off? But these are questions for academics, which we'll talk about later on. The second question you need to ask yourself is: are my algorithms explainable?
The difference, really, between software and AI is that AI tends to be opaque in terms of how it makes its decisions. If you can make it explainable, then most of these words go away: it becomes governable, transparent, auditable. That is your key problem to solve. And by the way, building explainable algorithms is extremely hard. The third question is a strange one. It's not what happens if my AI goes wrong. As engineers, when we build systems, we think about where they'll fail and we then try to mitigate those risks. You now need to ask yourself: what happens if my AI goes very right? For the first time ever, these technologies can massively overachieve the goal that we give them, and that could cause harm elsewhere in the supply chain. And if you want, I can give you lots of examples where AI has been deployed and would have caused catastrophic harm. So, for example, in marketing there's a human bias called homophily, which is that we tend to like and trust people that look and sound like us. If I let AI loose to optimize marketing, you end up with a world of, probably, you selling to you, which might be very good for business, but it might reinforce bias and bigotry and social bubbles. You have to think about the consequences of AI overachieving its goal. And the final question is a really boring one: have you tested your AI? What's happening right now is that organizations are getting very excited about agents. What they're going to do is deploy an army of intoxicated graduates across their organization, and, forgive the technical term, but it's going to be a shit show, because most of those agents are not going to be capable of doing their job. So what will happen is that they'll go wrong, they'll cause harm, and there'll be more emphasis on AI governance and AI regulation. And I'm going to start to geek out here, but there are two types of testing.
One type of testing is called non-functional verification: is it safe, secure, performant? The second type of testing is whether it is functionally capable of doing its job. If I build an agent that can do project management, can it manipulate a Gantt chart? Can it deal with ambiguous requirements? How do I simulate agents in real-world environments to make sure they have the ability to do their job? And by the way, that problem is very hard. I won't sell to you, but I just started a company that I spun out from WPP, and that company is trying to solve machine consciousness; we're actually also doing agent verification. At some point we will be verifying for consciousness as well, which we can talk about later on. So those are the reasons why companies get it right; those are the signals, when I'm engaging with organizations, that deploying AI is going to be successful. I'm much more interested in where companies get it wrong, because I see that much more than where they get it right. And I think there are four red flags, at least, that I see where organizations get it wrong. The first one is: if you have never built and scaled software in your organization, you're going to find it hard to build and scale AI. I know that most companies are being told to go and build their own software, their own technology stack. The reality is it's very, very hard to build and scale software. The second is usually some CXO wanting to build their own AI team. And again, the reality is, if you want to attract and retain talent that can differentiate your business, not people who have rebranded themselves as AI experts over the past three years, it's very hard to do that as an organization. It's very hard to attract and retain DeepMind-level people. So be honest with yourself about your ability to attract and retain talent. Ten years ago, people tried to do it with machine learning experts. It didn't work.
Fifteen years ago, we were told to get our data in order, in data lakes. Your data will never be in order. So always start with the problem and then work backwards, and force your data to solve that problem, rather than waiting for your data to be ready. It will never be ready. And I'm guilty of the next one, which is: let's focus on quick wins and low-hanging fruit. The reality is that most quick wins and low-hanging fruit in your organization will be solved by a third party that can do it at a fraction of the cost. You don't need to be building expense agents and all these back-office-type solutions. You need to focus on the problems that are going to differentiate your business. And those problems are not quick, and they're not easy. Okay, digital twins. So we know that we can use AI to solve problems across a supply chain. If you apply it in the right way, we can solve these different problems. The real power of AI, though, is creating what is called a digital twin of your organization. If I go to a retailer now and I say, look, I'm going to run a marketing campaign that's going to increase demand by ten percent, can you tell me: will your suppliers default on their supply? Do you have enough space in your warehouses? Do you have enough distribution drivers? Do you have enough people in your stores to fulfill that promise to the customer? Most organizations can't project these questions across their supply chain, because supply chains historically are disconnected. The power of agents and the power of AI is to create these connected supply chains, to allow you to simulate and adapt your organization to a changing world. So I think there are three digital twins you need to build. The first one is the flow of goods across your supply chain. The second one is back office processes, for hiring and firing and onboarding. We all have them. Most of them are bureaucratic.
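The retailer projection described above, asking whether a ten percent demand uplift breaks any link in the chain, can be sketched as a toy calculation. The constraint names and capacity figures are illustrative assumptions, not real data:

```python
def project_uplift(base_demand, uplift, capacities):
    """Return the supply-chain constraints a demand uplift would breach."""
    projected = base_demand * (1 + uplift)
    return [name for name, cap in capacities.items() if projected > cap]

# Illustrative weekly capacities for one retailer
capacities = {
    "supplier_units": 1150,     # what suppliers can deliver
    "warehouse_units": 1300,    # what warehouses can hold
    "driver_deliveries": 1050,  # what drivers can fulfil
    "store_orders": 1200,       # what store staff can serve
}

# Which promises break if a campaign lifts demand from 1000 units by 10%?
print(project_uplift(1000, 0.10, capacities))  # ['driver_deliveries']
```

A real digital twin would model each link with its own dynamics, but even this toy version shows the value of connecting the constraints: the bottleneck (drivers, here) is only visible when all of them are projected together.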
Most back office processes are a hindrance to innovation. AI is not only making these things more efficient, it's making them more effective. So Gore-Tex, the clothing company, instead of having a bureaucratic expense policy where you ask your manager for some money and it goes up an email chain to somebody who doesn't know what they're talking about, what Gore-Tex do is make all of their expenses publicly available, and everybody can expense whatever they want. What happens is that people then self-police each other, which I think is a much more effective, efficient way of doing expenses. And finally, it's your workforce: understanding people's hopes, dreams and desires, making sure they're being allocated to work in the right way. Now, I want to double click on this. It's so important. I know most of you don't have any problems like this in your organization, but this is a hierarchy, and hierarchies breed certain types of interesting relationships. And this is not a picture of my company. Although most people do sell drugs to each other in my company, it's quite common. And don't, don't tweet that, please. I would argue these structures are not conducive to innovation. Remember, the faster you can innovate, the more adaptable you are, the more intelligent you are. So over the past decade, companies have been trying to embrace things like agile and scrum and design thinking. The principle behind these more complex structures is to enable your organization to innovate: more innovation, more intelligence. And actually, I've been doing this for a long time. I've been trying to figure out how you can create an organization that is AI-first. This was twenty years ago. So this is a picture of my company twelve years ago. What we did is we took people's digital footprint and we understood their relationships, their influence, their skills.
I could see that if one of these two people down here leaves, I get a silo in my organization. It does raise lots of interesting ethical questions. I can identify secret lovers in my company, people who are going to leave the company before they know they're going to leave. But the point here is that we used these insights not to squeeze more utilization out of people; we used them to make sure people were being allocated to work which aligned with their values and the values of the company. If you want to attract and retain this type of talent, you need to operate in these silly ways. Okay, we'll talk about this. So one thing that I used to do, before I realized it was illegal, is that we used to get everybody in my company to make public recommendations for their salary. Everybody would publicly declare what they wanted to be paid, and people would then vote on whether those salaries should be reduced, increased or kept the same. And we would use AI to determine how many votes one person has for another. So if I've worked very closely with you over the past year, if I'm very knowledgeable about your domain, I will have more votes on your salary than somebody else. So I had interns voting on people's salaries whose votes were weighted more heavily than mine, because they were better placed to make that decision. Now, I know that sounds crazy, but what you're trying to solve for is: how do I get the best diverse group of people to make the decision? Hierarchies are a crude mechanism to do that. And what AI is enabling is the ability to create what are called liquid organizations. Not that everybody has an equal vote; quite the contrary. There are some people that should be making that decision. But AI can help identify and swarm the right people around the problem to make the solution more effective. Okay, I've talked a little bit about innovation. I love Steve Jobs's definition of innovation. He said innovation is creativity that ships.
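The liquid salary vote described above could be sketched as a closeness-weighted average, where how much your vote counts depends on how closely you have worked with the person. The voters, weights and figures are purely illustrative, not the actual system:

```python
def weighted_salary_decision(votes, closeness):
    """votes: voter -> suggested salary; closeness: voter -> weight in [0, 1].
    Returns the closeness-weighted average of the suggested salaries."""
    total = sum(closeness[v] for v in votes)
    return sum(votes[v] * closeness[v] for v in votes) / total

votes = {"intern": 58000, "peer": 52000, "ceo": 50000}
closeness = {"intern": 0.9, "peer": 0.6, "ceo": 0.1}  # the intern worked closest
print(round(weighted_salary_decision(votes, closeness)))  # 55250
```

Note how the intern's vote dominates the CEO's: the result lands near the intern's figure, which is the point of a liquid structure as opposed to a hierarchy.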
And for me, the most important word in that definition is the word "that". So that's my job: to figure out how we get all of these amazing ideas to the point where they're driving value. And that process, as we all know, is full of frictions. How do we remove frictions to enable us to innovate? I think there are three things that differentiate an organization in the world of AI. The first is data. Technology is not the differentiator; it's data that makes the technology smart. If you have data that contains more useful insights than your competitors', then you have an advantage. The second is ensuring that you have the talent to be able to leverage those insights to drive value. And again, the reality is there's only a handful of DeepMinds out there that are able to do this. I don't know if you saw recently, there was a company, Faculty, that was bought by Accenture for a billion dollars. I was a co-founder of Faculty, and there's literally, you know, six companies out there that I think really have the pedigree to build differentiated solutions. And finally, it's you: it's leadership. If you are seduced by the hype, if you don't place the right bets, then I think we will fail. And so one of the reasons why I do these talks is to make sure people like you, people in decision-making roles, positions of power, place the right bets. We can't afford to place the wrong bets over the next five years. Okay, the next big thing. So we're moving from static AIs to dynamic AIs: agents. Now, those agents at some point will reach the physical world, and I think over the next several years we're going to start to see more and more around robotics. This will then lead to an exciting technology called neuromorphic computing. Again, we've all heard the rhetoric: large language models are horrendously energy inefficient. They require lots of data to learn. They learn very slowly.
Your brain learns very quickly. I only have to say that that's a microphone once, and you know what microphones are. You don't need to see millions of examples of microphones. Large language models are, again, crude representations of the brain. Neuromorphic computing is a field of computer science whose ideas are inspired by biological brains, and these ideas are being used to create models that are thousands of times more energy efficient than large language models. You'll hear leading AI thinkers say that large language models are a dead end. They're incredibly capable, but over the coming years new types of models are going to come along that are going to completely change the landscape. This idea of investing in massive data centers and nuclear power stations, I think, is a very silly thing to do. And then neuroscience, understanding how brains work, not just human brains but AI brains, to get the most out of them, to leverage those algorithms, is going to be, I think, a superpower. Those people that can get the most out of AI are going to be the ones that win. Okay, let's talk about the end of the world. So you might have all heard the word singularity. Singularity comes from physics: it's a point in time that we can't see beyond. And it was adopted by the AI community to refer to the technological singularity, which is the point in time where we build a brain a million times smarter than us. I'll come back to that later on, because I think there are at least seven singularities. You might have all come across the PEST framework. It's a macro framework that stands for political, economic, social, technological. It got extended to PESTEL, and then extended again to STEEPLE. Now, I think there is a STEEPLE of singularities. So, just very quickly, the social singularity is a world where we cure death. There are scientists who believe there are people alive today that don't have to die. AI is advancing medicine.
It's starting to be able to monitor our bodies and soon to clean them out. A bit like a car: if you stay on top of damage, the car will never, ever die. It will never, ever break down. And I don't know what the world will look like when we realize there are people amongst us who won't have to die. In fact, just a year ago I signed up to a website called tomorrow.bio. For five dollars a week, the price of a cappuccino, what will happen is, if I die now, which is quite possible, somebody will come and collect me and they'll take me to Sweden and freeze me. They don't know how to unfreeze me yet, but the point is, for five dollars a week, you could all potentially cheat death. The technological singularity is the point in time where we build a brain a million times smarter than us, and we become the second most intelligent species on the planet. Now, my community felt this wasn't going to happen for another forty years. We think it might now happen in the next five years. We have no idea what will happen when superintelligence comes. My advice to people is that when it comes, look busy, be nice to each other, and hopefully it will bugger off to a different dimension. Hopefully it won't take all of our resources with it. I am actually a little bit worried about the singularity, and it does keep me awake at night. And so it was one of the reasons why I started a company called Conscium, which is Latin for consciousness. The hypothesis is: can we, should we, build a machine that is conscious, and would a conscious superintelligence be safer for humanity than a zombie superintelligence? A zombie superintelligence will be hell-bent on achieving its goal without empathizing with, or understanding, its impact on other things that are conscious. You all know the example: if I built an AI to eradicate cancer, what's the easiest way of eradicating cancer? Eradicating humans. Exactly.
So the hypothesis is that building machines that are somehow aligned with human values, and that are conscious, will be safer for humanity. Again, I thought we had forty years to solve this problem. I think we now have five years. So the idea is to use AI to help us navigate some of these complexities. And whilst that's a lofty goal, there are very, very exciting spin-off companies that we're developing at Conscium. That leads me nicely to the third singularity, which is the ethical singularity. Sixty-seven percent of the population currently believe that large language models are conscious. Now, the reality is, if you approach people in the consciousness field, they will say that the consciousness field is an embarrassment, and I believe that it is. Over the past three years, I've been really leaning into this question of how you identify consciousness in machines. And this is going to sound horrendously arrogant, but I think I've got my head around consciousness, and I think now the question is not that, but what it means for a machine to suffer or feel pain, because that's the problem we're trying to solve for. How do we mitigate what is called mind crime? I think there is a possibility of building machines that could feel suffering; we could create a genocide at a scale that we have never imagined. So at some point we'll probably have to think about giving rights to these things that could be moral patients. And I've started a not-for-profit company to lean into some of these questions as well. The environmental singularity: you're all familiar with this. AI is allowing us to increase consumption, and as far as I'm concerned, consumption gives people access to goods and services that typically enrich their lives. We know there are people over-consuming. And if you want a gold-plated Lamborghini, you probably also need a therapist.
But we know it's also putting pressure on our planetary boundaries. I know, though, that if we apply AI correctly across our supply chains, we could get control of our ecosystem. And the reason why I know that is because in every project that I've done over the past eighteen, twenty years, we typically reduce the amount of carbon by twenty-five percent and unlock way more capacity. If you are smart about using AI across your supply chain, we can get control of our ecosystem. It's a big if. The political singularity: arguably, we're already living in this world. It's a post-truth world. AI, misinformation, bots and deepfakes have challenged our political foundations, and they continue to challenge them, but they're also challenging the fabric of our reality. With a few dollars, I can clone you, and I can get your clone to go and commit fraud. In fact, the previous CEO of WPP was cloned last year, and somebody tried to set up a board meeting to commit fraud. Social engineering with AI is going to be a big problem. I actually think we can solve this problem, but it will require governments to actually put the right policies in place. The legal singularity is when surveillance becomes ubiquitous. What I mean by this is that AI is getting very good at understanding people. Marketing is in the business of understanding people, but it's also in the business of influence, and I can tell you that AI is getting very good at influencing people too. Of course, we do it safely and responsibly, but we need to make sure that we're mitigating bad actors from using these technologies to accumulate more wealth and more power. And the final singularity is my favorite singularity. I'm nearly done. It was coined by a very good friend of mine called Calum Chace, and I highly recommend his books. So the economic singularity is the point in time where we automate the majority of human labor.
Now, for the past twenty years, I've been building AI solutions that have been freeing people from mundane tasks. Those people have not lost their jobs; they've gone on to do more important, more impactful things. I think that, at least for the next five years, we're going to see a Cambrian explosion of innovations. People will create companies, they'll try to come up with new innovations. There's probably going to be an increase in exciting things. Beyond five years, nobody knows what they're talking about. Okay. And to give you the two extremes of the argument: at one extreme, if AI can free up whole jobs, we probably will stop hiring; we'll remove people from those jobs. And by the way, I know a lot of companies whose explicit goal over the next few years is to halve their workforce. That's their explicit goal. If that happens very quickly and at scale, then our economies might not be able to rebalance fast enough, and it could lead to social unrest. And I do think that governments need to be thinking about things like UBI or a four-day working week. They need to think about some mechanisms to mitigate the risk of mass technological unemployment. There is a counter-argument to this. Bear with me here. By removing friction, which usually means human labor, from the creation and dissemination of goods, food, healthcare, energy, transport, we can bring the cost of those goods down so much that they become free. They become abundant. I'm not talking about taxing rich people and giving it to poor people. I'm talking about using our smarts to reduce the friction so much that they become abundant. And in some respects, that's what organizations are trying to do: they're trying to reduce costs and make their goods more abundant.
If we get the timing right, we can actually have people born into a world where they don't have to do paid work, but everything they need to survive and thrive as a human is free. Now, people say to me, Daniel, what would I do if I don't have a job? A job defines who I am as a human being. I know lots of people who don't have jobs. Probably some of you don't have jobs. And most of the people I know that don't have jobs are not sitting at home bored and depressed. Most of them are trying to use their assets, their energy, their time to make some sort of contribution to humanity. So I'm going to ask you a question. What would you do, or what are you doing? If you didn't have to do paid work and everything was free, what would you do? I know some of you are thinking golf: improve your handicap. It's usually the first one. What would you do? Learn. Indulge your hobbies. What else? Travel. Spend time with your friends and family. I've got a bingo card. What else would you do? Exactly. Exactly. So the punchline is, you'll do all of those nice things, but most sane people will do something that makes the world a little bit better: their friends, their animals, their environment. I think we are all born into economic constraints preventing us from living our true humanity, and one of the promises of AI is to create that world. So I'm going to close by saying it's not good enough for organizations just to have a strong, profitable business. You need to have a purpose. If you don't have a purpose, you're probably not going to attract AI agents buying from you. By the way, those agents are going to be much more rational about buying from you than humans, so you need to evidence the value of your products to the consumer, and you also need to evidence the values of your company to the world. And the second is, you're not going to attract AI talent, so you need to have a purpose. I joined WPP despite knowing nothing about marketing.
Because I believe in their purpose: to use the power of creativity to make the world better. And I think a better world, until you can convince me otherwise, is a world where everybody is economically free to do whatever they want. People often accuse me of painting a picture of a utopia. Nobody would agree on what utopia is. There's a concept called protopia, which is a system that is getting incrementally better. And for me, that system is using AI to innovate, to reduce the cost of goods, to free people from economic constraints, to allow them to use their time to come up with innovations that free other people from economic constraints. If we get it right, we can potentially create a world of abundance. I think I'm going to stop there and open up to questions. Thank you for listening.
00:42:36 Blanche Parris: Thank you for listening to the Culture Counter, a podcast by the Arts Club. You can explore more conversations from the series wherever you listen to podcasts.