
AI and the Future of Work: Artificial Intelligence in the Workplace, Business, Ethics, HR, and IT for AI Enthusiasts, Leaders and Academics
Host Dan Turchin, PeopleReign CEO, explores how AI is changing the workplace. He interviews thought leaders and technologists from industry and academia who share their experiences and insights about artificial intelligence and what it means to be human in the era of AI-driven automation. Learn more about PeopleReign, the system of intelligence for IT and HR employee service: http://www.peoplereign.io.
Jad Tarifi, Integral AI CEO and former Google Research team lead, shares how to train AI to reason like humans
Today's guest is one of the pioneers in generative AI, having spent nine years at Google Research building teams that developed breakthrough technologies that led to innovations like the transformer architecture behind ChatGPT.
Jad Tarifi co-founded Integral AI in 2021 after a distinguished career in AI roles as a researcher and leader. He received his PhD in Computer Science and AI from the University of Florida and did his undergrad at the University of Waterloo.
Thanks to great former guest and friend of the podcast Hina Dixit from Samsung NEXT for the intro to Jad.
Listen and learn:
- Can machines learn common sense? Do humans have common sense?
- Why Integral AI is providing a “base model for the world”
- Can machines ever learn as quickly as humans?
- How to improve the efficiency of LLMs with better algorithms
- Why the current transformer architecture is poorly designed for next word prediction
- How to use AI and robotics to create “magic wands” and “crystal balls”
- How to use AI to do “science at scale”
- What are the ethical implications of bots that can change the human life span
- How AGI is related to objective morality
- Jad’s four tenets of a new definition of “freedom”
References in this episode…
- Integral.ai
- Blake Lemoine and the “sentience” debate
- Podcastle, generative AI for podcasts (a technology nobody needs)
And the reason you're able to do that is that you have this rich world model. And you don't have that world model innately; you have maybe priors on that world model, but you learn that world model through your experience in childhood and beyond. So common sense AI is building that world model.
Speaker 2:Good morning, good afternoon or good evening, depending on where you're listening. Welcome back to AI and the Future of Work. Thanks for making this one of the most downloaded podcasts about the future of work. If you enjoy what we do, please like, comment and share in your favorite podcast app, and we'll keep sharing great conversations like the one we have for today.
Speaker 2:I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. I'm also an investor and an advisor to more than 30 AI-first companies and a firm believer in the power of technology to make humans better. If you're passionate about changing the world with AI, or maybe just ready for your next adventure, let's talk. We learn from AI thought leaders weekly on this show, and, of course, the added bonus is you get one AI fun fact. Today's fun fact comes straight from my inbox: generative AI is coming to disrupt podcasts.
Speaker 2:I've published about 200 episodes of this podcast, so I am qualified to have strong opinions about the role of AI in media production, and specifically podcast production, which is why an email I received this week raised my eyebrows. Podcastle is pitching the ability of AI to, and I quote, automate podcast intros and outros, ad reads, voiceovers or even entire episodes. They can now be generated directly from a keyboard with as little as 70 pre-recorded sentences. Now, I call that crazy talk. That is a problem we don't need solved. Does anyone need or want those things that are so innate to the human condition automated?
Speaker 2:We've spent a lot of time together, me and you, our audience, getting to know each other, and to me, these genuine conversations that we get to have together each week bring out the best versions of what it means to be a human. The last thing I want to do as a listener, or, I imagine, you as well, is hear a bot transcribe or create content in the form of a podcast, certainly not about AI and the future of work. I think a bot-generated podcast is a terrible idea, and I, for one, am a firm believer that we should keep the humanness in the podcasting industry. In this case, I'm going to vote with my listening time and continue to listen to the podcasts that I feel are genuine and recorded with human beings. So thank you, Podcastle, but I'm going to turn down your generous offer to automate the production of AI and the Future of Work. I'll link to more information about Podcastle in the show notes, but for now, shifting to this week's great conversation, which is actually relevant to today's fun fact. Few have as much experience thinking about and defining the future of AI and its impact on humanity as today's guest. He's a deep thinker and an advocate for AI that is compassionate. We'll learn what that means and how you go about productizing it today.
Speaker 2:Jad Tarifi co-founded Integral AI in 2021 after a distinguished career in AI roles at Google. He received his PhD in computer science and AI from the University of Florida and did his undergrad at the University of Waterloo. Thanks to great former guest and friend of the podcast, Hina Dixit from Samsung NEXT, for the intro to Jad. And without further ado, Jad, it's my pleasure to welcome you to the podcast. Let's get started by having you maybe share a little bit more about your background and how you got into this space.
Speaker 1:Thank you, Dan. I'll start as genuinely as I can. Rather than being generated by Podcastle, I'll try to make this more of a spontaneous discussion. I grew up in Lebanon, and I was really infatuated with physics and math growing up, so that led me to do my master's degree in quantum computing. But around that time I was really thinking about the impact my work would have on the world and what was going to be my life's work, and it felt that while quantum computing is intellectually stimulating, it's fun, it's exciting to explore, it didn't feel like it would translate to meaningful social change. If you gave me a quantum computer right now, not much would actually change in the way we live our lives. It's certainly fascinating, but I questioned the impact. So I was looking for something that I could feel much more excited about, and during that time I was, in parallel, really fascinated with neuroscience, doing a lot of coursework and research in neuroscience and psychology.
Speaker 1:I looked into different aspects of philosophy, and I was doing meditation practice, which gave me some different perspectives on it, but it was all kind of a messy phase. There wasn't a strong foundation, so I decided to focus my PhD on this: building a kind of mathematical, theoretical foundation for intelligence from a computer science perspective. That's what I spent a few years working on, and my PhD thesis was on building that foundation for rigorous thinking about intelligence, including artificial intelligence and maybe even alien intelligence. And that led me to think of the next step after building those foundations. The next step was: how can I bring these ideas to life? And that's what Google helped me to do.
Speaker 1:I was very fortunate to be part of Google Research, which was a very, very special place. It had more researchers and brilliant people than, I would say, any university in computer science; Google Research has thousands of PhDs. And it was a time when a lot of the current breakthroughs in AI were actually conceived and invented. I'm not just talking about the transformer; I'm talking about many, many other breakthroughs that all happened there. So it was a really good place to be, and I was fortunate to be given a leadership role from the beginning and given a lot of support to explore my ideas and take the things that I'd worked on in the PhD and bring them to life. That was a really exciting journey. That's what got me here.
Speaker 2:You and I were getting to know each other when we were prepping for this, and we talked about your vision for common sense AI, which certainly in certain circles would be considered an oxymoron. What does that mean to you? And then, how is that part of the foundation of the company that you're building?
Speaker 1:Right. So I was lucky that the team I started actually built the first generative AI model at Google, and we initially focused on images and short videos. The goal, though, was never generation. Really, generative AI was a proxy for a deeper goal, which is understanding. Understanding is what we're really after, and generation, prediction, is one way to concretely measure that, to showcase that. Common sense AI tries to get closer to that ultimate goal of AI. So it's not just about generating. It's about making decisions, handling uncertainty, making predictions, understanding context, taking action, and also building an abstraction of the world we're living in, building these concepts in a kind of unsupervised way. That's really the key thing to common sense AI: it's growing beyond generative AI to do all the things that we think are a natural part of intelligence. And it seemed to me that once you have something like common sense AI, it's going to need to be built on a new kind of cultural foundation. It has so many different applications, one of them being robotics, another being real-world assistance, and then, finally, automated science, and these were very, very different areas from what Google was focusing on. So it felt natural that, in order to move this at the pace that is most aligned with all the wonderful impact it can achieve, it would need to be a separate entity from Google.
Speaker 1:So it was actually a tough choice for me. I didn't really want to do a startup; I was actually against doing a startup. I thought, if at all possible, I'd do it at Google, because then I wouldn't have to handle things like fundraising or deal with operational issues, HR issues. But at some point it felt that it was worth adding that extra overhead, because you get so much more speed and more capacity to actually create a better aligned culture for this type of company, this type of project. So, yeah, that's what led to the founding of our company.
Speaker 2:So there was this flare-up last year with a Google engineer named Blake Lemoine, who claimed that the LaMDA LLM was sentient, and then, after ChatGPT launched, a lot of the public's imagination was captured by things they saw LLMs do. It made it look like the machines were able to reason, and you and I know that what AI really is is math and stats at phenomenal scale; it's an exercise in prediction. One of the hardest things to teach a machine to do is reason, and you talk casually about common sense AI. Take the counterpoint on that one: how do you think we train a machine to have common sense like a human?
Speaker 1:Do humans have common sense? I mean, do we really reason? It seems, actually, that reason is something we work really hard on developing as humans, right? It seems that what we do naturally is closer to stats, statistical correlation, prediction. And then with some education, with some kind of disciplining, usually through external tools like pen and paper, through hard visualization... When you look at a human thinking and reasoning, they look like they're in pain. Their eyes are closed and they're squinting, and they're trying to wrestle with a machine that's not really designed for reasoning, to make it reason.
Speaker 1:But I'd argue that what we're designed to do is make the statistical correlations that help us do as good a job as possible in an unconstrained and messy world. And if we can do that, then the rest is easy for computers. Computers are really good at reasoning. It's always been the case that the messier problems are much harder for computers than the neat logical search, logical deduction type of problems. So what we need is kind of a base model to reason on top of, a base model for the world, and that's what common sense AI provides. It provides a model where you can say, okay, if I have this partial set of information, what are all the likely things that could happen next? Now, once you have that, then reasoning on top of it is just a loop of following those paths and seeing where they take you.
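To make that loop concrete, here is a minimal, illustrative sketch in Python. It is not Integral AI's system; the reason function, the world_model callable, and the toy example are hypothetical stand-ins for a learned base model of the world, used only to show how reasoning can be a search loop over the model's likely next states.

```python
import heapq
import itertools
import math
from typing import Callable, Hashable, Iterable, List, Optional, Tuple

State = Hashable

def reason(
    start: State,
    is_goal: Callable[[State], bool],
    world_model: Callable[[State], Iterable[Tuple[State, float]]],
    max_steps: int = 10_000,
) -> Optional[List[State]]:
    """Best-first search on top of a learned world model.

    world_model(state) stands in for the 'base model for the world':
    given a partial state, it returns likely next states with their
    probabilities. Reasoning is then just a loop that follows the most
    promising paths and sees where they lead.
    """
    tie = itertools.count()                        # tiebreaker so the heap never compares states directly
    frontier = [(0.0, next(tie), start, [start])]  # (cost, tiebreak, state, path)
    seen = set()
    for _ in range(max_steps):
        if not frontier:
            break
        cost, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt, prob in world_model(state):
            # Negative log-probability turns "most likely" into "lowest cost".
            step_cost = -math.log(max(prob, 1e-12))
            heapq.heappush(frontier, (cost + step_cost, next(tie), nxt, path + [nxt]))
    return None

# Toy usage: a "world" where each number most likely leads to n+1, sometimes n+2.
toy_model = lambda n: [(n + 1, 0.7), (n + 2, 0.3)]
print(reason(0, lambda n: n == 5, toy_model))  # [0, 1, 2, 3, 4, 5], the most likely path
```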
Speaker 2:Take the example of a toddler learning not to touch a hot stove. The human brain is so well wired that, you know, usually it only takes one or maybe two examples for the toddler to learn not to touch the hot stove, and that's a really hard computing task. Do you envision a time when a machine could learn how to, quote, not touch the hot stove as efficiently as the human brain can learn that in the body of a toddler?
Speaker 1:You actually give a particularly difficult example, but I do. In fact, I think it's something that we could have now. You could argue about how a human learns not to touch the stove. You've got the people who come from a behavioral perspective, known as reinforcement learning people in the machine learning community. They argue that a human has an inherent reward system, and when you touch the hot stove you get punished by that reward system, and then you're less likely to repeat it. That's one way, and it's a useful way of thinking about it. But I would argue, similar to how cognitive scientists argued against behavioral scientists in the 60s, that that's a limited way of thinking about it.
Speaker 1:In fact, if you required touching every single hot surface in order to learn not to touch it, it would take too many trials; you know, most people would be dead. So you need a way that's much more data efficient, and the way this would work in our case is that you would have a prior on the distribution of states that your body can be in, and that prior is kind of learned across evolutionary history in humans. So a hot stove is very surprising for you, right? What you want to do is maintain certain homeostatic parameters within your body, and to do that you need to learn a predictive model of the world.
Speaker 1:So you need to understand the world, make sense of it. And when you touch the hot stove for the first time, you learn to predict that this stove is going to throw your internal parameters into a very, very surprising, and therefore unpleasant, state.
Speaker 1:And you also learn to generalize that, most likely, stoves in general are to be avoided. Things that feel hot as you get close to them generally should be avoided. So you generalize beyond that one particular instance very, very quickly. And the reason you're able to do that is that you have this rich world model. You don't have that world model innately; you have maybe priors on that world model, but you learn that world model through your experience in childhood and beyond. So what common sense AI is giving you is that world model, and then learning that touching a hot stove is not good, or maybe desirable for some people, right, is something that you put on top of the world model. The world model gives you the ability to generalize and understand; what you're going to do with that just depends on the task, or on what your particular agent is designed to do.
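As a toy illustration of that idea only, not a claim about how Integral AI or the brain implements it, the sketch below gives an agent a feature-level predictive model of its internal state, so a single surprising experience generalizes to anything sharing the same features. The class name, feature strings, and thresholds are all invented for the example.

```python
class HomeostaticAgent:
    """Toy agent that learns to avoid actions after one surprising outcome.

    It predicts the disturbance to its internal state for each *feature* of a
    situation (e.g. "glowing", "stove") rather than for each exact situation,
    so one bad experience generalizes to anything sharing those features.
    """

    def __init__(self, comfort=37.0, surprise_threshold=5.0):
        self.comfort = comfort                    # expected internal temperature
        self.surprise_threshold = surprise_threshold
        self.predicted_shift = {}                 # feature -> predicted disturbance

    def observe(self, features, felt_temperature):
        """Update the predictive model after actually touching something."""
        disturbance = abs(felt_temperature - self.comfort)
        for f in features:
            # One-shot update: remember the worst disturbance seen per feature.
            self.predicted_shift[f] = max(self.predicted_shift.get(f, 0.0), disturbance)

    def should_avoid(self, features):
        """Predict before acting: avoid anything expected to be very surprising."""
        expected = max((self.predicted_shift.get(f, 0.0) for f in features), default=0.0)
        return expected > self.surprise_threshold


agent = HomeostaticAgent()
agent.observe({"stove", "glowing", "hot-when-near"}, felt_temperature=200.0)  # one bad experience
print(agent.should_avoid({"stove"}))                 # True: the stove itself
print(agent.should_avoid({"campfire", "glowing"}))   # True: generalizes via a shared feature
print(agent.should_avoid({"ice-cube"}))              # False: nothing surprising predicted
```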
Speaker 2:So you're a quantum computing guy, so I got to ask you this question.
Speaker 2:This seems like one of the fundamental limitations of AI relative to the human brain, which is just unimaginably efficient; I think it runs on, you know, whatever it is, 25 to 30 watts, and, again, that toddler can learn tasks that are very hard for a computer to learn. So it seems like one of the fundamental limitations of AI advancing to the level of what we think of as human-level common sense is that it needs to be vastly more power efficient, or we need to come up with some real technology breakthroughs, of which possibly quantum computing holds the secrets. Something very fundamental has to change before we can make these kinds of leaps in terms of what AI is capable of doing, in a way that's anywhere close to being affordable, or we're going to burn up the planet. That just seems to be a fundamental disconnect. Do you agree? And if so, when are we going to have those kinds of breakthroughs that are going to make that kind of AI at scale feasible?
Speaker 1:Yeah, so I would agree that the current approaches in public awareness are not sustainable, but actually we do have alternative approaches that we've been working on for years. In fact, the approaches we're working on at our company are scalable and efficient, and that's related to the idea of creating these explicit abstractions we were talking about earlier. It's also related to several fundamental concepts in the brain that have been exploited, but not to the full extent: things like sparsity, things like active learning. A lot of learning right now is being wasted. We're using way too many parameters, and then, when we actually use those parameters, we're activating them all the time; we should only be activating a sparse subset of them. And then we're overtraining on too much data. Comparatively, a human, you know, is trained on, I'd say, maybe 15 years of video data, roughly speaking, whereas YouTube has 1,000x more than that, about 15,000 years. So we just need better algorithms, and those are the ones that we're actually building here.
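The sparsity point can be made concrete with a small, generic sketch, assuming a plain fully connected layer rather than anything Integral AI actually uses: keeping only the top-k activations means most parameters contribute nothing to a given input, which is where the potential efficiency comes from.

```python
import numpy as np

def dense_layer(x, W, b):
    """Ordinary layer: every unit fires for every input."""
    return np.maximum(W @ x + b, 0.0)

def sparse_layer(x, W, b, k=32):
    """Same layer, but only the k most active units are kept.

    Everything else is zeroed, so downstream layers can skip those units
    entirely; the activation (and, with the right kernels, the compute)
    scales with k rather than with the layer width.
    """
    pre = W @ x + b
    out = np.zeros_like(pre)
    top = np.argpartition(pre, -k)[-k:]   # indices of the k largest pre-activations
    out[top] = np.maximum(pre[top], 0.0)
    return out

rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 512))
b = rng.normal(size=4096)
x = rng.normal(size=512)

print(np.count_nonzero(dense_layer(x, W, b)))    # roughly half of the 4096 units fire
print(np.count_nonzero(sparse_layer(x, W, b)))   # at most 32 units fire
```

In practice the gains only materialize if the kernels and hardware can actually skip the zeroed units, which is part of why this is an algorithms-and-architecture question rather than a one-line change.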
Speaker 2:So you said a bunch of things that are intriguing. Better algorithms to potentially optimize which parameters are used. Take, let's say, a 175 billion parameter GPT model. Talk me through it: how do you make an algorithm efficiently understand which parameters are important for which tasks?
Speaker 1:Right. So you have to start with the architecture. If the architecture is just a giant black box, an inscrutable matrix, there's very little you can do, right? So the first thing in designing an efficient algorithm is to make assumptions about the architecture, which are really assumptions about the world.
Speaker 1:Architectures mirror assumptions, or the priors you have about the world. The assumption we generally make is that things are decomposable, that there's modularity or hierarchy, more or less. And then the other assumption we make is that there's sparsity, so there are not so many things going on at any given point in time. Of all the possible things that can happen at any given point in time, only a few of them happen. And if you want to look at it more philosophically, maybe the conception of time is such that only a few things can happen, but I don't want to digress into philosophy right now. So once you have these assumptions about, you know, the modularity, the sparsity, then you can start actually designing the architecture to take advantage of these assumptions, right? And then you can design algorithms to do it. So in our case, for example, a 175 billion parameter model would be compressed by an algorithm by an order of 10 with similar performance, and with scale it gets much better.
Speaker 2:I thought the big innovation, and maybe this is naive, but you can set me straight, the big innovation behind a generative pre-trained transformer model is that all of the text gets vectorized, and so it's perfectly designed to predict the next word, because you quickly get to a cluster of words that are related and you can very efficiently predict the next word. That's all it does, but it does that super efficiently. So it sounds like what you're saying is that it's actually inefficient. If you could improve the performance, maybe you could get 175 billion parameter performance with a tenth of the parameters, and it seems like the current architecture is actually poorly designed. But what am I missing?
Speaker 1:Yeah, it's very poorly designed. And there's a lot of engineering work that goes into it, and I don't want to dismiss that; there's a lot of really great engineering that happens. But algorithmically it's just the simplest possible thing you can think of: you know, just have a transformer attend to everything in the past, and there's not that much structure, and few assumptions are built into it. What's interesting is that you don't need that much to get very far with language. With language, you can really do well with a poorly designed algorithm by just pumping money into it, which means data and compute.
Speaker 1:But this approach breaks down the moment you start getting into multi-modality, especially complex multi-modality that involves not just two modalities but three or more, and in domains where there's not enough data or there's not enough multi-modal data. So I'd say the current architectures are very poorly designed from an algorithmic perspective, although there's a lot of really great engineering work that goes into them, and I would argue that, no, they're not efficient at all. I mean, you mentioned how efficient the brain is, right, and the brain is ultimately running some algorithm, and it's able to make the next word prediction with much less power, with much less training data. So I don't think what I'm saying is controversial or surprising. It should be surprising, actually, that we've gone so far down this rabbit hole of data and compute.
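For listeners who want the mechanics behind this exchange, here is a bare-bones sketch of what "attend to everything in the past and predict the next word" means. Random weights stand in for trained ones, and positional information is omitted; real transformers stack many such layers with multiple heads and learned embeddings, so this only illustrates the attend-and-predict loop, not a usable model.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def next_token_logits(token_ids, embed, Wq, Wk, Wv, Wout):
    """One causal self-attention layer followed by a vocabulary projection.

    Every position attends to itself and everything before it (the causal
    mask), and the final position's representation is projected onto the
    vocabulary to score the next token.
    """
    x = embed[token_ids]                        # (T, d) token vectors: "the text gets vectorized"
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    T, d = x.shape
    scores = (Q @ K.T) / np.sqrt(d)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf                      # no attending to the future
    attended = softmax(scores) @ V
    return attended[-1] @ Wout                  # logits over the vocabulary

rng = np.random.default_rng(0)
vocab, d = 1000, 64
embed = rng.normal(size=(vocab, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Wout = rng.normal(size=(d, vocab))

context = np.array([5, 17, 42, 7])              # some token ids seen so far
probs = softmax(next_token_logits(context, embed, Wq, Wk, Wv, Wout))
print(int(probs.argmax()), float(probs.max()))  # most likely next token id and its probability
```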
Speaker 2:So let's say, at Integral, you achieve the breakthrough of common sense AI. What are some things that we could do with models built using Integral AI that wouldn't be possible or wouldn't be cost or performance efficient using current LLMs?
Speaker 1:So I'm very excited about real-world AI. I think I'm going to just leave LLMs to other people, other companies. They're doing great work on LLMs, and I think there's a lot there that's valuable for society. So I'm going to concede that world to them, and I'm focusing this company on the real world.
Speaker 1:The first natural problem in the real world is robotics, right? You move robots in the real world, you give them any high-level command, and then they make it happen. And I thought a lot about what kind of tool you can give any high-level command and have it get done for you. That tool exists in mythology; we call it the magic wand. So the ultimate vision for robots in my mind is a magic wand. And then there's another thing: we walk around and see a lot of things in the real world. Can we get some type of assistant, whether that's through an AR device or through a camera? Can we actually get an assistant to understand the real world around us and give us advice, recommendations, support? It's not as good of an analogy, but there I think the crystal ball is the magical metaphor, right? So imagine a world where people have magic wands and crystal balls. It's kind of a magical world, right? So these are two application areas that I'm very excited about.
Speaker 1:The third application area, which is made possible through robotics as a first step, is what I call automated science. I'm really excited about applying common sense AI to things like drug discovery and improving our understanding of materials science, because common sense AI will be able to take action, so do the experiments, but also understand the results of those experiments, build a world model for that particular domain, and therefore predict what experiment to do next. It's a process of active learning where the model kind of automates the scientific method in a particular domain. And I wonder how much we can accelerate science, which right now moves at the rate of, like, one postdoc at a time, if we can unlock the ability to do science at scale. I'm very interested in things like rejuvenation, reverse aging, cancer research, all the different things that I think will give us, as humans, long life and vitality. So I'd say this is a third really fascinating application area for me on this big project.
Speaker 2:Geoff Hinton says something that is always intriguing when I hear him say it: if we're smart enough to build artificial intelligence, we're smart enough to prevent that artificial intelligence from harming humanity. And yet when you talk about common sense AI applied that way, like the magic wand analogy, it's hard not to get a little bit dystopian and think: if you could instruct the bot to do anything you want, who applies the ethical framework for what's okay and what's not okay for the bot to do?
Speaker 1:Yeah, of course we think about that, and we've thought about it for 10-plus years. The main thing here goes deep into philosophy, but I'll give you a very shallow answer and then, if you're interested, I'll dig into a much deeper answer. The shallow answer is that we would like to have a universal framework, a cultural framework and an individual framework, so three layers on top of each other. There are some universal rules, things that our AI would never do. There are some rules that are left to the culture. In Japanese culture, for example, where I am right now, we value elderly people more, we value social harmony more, whereas in the US we value individualism, individual freedom, and it's not as black and white. So every culture will have to make up its own priorities, its own constitution, its own philosophy. And then there's an individual layer where, within that particular culture or cross-culturally, you should have a significant amount of autonomy to define your own personal culture for the AI. So I think about it at these three levels, and the AI would always follow the priority of universal, then culture, then individual. The deeper answer might get confusing to listeners if they don't have a background in philosophy, but I am actually working on a model for this.
Speaker 1:I'm working on a paper on this. You know, there's a philosophical history of attempts to define objective morality. Many people have tried to do that, and actually the history of the late 19th century and early 20th century is an example of how these things went wrong. People took these kinds of objective moralities to extremes, and, you know, we've had a lot of tragedy as a result. So after that, philosophy had kind of a traumatic response, where we kind of abandoned that dream of having a sense of objective morality. But I argue that AGI will force us to rethink the question of objective morality again, not going back to something as rigid and totalitarian as in the past, but something a bit more open-ended and expressive. And I'm working on a concept like that.
Speaker 1:I call that concept freedom, but it's a reinterpretation of our traditional notion of freedom. In that concept, freedom is an ideal, not something to be achieved, right? It's like a point at infinity; it's not something that you can ever hope to reach. But having it gives you kind of a North Star, a guide for what to do. And that sense of freedom involves goodness and responsibility and understanding and wisdom as essential parts of it. So it's not freedom as opposed to responsibility; it's joining them together. That ideal of freedom involves, I would say, the nexus of four different things. One is ultimate knowing, so knowing everything. Ultimate goodness, which is minimizing harm, minimizing waste. Ultimate power, which is being able to do everything. And ultimate vitality, which means everlasting life. And I don't know if you noticed, but they correspond a little bit to some of the things I'm interested in with AI, right? Power, knowledge, vitality. To me, aspiring towards that is a potential candidate for an objective morality.
Speaker 2:I've got to ask at least one follow-up to that. Doesn't that ultimately give the algorithm developer god-like powers to essentially define what objective morality is? It's inherently subjective, because someone needs to define how the machine behaves in certain circumstances, and that becomes kind of a way to codify objective morality.
Speaker 1:Yeah, I agree, but the counterpoint is that we already have god-like power, so I wouldn't want to abdicate responsibility and say no. We have to somehow come together and agree and discuss and reach a conclusion that this is a shared vision for us, because we don't want to abdicate responsibility, but we also don't want to take matters completely into our own personal programming hands, right?
Speaker 2:So that's the insight: it's the collective that needs to own that definition of objective morality. All the principles you're describing make so much sense, but there needs to be some distributed ownership of how they get codified.
Speaker 1:Absolutely, yes, and I'm working on an article on precisely that: the idea of freedom and the idea of distributing how we go about moving towards freedom.
Speaker 1:And in fact there's an interesting perspective there which lets you reinterpret the notion of money.
Speaker 1:So money is traditionally thought of as, you know, a unit of account, a store of value, blah, blah, blah. But really, if we can all somehow agree to a collective vision, to a kind of objective morality, you can then define money as simply deviations from freedom. And it's a very clean definition; everything we know about money, including things like supply and demand, emerges out of that more fundamental definition. But this fundamental definition actually has more interesting applications, like how we account for economic externalities, which is going to become a huge issue when we have AGI. Imagine you have 10 billion humans running around with magic wands in their hands; how much impact on nature, you know? So we're definitely going to need to do a better job of understanding and accounting for economic externalities, whether they're in terms of the attention economy, or nature, or entropy generally. So we're going to have a lot of very interesting conversations moving forward as a society, and I think we have to start. My hope is to contribute to this conversation with the upcoming paper.
Speaker 2:When your freedom paper is draft-ready, I'd love to see it, and I'd love to share it with our audience, if that would be okay with you.
Speaker 1:Sounds great.
Speaker 2:Good. We're going way over time, but I'm going to ask you one last question before I let you off the hot seat. You were surrounded by some of the most brilliant minds on the planet, arguably, I know that's a big statement, at Google Research. Share with us one thing that you think the world doesn't yet know, something that's kind of being incubated at Google Research, something that we can look forward to in the years ahead.
Speaker 1:I don't think I'm ethically allowed to share something like that, but I will release myself from the hot seat by saying that Google Research is extremely dynamic and moves very, very fast, so I really doubt that anything I remember from two to three years ago is still relevant today. They're a bunch of brilliant people, and they're very motivated and they care a lot. I know they get a lot of heat for being maybe slow, but I think it comes down to the fact that they feel this immense sense of responsibility, and I have a lot of respect for these people.
Speaker 2:When we have you back for another version of this discussion, let's say in a couple of years, two, three years, what's something we'll be talking about as just commonplace that today would seem like science fiction?
Speaker 1:It's hard to give an answer to this question that will age well, because it seems that we're really approaching AGI, and the closer we are to AGI, the harder it is to see. It's kind of like the metaphor of the event horizon of a black hole; it's getting harder and harder to predict beyond a certain event horizon. But the thing I'm most excited about, well, if I'm most excited about it, that's what I'd be working on, right, and that is exactly what I'm working on. So the thing I believe is going to be most relevant is that we're going to see these AI models actually affect the real world, with everything from robotics to real-world assistance to the automation of science, and they're going to contribute a bigger and bigger percentage of our economy. I think the impact is going to be largely positive, although that's not guaranteed; we have to do our best.
Speaker 2:Well, Jad, there's a lot to talk about. I hope you'll come back and we can continue the conversation.
Speaker 1:Sure, yeah, this was interesting. I love spontaneous discussions that, you know, go wherever they go, so anytime.
Speaker 2:With your permission, we went off script. If you prefer, next conversation we can stick to the script, but this was fascinating. Actually, much better, much better that way. Good, well, I really enjoyed this, and I look forward to staying in contact and certainly supporting the work that you and the team are doing. So where can the audience learn more about you and Integral?
Speaker 1:Yeah, so Integral.ai is our website. I'm in the process of spreading more awareness about what we're doing, so, you know, my email is there, Jad at Integral.ai, and I'm happy to go through our company deck. We were in stealth mode for two years, but we are coming out of stealth at the end of this quarter, so expect some exciting announcements. In the meantime, just feel free to reach out to me directly.
Speaker 2:Good. Well, I'll have you know that this podcast makes unicorns, so wishing you the best, and I hope good things come from this discussion and a lot of people get excited about what you're doing. Thanks for hanging out and thanks for doing this. I know it's the start of your day in Japan; I appreciate you hanging out. Thank you, have a good evening. Well, gosh, that's a wrap for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin of PeopleReign, and of course, we're back next week with another fascinating guest.