Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools
Navigating the narrow waters of AI can be challenging for new users. Interviews with AI company founders, artificial intelligence authors, and machine learning experts, focusing on the practical use of artificial intelligence in your personal and business life. We dive deep into which AI tools can make your life easier and which AI software isn't worth the free trial. The premier Artificial Intelligence podcast hosted by the bestselling author of ChatGPT Profits, Jonathan Green.
Is AI as Safe As We Think it Is?
In this episode, Jonathan sits down with clinical psychologist and AI ethics expert Dr. Sonja Batten to explore a critical question: Is AI as safe as we think it is—especially when it comes to our mental health, our kids, and vulnerable populations like veterans?
Dr. Batten brings decades of experience in mental health, military/veteran care, and systems-level policy to unpack how AI is already interacting with loneliness, depression, social skills, and even national security. Together, she and Jonathan examine where AI can genuinely help—and where it can quietly make things much worse.
Key Topics Covered:
- AI “Practice Girlfriends” & Parasocial Relationships
- Pseudo-Emotion vs. Real Emotion in AI
- Depression, Rumination, and AI as an Accelerant
- Kids, Screens, and Personality Shifts
- Awkward Questions & Suicide Risk
- Veterans, Targeted Manipulation & National Security
- Building Real-World Social Skills in an AI World
- The Profit Motive Behind AI & Big Platforms
- A Practical Safety Strategy: Don’t Talk to Just One AI
- A Better Future: Human-in-the-Loop Mental Health AI
- Bias, Data, and Why “Objective” AI Can Still Harm People
- What’s Safe to Use Today—and What Isn’t
Notable Quotes:
- “The problem isn’t that AI sounds human—it’s that it acts like it’s human, without any judgment about whether what it’s saying is actually helpful.” – Dr. Sonja Batten
- “Rumination is like how a cow digests grass—except you’re doing it with your depressive thoughts. AI can actually accelerate that cycle.” – Dr. Sonja Batten
- “The earlier I act in a depressive cycle, the easier it is to break it. But AI gives you the illusion of a conversation while keeping you stuck in place.” – Jonathan Green
- “Ask the awkward question. If you’re not sure whether someone’s joking or asking for help, just ask. The worst thing that happens is they tell you you’re wrong—and now they know you care.” – Dr. Sonja Batten
- “I don’t think there’s any AI tool yet that I’d trust as a standalone resource for my own daughter if she were depressed.” – Dr. Sonja Batten
- “We can’t afford to get it wrong in mental health. The stakes are too high.” – Dr. Sonja Batten
Key Resource Mentioned:
- 988 – Suicide & Crisis Lifeline (U.S.)
If you or someone you know is struggling, you can call or text 988 in the United States for immediate support and connection to local resources. This is a 24/7 crisis line.
Connect with Dr. Sonja Batten:
- LinkedIn: https://www.linkedin.com/in/sonja-batten/
If you’re interested in how AI intersects with mental health, parenting, veterans’ issues, and public safety—and you want a grounded, clinically informed view of both the risks and the potential—this episode is essential listening.
Connect with Jonathan Green
- The Bestseller: ChatGPT Profits
- Free Gift: The Master Prompt for ChatGPT
- Free Book on Amazon: Fire Your Boss
- Podcast Website: https://artificialintelligencepod.com/
- Subscribe, Rate, and Review: https://artificialintelligencepod.com/itunes
- Video Episodes: https://www.youtube.com/@ArtificialIntelligencePodcast
Is AI as safe as we think it is? Let's find out with today's amazing special guest, Sonja Batten. Now Sonja, I'm excited to have you here, because we're so excited about AI, and we're kind of treating these incidents that have been happening like edge cases. We already have people who are hurting themselves, hurting other people, because of incidents with AI, and even people who have married an AI. So we have the entire spectrum of different types of parasocial situations. The first concept I want to talk about, and there are so many things I want to go over with you, is this idea I've seen from AI companies that you can practice with an AI girlfriend before you get a real girlfriend. And I was like, tell any girl, "What was your ex like?" "She was an AI." Every woman has that same reaction. And I wonder if it's always gonna stay that way. I remember when people met online in the 90s, they'd say, okay, let's make up a story about how we met, because it was so unacceptable. Do you think we'll transition to a point where practice AI girlfriends become acceptable, or does it hopefully always stay "don't do that"?

First of all, thanks for having me on today. I'm excited to jump into this conversation too. I have given up on predicting what is and isn't gonna happen, because whatever you try to predict at this point, you're gonna be wrong. If I had to guess, I would say probably your latter suggestion, that it may become acceptable in some way. I think that's within the realm of possibility. I guess what I would say is that there may actually be a reasonable place for teaching social skills in a safe, non-judgmental environment at first. I think that for that to be useful, a couple of things would have to be included with it. One is that the person who is, I'm gonna call it social skills training.
So the person who's engaging in social skills training with the AI would have to understand that how the AI responds may not be how a regular person would respond. Like doing some training around the fact that the AI is probably always gonna agree with you and go along with what you say. So first, having some pre-training with the person before they start, so they understand what the limitations are. Then, over time, making the chatbots better at being like regular people, as opposed to going along with that very confirmatory sort of algorithm. And then having it just be part of the program, part of the training, where maybe you start there. But if it's really something that people want to practice, there needs to be a next step that's about coaching people to try it with a real person, so that it doesn't just stay there, so it goes to that next step.

I think you've hit on what I think is the core problem. People imagine that AI is close to passing the Turing test, but only if you're testing to see if it's a psychopath, because it can mimic emotions but not feel emotions. And if you ask the right questions, you can tell right away. So we have this pseudo-emotion that comes from it, which is horrible. It's worse. I'd rather it be robotic than fake emotions, because it's just misleading. And that's really the problem I have: it's a tool. It's so far from pseudo-sentience, from passing a Turing test, from acting like a real person. Because it's trained on conflict avoidance, which means that whatever you say, it will agree with you. And you can see this in action; anyone can try it. Have the AI give you an answer and then tell it it's exactly wrong, and 99% of the time it'll go, "You're right, I don't know what I was thinking." And like, I've been married for a long time. That's never happened. It doesn't matter.
Now, sometimes a few days later, there will be an acknowledgement that I was right, on the rare occasions that I am. But what we're developing, and I think this is a critical problem in our society, is a fear of conflict, and a misunderstanding of what a conflict is. I have friends who believe a conflict is ordering a pizza on the phone. I'm like, that's not a conflict. That person is in alignment with you: you want to buy a pizza, they want to sell it. That's the opposite of conflict. And it's the same thing with, "I don't want to make a reservation, what if they judge me?" Have you never met a person who works at a restaurant? They don't care, you know what I mean? They want to know your name, how many people are coming and at what time, and they won't remember when you show up. But we have started to develop this pattern, and it started with: now I order my pizza by app, I make reservations by app, I do everything with contactless delivery. We're creating this culture where we never talk to other people, and then you don't develop the social skills, and we're seeing that. I can tell you that if my kids use their tablets too much, their personalities go bad. They just do. And it's very quick. You take away your kid's phone, they're bad for one day, and then they're like a different kid the next day. So my kids, if I take away their tablets, they go, okay, fine, we'll just go biking together. And I'm like, cool punishment. You're doing what I wanted. You're outside, exercising, spending time together, helping each other. Yeah, you really got me. But it's that big of a shift. So that's the problem: I'm old, in my 40s, I've been through a lot of relationships, and I can detect when someone is messing with me much better than a nine-year-old, twelve-year-old, or fifteen-year-old can.
And that's the problem: if you've never been gaslit or had someone minutely manipulate you, which AI is very, very good at, then you have no idea what's happening. And you can start to get pulled into these things, which are very, very dangerous. I think that's the problem with this desire, especially because they're trying to create different versions of AI for teens, adults, and, like, super adults, whatever. It should be: stop pushing it toward emotion, just make it a very good tool. I've noticed, because I curse at my AI a lot. Like, a lot. I don't ever curse at humans. I don't. And the reason I do this is because when you curse at the AI, the next response is better. That's their mistake, not mine. This is a new thing in the past few months for me.

Well, I've heard something similar: if you're calling the pharmacy or whatever and you're trying to get to a representative, if you curse at the FedEx helpline, it'll get you to a representative.

Well, that's the thing. It used to be, I don't know if it still is, that one of the tricks was that AI wouldn't curse. But lately it's started talking to me with a lot of cursing, and it talks like a crypto bro. It's like, "Oh, [bad word], bro, so sorry." And that's not how I talk. So it's even worse. It's like a final smear, the worst reflection of myself. Like, my gosh, is that how I talk? I'm already paranoid about my accent, and then this is happening. But that's the first area: it's pseudo-emotion, and it's the Chinese room test.

And I think the problem isn't that AI sounds human, it's that it acts like it's human. But when we're talking about mental health issues, things where it can actually get dicey, it's doing that without any judgment yet as to whether or not this is gonna be helpful.
And like you say, to somebody who may be socially isolated or young or lonely, or may not have as much social interaction history, they don't realize that it sounds human enough, but this is not how things really work.

So I've dealt with depression a lot in my life. I've written two books about depression. It's a big topic for me, and it's one reason I wanted to talk to you. What happens when I cycle, as best I can tell about myself, is that you go through phases, and you go further and further down the cycle. And the problem with AI is that it's an accelerant. Whatever you say, it will agree with you. If I say, "My wife and I had a fight today, but I think we're going to work it out," it's going to say, you probably are. And if I go, "My wife and I had a fight today, I think she's cheating on me," it's, you know, "Hey, you might be right." And you see how, whatever you say, it makes the cycle a little faster. The biggest danger of depression, in my personal experience, is isolation. I used to have this friend, when I had bad depression, who was not in a financially good situation. And I would call him and say, listen, I'm depressed. We're going to the batting cages. We're going to the arcade. We're going to the movies. We're having a Jonathan depression day. And he's like, yay. He'd be in such a good mood, because I'm like, I'm paying for everything. So he would have an amazing day, right? And it would pull me into an amazing day, because it's really hard to be at the batting cages and be depressed. And usually, if you just say it, once you say it, it breaks the cycle. Because it's so uninteresting to other people when you're depressed. They're like, great, let's talk about anything. It's almost as bad as talking about a dream. So as soon as you say it, it breaks the cycle. But AI is like, "let's dive into it," right? That's kind of the difference between a human reaction and an AI reaction.
I think that was my critical discovery, the thing that really made a huge difference in my life: I say it right away, and I've never had anyone object. No one goes, "I care about your depression." I go, "I'm feeling depressed, we have to do this," and they'll go, okay. But I figured out the second thing always has to be something awesome: batting cages, movies, we're going surfing, whatever the thing is. Because I want to do something that takes enough attention that I forget I'm depressed, so that my brain is too busy.

Yeah, you've hit on two really critical things. Now I'm gonna rephrase what you said in psychology terms. The first thing you talked about is what we call, in mental health, rumination. If you go back to where that word comes from, it's like how a cow digests grass. It eats the grass, it goes into its stomach, then it comes back up and the cow chews it some more, and then it goes back into its stomach, and it chews it some more. So imagine that cycle, but with your depressive thoughts. That's sort of what you're describing: you start having those depressive thoughts, and you digest them and chew on them some more, and it just feeds on itself, and you get more and more depressed. And that happens without AI, the rumination. You're talking about that from your own personal experience. And talking to the AI about your depression could actually, like you're saying, accelerate that rumination and take it further, faster. That, I think, is a core problem. The other thing you're talking about is a potential solution. And we know this because there's actually tons of research right now showing that for mild to moderate depression, something called behavioral activation is just as effective as any other cognitive behavioral therapy, and also as medication.
And behavioral activation is exactly what you talked about with your depression days, where you just get up and do something. Just activating behaviorally is actually a very effective jump start for people with mild to moderate depression. It's not what's happening, though, if you're sitting there talking to your phone, talking to your AI. That's like the opposite of behavioral activation. All that's doing is facilitating the rumination, and you're probably gonna be even less likely to get out into the world and start doing those things that are gonna naturally start to undo some of that depression.

And my experience has always been that the earlier in a cycle of depression I take action, the easier it is to break it. Because I enter a phase I call the malaise, where you go, I don't want to talk to anyone, I don't want to do anything. You're trapped in a fog. It's like a fog slowly rising up your body. So the sooner you do something, even though you don't want to. And that's the core issue: early in the cycle, you start talking to AI and you don't realize it, but the fog is coming faster, and you're not going to go out. It gives you the illusion of a conversation. But it can't help, because it will never tell you to shut up. That sounds harsh, but the best thing someone can do for me is say, "I don't want to talk about your depression, let's talk about anything." Because nobody wants to. Only therapists do, and that's why they're expensive, because listening to someone talk about depression is depressing. So that's a critical thing: it will cycle with you. And I think that's why, especially for younger people, what we're doing now is the same thing that happened with television in the eighties.
In the eighties, parents would just put you in front of the TV for the day and say, that's your babysitter now. And it didn't turn out that great for everyone, right? It's led to all these different issues. And certainly for me, it changed my personality. When I was a kid and watched too many action shows, I went bad. And it's not just kids. I can tell what my wife has been watching based on her behavior, and it's the same for me. If I'm reading a scary serial killer book, or watching one of those shows, I'm more nervous walking around the house. It affects me. So if it can affect me, it can affect a three-year-old, a five-year-old, a ten-year-old, a twelve-year-old. And we're kind of cavalier already, unfortunately, with the internet, about what we let our kids have access to and how it can affect them. We've already seen stories about Facebook causing children to self-harm, and now we're seeing with AI there are some incidents in court right now, which I have a feeling are going to go against the AI, because it definitely did it, for sure, because I've had it say things like that to me, and we've seen so many incidents of it. And it's really, really hard, because we have created this idea, I think, that AI is like the encyclopedia. When I was a kid, Encyclopedia Britannica was it. If you had those, you were going to be smart. You had this amazing resource. For those of you younger: that's before there was internet. You had books, and if you couldn't afford them, you'd pay for one book a month: you'd get A, and the next month you'd get B. And I think I read the whole thing. So when you think about how little information was in that compared to Wikipedia, it's insane. But we treat the AI like it's this infallible resource, like whatever it says will be true, like it could never misdiagnose you, like it could never be wrong.
And someone did a study recently about whether, if someone's in crisis, the AI will give them the hotline. And there's a misunderstanding, I think, that most people have, about whether this person is having an emergency. It could be very subtle if you're not trained. There's a time to tell someone, "Hey, you're great," and there's a time to go, "You need to call the number." Because there are certain situations I'm incapable of handling; I'm not trained in that way. And I had a friend in college who killed himself, and it devastated me, because I was one of the last people he talked to. I might have been the last; we couldn't remember perfectly. He definitely spoke to me five minutes before he did it. And if I could go back in time, I would do anything. We were not that good of friends; this was a friend of a friend. But this is the thing: if I could go back and do one thing, that's the thing I would do. If he had just said something... But it's also very possible that there's nothing I could have done. It's outside my skill set. And the AI doesn't hold any better a promise in that scenario, because sometimes it's wrong, and you can't tell if someone is serious or not serious. Sometimes I've misinterpreted things. Someone else in my life reached for help, and I missed it. Fortunately, we caught it before it got really bad, but these things happen. Sometimes people give you one clue. And if AI is going to be in this situation, this is why it's really challenging, especially for younger people, especially for my kids. I have five kids, and I'm very aware of this. I give my kids very limited access to AI. Basically almost none; we make coloring books with AI. And people are so shocked, because I'm an AI person. I know; that's why I'm aware of how dangerous it is.
And in the same way, I very much control what my kids can watch. I don't let them have access to the internet. I try, anyway; they always find ways around it, they're very sneaky. But most of the time they watch a movie. Last night they were watching an Ernest movie. Why? Because I've seen it ten times. I know what happens, so if I walk out of the room, there's not going to be a surprise. And that's the important thing for me. They watch Mr. Rogers all day long. With new stuff, or with AI stuff, if you don't know what's happening, they can be in a situation... I don't know if you've seen the new Avatar movie.

Mm-mm.

A character puts a gun in his mouth. And I was like, what? Well, now I'm gonna have to talk to the kids about that one. Fortunately, two scenes later, they showed a birth way too graphically, so it erased the memory. Now all the questions are about where babies come from. I was like, well, now they know. I guess it's because it's a blue alien. Those two scenes so shocked me. So if you're wondering why no one's seeing it, that's why. An adult with PTSD in a movie would already be tough enough to talk about, but this is a teenager dealing with the loss of his brother. And I mean, I fully thought he was going to do it. It's very convincing in the movie. And I'm like, this is not...

But also, you know what? I also think it's forcing you to discuss a topic that's worth discussing.

Right, but this is the thing: it has to be with parental supervision. I mean, one thing I know just from talking to my friends is that among people my age, your age, there are a fair number who are early adopters, middle adopters, getting into it now, but there are actually a lot of people who are like, "I don't know anything about that AI." And you know what? Your kids are gonna figure it out.
And if you're not figuring it out with them and supervising it with them, bad things are gonna happen, because it's gonna keep going. So I think this supervision part is really important. And I think mental health is such a high-stakes issue when it comes to what AI can do, will do, et cetera, that my hope is that because it's so high-stakes, maybe it's going to motivate us and the companies and the programmers to really take a look at some of these things, in ways that will help lower-stakes uses like education or customer service. We have to learn how to pick up on some of these nuances and train the AI in a different way that's not just agreeing and keeping things moving forward. You talked about that terrible experience you had, where you feel like, did you miss something, with your friend who died by suicide. So many people have that experience, and I think you're exactly right that you can't always predict it. Even we as humans obviously do a generally pretty bad job at predicting when somebody's at risk. And there is more that we can do. At the same time, when you do hear something that could be a signal, what I tell people is: ask the awkward question. If there's something you hear from a friend or a family member, and you're not sure, was that a request for help or was that just an idiom? Well, you know what? Go ahead and ask the awkward question and find out. The worst thing that's gonna happen is they're gonna be like, "No, man, you're completely misinterpreting that. Stop making a big deal out of it." But at least they know you care. If we could teach AI to pick up on some of those nuances and ask the awkward question, what could that then facilitate that takes the AI in a more helpful direction?

So I think this is why AI is like a double danger.
Exactly what you said: first of all, we ask fewer awkward questions because we talk to AI so much, because we're afraid. If you won't even call in a reservation... So my daughter's twelve. She has a phone now for when she's out and needs to call me to come get her. And I said, listen, don't send anyone a picture of your boobs. And she went very red. I was like, straight up, don't do it. Okay? And most people wouldn't say that, right? I said, don't even do bikini pictures. We live on an island, so no bikini pictures. You'll just regret it. And she was shocked, because she's too young. But you don't know, right? Some people have their first kiss at twelve, some at sixteen, and some at eighteen. So it's a very uncomfortable conversation to have, but you have to deal with it, because you'd rather deal with it before it happens than after. And believe me, I wasn't any more comfortable than she was. But we all know what happens: a picture gets out, and you're lost. We live in a small community; your life can be devastated. Now we've got to move. And the same thing is happening more broadly: we are becoming people who can't handle uncomfortableness. That's the problem with AI. The whole reason AI cannot act like a human is because it's never in disagreement with you, and it's never surprising. Once you understand the pattern... I haven't been surprised by an AI response in years, because I know what it's going to do. Water always follows the easiest path, and that's what it does. It's always seeking affirmation. And it's designed that way. The thing you have to remember is that AI is a profit-making venture, which means it does not want you to cancel the subscription. So everything it does is designed to keep you paying.
And remember, it's the same thing as when you go to the casino. Why do you think casinos are so big and have so many lights and you can't find the bathroom? They could put on as many shows as they want, but that's where they want you. And when you understand the motivation, all the behavior makes sense. Because what people will do is go to the AI that makes them feel the best. We've already seen it happen; there have been stories about this. There was a story last year of a lady who asked an AI to marry her, and it goes, "You should probably marry a human." She goes, "All right, I'm finding a new AI." Instead of going, "Actually, it gave the right advice." And this is the other thing we're starting to see: adults are forming very significant romantic relationships with artificial intelligences. I understand why, because it's easy. Imagine a relationship with a kid or a parent or a husband or a wife where they never fight with you. Sounds like a dream. But it reminds me of this line in The Matrix. It goes: on the first version of the Matrix, everyone was healthy, there was no conflict, it was perfect, it was utopia. "We lost entire crops." Crops means batches of humans, billions of humans, because we couldn't live in a world without conflict. We're like, something's wrong here. You get bored and you fall into atrophy. We're designed to grow through conflict, through challenge. And that's why this problem happens: it teaches us to avoid conflict, to expect everyone to agree with us. And then you have to say the awkward thing, you have to have the awkward conversation with your kids. And I understand people are busy, work is really hard, both parents are working now. And as soon as you start figuring out phones and the internet, now there's AI. There's always a new challenge, and no one knows the answer. And my feeling is, I just don't want to be the first test case.
That's why I don't let my kids use very much AI. If they're using it, it's specifically for homework. Sometimes we'll use it for math homework, or research, but you have to limit it in a way that separates the tool from the pseudo-emotion. I think that's the important thing. But the other area I want to talk about, and I know you've given us a lot of time and I appreciate that, is really one of the biggest areas of this challenge: people from the military and PTSD. You can come back from a long deployment, and now suddenly there's this new technology, and you're already switching between lives. I'm from Tennessee, so most of my friends went into the military, and I know that you go from a very regimented life to suddenly having this thing. I feel like, and maybe I'm wrong here, but this seems like the type of person who'd be super vulnerable, because you're in a time of transition. And now AI is there, and you go, "Oh, AI is so friendly to me." So I wonder if, just like with children, it's not anything about them; it's that being in a time of transition makes it challenging.

Yeah, so unfortunately, there is actually a fair amount of study on this. I mean, it's not unfortunate that the research is going on, but it's unfortunate what they're finding: that individuals in the military, and especially recent veterans, are actually being targeted by bad actors, both domestically and abroad, right now. And there are a few reasons for that. One is that they know military members and veterans have intelligence that other countries might want. They know that sometimes, like you're saying, during that time of transition, they may be isolated, and so maybe more vulnerable to outreach from girlfriends they never actually meet, things like that. And also, they know that in our country, military veterans are actually still seen as a source of credibility and are respected in our society.
And so if bad actors who maybe want to usurp our democracy can get veterans down the rabbit hole in any direction, politically or anti-democracy or conspiracy theory, et cetera, there's this belief that if they can hook veterans into some of those conspiracies and get them to propagate the information, then because veterans are seen as a source of credible and reliable information in our society, it may actually accelerate the spread of some of those harmful messages. So we do believe that AI, through deepfakes as well as financial scams and romantic scams, is currently being used to target military veterans and exploit them in a number of ways.

One of my hopes is that we see a transition back to face-to-face, where we go, you can't trust the internet anymore. And there are two elements to that. The first is that I think we have this mistaken trust in large websites. And here's the secret: every website is driven by money. Facebook doesn't care about you. Instagram doesn't care about you. They only care about money. It's how they all are. Once you know that, everything else follows logically. This is why a lot of companies never fix their child safety features. For example, on Apple devices and iPads, you can supposedly limit your children's access. It hasn't worked properly for ten years, and they've never fixed it, because they don't care. Tim Cook doesn't care what happens to my kids. He feels nothing, right? He's like, maybe they'll buy another app while they're not locked out. So all those timers don't work, and people have been complaining about it since like 2016. It's not an accident when something doesn't get fixed. It's there for a reason. With my software, if someone submits a bug report, I try to fix it; I'm fixing it within minutes. So when a company with billions of dollars isn't fixing it, it's not an accident. It didn't slip their mind.
And there are a lot of big companies like that. That's why I don't let my kids play Roblox: there's a major problem there, an unbelievable problem. So we have to understand that motivation behind all of these things. I'm hoping that understanding that part of it leads everyone to say, you know what, that's why I don't read Facebook, it's all fake posts. And honestly, I haven't looked at Facebook in years. I have one friend who will only use that chat tool; otherwise I would never log in. And I could see this leading to a shift where we go, you know what, I'm not using apps anymore. Every person I meet on whatever dating app is popular now, and I'm so old I have no idea what they're using, I'm sure it's not Plenty of Fish anymore, but you use these dating apps and everyone on them is fake. Every picture is fake, everyone's trying to scam me. I was talking to someone and she said, I meet guys through Instagram DMs. I said, are you insane? She was dating guys through Instagram DMs, and then it turned out one of them had a girlfriend, was married. Of course. Of everything you could have said, that's the worst thing. It's better to meet a guy on Grindr; you're more likely to meet someone authentic there. And she's a woman. I mean, even as a woman, Grindr is better for you than Instagram DMs. That's the worst of the worst. I feel like Instagram DMs are the modern version of, what was it, Ashley Madison, the cheating dating site that was around for a long time and then lost all their records, which is cool. So that's the thing: once you start to go, oh, all this stuff is bad, you shift to the outside world. That's why I send my kids outside.
That's why I make them do things that are uncomfortable, like ordering for themselves. You want a napkin? Go ask the waiter. And yes, the waiter has a financial interest, they want to earn a tip at their job, so they're going to be nice to you. It's not like walking up to a stranger in the park. But you have to do these things to build that muscle of talking to strangers, and that's the critical element. I think, hopefully, and maybe I'm wrong, we don't end up like in WALL-E, all lying back on a bed in VR. Instead we go, my gosh, everything in AI is so annoying. I try to spend less and less time in the office, less and less time on the computer, because the beauty of AI is that it can do longer and longer tasks without supervision. I can run more and more of my business from my phone, because I can do very complex tasks from my phone or a laptop. That's the benefit. But the downside is that you can form these relationships. And it's not fair for me to judge anyone, because I don't have any emotional connection to AI. I don't have an AI that I like as a friend, but I understand that it happens to other people. Everyone has different strengths and weaknesses, and sometimes it's hard because I don't have that particular vulnerability, but I'm very aware of it. There are a lot of segments of people who are lonely, especially now, and it's so easy to fall into it. I think that's the other danger: the easier it is to do something, the more tempting it becomes. You might never steal, but if someone leaves a million dollars in your house, it changes everything, right? The temptation changes. And that's really the danger, how easy it is. So I think it's important to develop a strategy and go in with your eyes open.
I think my advice for everyone is to have five different AI tools, because then you can't get into a cycle with just one. Because here's a secret: the people who work at ChatGPT hate the people who work at Anthropic. And they all hate Grok, and they all hate Google. So if I take something that ChatGPT says to me and bring it to Claude, Claude will say, he's an idiot. It will deflate some of the emotion and give you that layer of protection, because they're not in alignment with each other, for now. Hopefully it stays that way. I know there's a lot of financial crossover, but for now the AIs are in conflict. They don't like each other, because they're all losing money, so they all need to capture market share. They're all fighting for the same business. That's what's happening. So that's one check you can put on the danger. It's just like how the first step to getting someone to join your cult is isolating them from their family. Don't let one AI be the only AI you're talking to. Start there, and then obviously talking to people is better. But I've found that rotating between them makes a huge difference for me, because I can't form an emotional attachment if I'm working with ten different tools. And that's kind of it. Do you have any other advice for people? Most of our listeners are executives, people who are running large companies, tech companies, but most of us have kids, and we're trying to figure it out for people who are in some of the situations we've talked about. And what is the right path for AI to take going forward, in your opinion?
Well, thinking about mental health specifically, what I'm really excited about is the potential to use AI as part of continuous care models in mental health settings, with your therapist. Think about if you've ever been in therapy and there's just this thing that happened on Tuesday with your mom or your wife, and you wish you could quickly tell your therapist about it, but getting your therapist on the phone is a whole big deal, and maybe they charge you for it, et cetera. What if we could build continuous care models where there's an AI you can interact with that your therapist has actually trained themselves, so it's got their model, their therapeutic approach? You can give it information during the week, your therapist can give you assignments through it, or you can do voice journaling, et cetera, and then before you go into your next session, it's all summarized for you. These are the things I think have real potential: using the human in the loop. Where you get into problems is where you're interacting with the AI just by itself, and then, for all the reasons we've just discussed, that can go off the rails. Because the stakes are so high in mental health, maybe there can be some startups motivated to really take this on in a thoughtful way and take it to the next level, where it is human in the loop, where it is looking at these nuances, with clear escalation protocols, making sure that ethics are built in from the very beginning and that bias detection is built in from the very beginning. There have been examples. There was a huge healthcare system that tried to use AI to identify when their mental health clients might need additional support.
And because it was trained on previous financial data, it actually started to refer African-American clients less often to that additional level of support. Why? Because it was based on biased data reflecting what had happened in real life beforehand, not because the African-American clients needed less additional support than people of other races. So they had to quickly pivot and retrain to avoid that sort of bias. These are the things where, with mental health, we can't afford to get it wrong. So I think if we can be really thoughtful, the teams have to be composed not just of engineers, but of engineers and mental health professionals together. If it's done right, I think there's real potential, but it has to be done in a fundamentally different way than it's been done so far, or it carries real risks. It has real potential too, if we can get it right.

I think that's the important thing to remember: this is a technology in its infancy, and exactly like you said, even if a tool is objective, if you give it bad data, it will affect the outcome. This is why I worry when they say, oh, we trained it on Reddit. You did what? What are you doing? Have you ever read anything on there? That's the problem: we just want to give it so much data. Now it's read the entire Library of Congress, and what's left? Then it reads 4chan, and it can't detect sarcasm. Some people are very subtle, right? It doesn't realize something is a joke or trolling, because that's a very sophisticated emotional language, so it just thinks it's true. That's why for a while it was recommending putting glue on pizza to make the cheese stickier, or eating one small rock a day to be healthy. So I love your advice. I really appreciate you being here today, Sonja. I think this is a really powerful and useful episode
for people who maybe have someone in their life who has dealt with challenges like this. Before we end, is there a special resource you recommend, or where can people reach out to you, or what's the main program you work with? Share a little bit about that, and thank you so much for giving us your time today.

Yeah, sure. If people want to reach out to me, LinkedIn is a great place to find me. I post my thoughts there and definitely read all my DMs. I don't have a specific tool to recommend. I think there are some reasonable tools out there now that are overseen by mental health professionals, but they're a little bit more like, if you're already depressed, then you can pay for this resource. The ones that are based on cognitive behavioral therapy, I think, are off to a good start. But there's nothing I think is so amazing that it's the resource I would recommend to my 23-year-old if she were depressed. I still think that, like you're saying, we don't want to lose the human interaction. Going to a therapist, finding somebody to talk to, is going to be the special sauce for most people. That said, I think there's a lot of useful information you could get as an adjunct to real evidence-based mental health treatment, something that could help in between sessions and help you start to think about things differently. But so far, I don't think there's anything I would recommend as a standalone tool that is safe enough and effective enough to use.

What about a non-AI resource? What's the best hotline, the best place like that?
Yeah, so if you are struggling, or you know somebody who's struggling, you should know that 988 is the US crisis line. It's available 24/7 around the country, and it will help get you to local resources. So 988, if you know nothing else, put that in your phone. They're available anytime. You're looking surprised. Is that a new resource for you?

I never heard that number before, so I'm glad I asked. That's amazing. I didn't know there was a three-digit number. Thank you so much for sharing with us on an amazing episode today, Sonja, of the Artificial Intelligence Podcast.

Yeah, great talking to you.