The ThinkND Podcast
The New AI, Part 7: Virtue in the Generative Revolution
In this episode of The New AI, John Behrens '83, Director of Technology and Digital Studies, introduces Graham Wolfe, Editor in Chief of The New AI Project's Explained series in a discussion that also features Professor Walter Scheirer and student expert Claire Hill. They dive into topics such as AI's impact on disinformation, the ethical dilemmas faced by developers, regulatory frameworks in different regions, and the role of AI in fostering human flourishing. Additionally, Walter shares insights from his books and emphasizes the importance of virtuous technology use. Tune in for a deep conversation on navigating the evolving digital landscape with AI.
Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career.
- Learn more about ThinkND and register for upcoming live events at think.nd.edu.
- Join our LinkedIn community for updates, episode clips, and more.
Hi everybody. I'm John Behrens, Director of Technology and Digital Studies, Professor of the Practice, and the faculty advisor for the New AI Project. Today I'm here to introduce Graham Wolfe, the Editor in Chief of the New AI Project's Explained series, as well as the student director of the New AI Project, which is focused on sharing the journey of how society and we ourselves live with AI. So I'm gonna hand it over to Graham. Take it away, Graham.
graham-wolfe_1_03-06-2025_130131: My name's Graham Wolfe. As Dr. Behrens introduced me, I am the student director of the New AI Project and Editor in Chief of Explained. We've done some great work as a team over the past two years, developing student experts in different domains of how AI is impacting our daily lives and changing the world. With us today, we have Professor Walter Scheirer, who is an expert on artificial intelligence, disinformation, and internet history. Last year he authored the book A History of Fake Things on the Internet, and more recently co-authored the book Virtue in Virtual Spaces. We're really excited to have a wide-reaching conversation about how AI is evolving disinformation and how we're ethically and morally dealing with the consequences. Walter, thanks so much for being here.
walter-scheirer_1_03-06-2025_130131: Thank you so much, Graham. It's my pleasure to be here.
graham-wolfe_1_03-06-2025_130131: Great to have you. Also with us today is Claire Hill, one of those student experts I mentioned earlier. She's done some great work on what we call Taming AI, which is a column that covers attempts to tame AI and reel it in, whether through regulation, ethical considerations, or governance. Claire, thanks so much for being here.
clare-hill--she-her-_1_03-06-2025_130131: Thanks, Graham.
graham-wolfe_1_03-06-2025_130131: All right, so the first thing we want to talk about, Walter: I want to start with the book A History of Fake Things on the Internet. We talk a lot about how disinformation is being augmented by this new variable of artificial intelligence and how it's adapting to this new digital medium. But to take a step back, you're a computer science guy. History of the internet doesn't really sound so much like computer science, but it's certainly a very interesting take. I'm curious, how did you come to write this book? And on top of that, for our readers who have perhaps not read the book, what are some of the main takeaways that we should know?
walter-scheirer_1_03-06-2025_130131: Yeah, it was a wild journey putting this book together. The book really starts with my work in computer science, in a particular area known as media forensics. For many years I've been developing algorithms that are able to detect if an image or video has been edited, or if it's synthetic. And of course there are all sorts of interesting applications for detectors of that nature. For many years this was an obscure research area. Some folks in the government, specifically law enforcement, were interested in these capabilities, but the broader public wasn't really aware of this work. It predates all of the hysteria around fake things on the internet. And so around 2015, 2016, right as a presidential election was gearing up, all of a sudden you heard this term "fake news" being used routinely, a huge outcry over all of this fake stuff that had allegedly just suddenly appeared on the internet. As somebody who had been looking into these matters for a long time, and had been on the internet since the early days, I was very skeptical that this was a new thing. In the back of my mind I was thinking, someone should really write a history of this, explain how we got to this moment. I got the opportunity to do that as we got into 2019, 2020, when the then Notre Dame Institute for Advanced Study approached me about writing a book on this type of stuff. I said, absolutely; in fact, I've been thinking about that for a number of years. I really wanted to take a deep dive into the origin of the creative technologies used to create the fake stuff, to understand where it first appeared and in what context. Were there certain communities that were interested in telling stories on the internet? Who were they? Would people be willing to talk to me? As somebody who's been on the internet for a long time, I had an inkling who to talk to first.
I know plenty of contacts in industry, plenty of contacts in various subcultures on the internet, some associated with fake stuff, particularly computer hackers. As it turns out, people were more than happy to talk to me about this. It was just as the pandemic was starting, so people were at home and had extra time. They wanted some kind of human connection. I was hearing war stories and developing a bigger picture about why this happens and why it's far more complicated than these superficial narratives seem to indicate, especially when it comes to political material.
graham-wolfe_1_03-06-2025_130131: Yeah, really interesting journey, like you said. Just to zoom in on disinformation and misinformation: we recognize there's a difference between the two. If you could summarize, on top of those base-level definitional things, what are some of those key takeaways that a reader might have?
walter-scheirer_1_03-06-2025_130131: Yeah, there's a lot of debate over what term to even use. There's a common term that's been around, like "fake news," but what is it? Is it disinformation, something crafted by some intelligence service for a specific political objective? Is it misinformation, a story that was misinterpreted and is now circulating, and allegedly people believe it? Or is it something completely different? A main takeaway of the book is that it's usually this latter category. Fiction serves a number of different roles in communication. In many cases it's not malicious, even if the content appears to be malicious. The book makes a case to rehabilitate parody and satire, common forms of humor often used in a political context, but these days dismissed as fake news, something that needs to be regulated. A lot of this reduces to partisan politics, which I think is the bigger story when it comes to all of this. What I found talking to folks, especially in the early days of the internet, is that they were building subcultures by creating myths, essentially. The book basically draws from anthropology, specifically Claude Lévi-Strauss, talking about mythical thinking versus rational thinking, and why you need both: why there is a human inclination to tell stories, why entire civilizations were built around the notion of myth. If you think about the Homeric myths in antiquity, that never really stops, even though the Enlightenment tries to squash it. People love telling stories in a fictional mode. People love exaggerations. That's done now using creative software. If you look at how human communication has proceeded, it's not surprising that we have all these interesting creative tools that allow us to project our imaginations to other people. But again, there's a lot of misunderstanding. It's easy to change the interpretation of this material to suit a particular political narrative, which is typically what we see.
Now, that's not to say there's nothing bad out there on the internet; certainly there is, and the book looks at that material as well. It makes the case that when governments are doing this, or if communities are doing this in a malicious manner, they're typically creating fake material that is obviously fake, so that the messages are clear to interpret. Historically, this is how propaganda has worked. Again, there's this myth of the perfect fake, that somehow history will be revised. But the most effective propaganda is something that is obviously fake, right? But the message is terrifying. Think about photographs from the Soviet Union where someone has been edited out of a photo. That person has been executed, right? The message is clear: they no longer exist in the physical world, and the government may do this to you too if you're not following orders. And again, it's a complicated history: how different governments have used the media, how individuals have used the media, how different communities have used the media. You have to unpack all of that to understand the nature of the fake stuff.
graham-wolfe_1_03-06-2025_130131: Yeah, really interesting. I think today we do wanna get into the regulatory side of things, because that's what a lot of our recent work has focused on. You made that interesting segmentation between the deliberate use of these creative tools to advance a political agenda and disinform, and that latter case, which is a lot less malicious and stems more from that creative impulse we have to tell stories about ourselves and to express ourselves creatively. Dr. Behrens, I do want to loop you into this conversation about what's real and fake in terms of what's created by AI. I've heard you use some really interesting talking points about that before. I'm curious: when we talk about AI hallucinations, a lot of people talk about that as this kind of doomsday scenario where we can't tell what's real or fake and AI is just spouting nonsense and making things up. I've heard you talk a little bit about the difference between using AI in a creative sense, in a way that's not linear or not factual, versus using AI in a way that's rooted in fact, expecting true-versus-false answers. What's the line between those two? Is there a gray area where maybe we shouldn't be calling this hallucination, and rather it's more of a creative tool?
john-behrens_1_03-06-2025_130131: Yeah, I really liked what Walter said, and I was lucky enough to hear Walter speak about his book a year and a half ago when it first came out. Great memory of that conversation. When I talk about hallucinations, one of the things that strikes me the most about how people engage with generative AI right now is that they often approach it as an information retrieval problem. They think it's like Google, or it's like a library search: I'm gonna ask a question, and I'm gonna get an answer. But you can use generative AI for all kinds of purposes. You can write a poem, you can do creative writing, you can create an image that you may consider art or not. And this aligns with Walter's analysis, which I love, about the different ways we use tools and for different purposes. One of the things I say when I talk to business groups is, you can think about a place like Notre Dame. In the art department, we're trying to teach students to hallucinate; there is no right answer. In the engineering department, there are hallucinations, because things can be right or wrong, and you don't want bridges falling down because I had a very creative idea about how to build a bridge. Because generative AI can do so many different kinds of things, I think it really feeds into this model that Walter has about the continuum of how we use language. So Walter, I'm wondering how that connects to all this deep experience you've had over these last years.
walter-scheirer_1_03-06-2025_130131: Coming back to the original question about a regulatory framework, I believe that can only be developed in very narrow cases. If you think about the general use of these technologies, it really is speech, right? And in the United States, we have very strong guarantees around free speech. So if you're gonna start to move towards the control of speech, that would be a serious drift towards tyranny, which the Constitution doesn't support. That said, there is one particular area which I find encouraging in terms of regulation, and that is emerging laws at the state level around deepfake pornography. This is obviously fake material. It's increasingly targeting young women in middle schools and high schools, and it serves no purpose. It's hard to make a case that it would be bad to control this particular form of speech: it sends no important messages, it's clearly being used to bully people, and in some cases it's outright illegal if it's depicting minors. And so there's been a strong movement in various states, including here in Indiana, to completely get this off the internet, which I think is an excellent move. But beyond some of these narrow cases, I'm struggling to see why this would be a good idea. Again, you have partisan politics coming into play. Someone wants to silence the political opposition, and so they're proposing regulation as a mode to do this. That's clearly a bad move in a democracy. I just can't see that happening in America in any sane way.
graham-wolfe_1_03-06-2025_130131: Yeah, really interesting. And like you say, taking a look at the narrow cases, I understand how it can be very promising, and then abstracting it up to the level of politics and governance as a whole is where it starts to break down and get constitutionally difficult. Claire, I think this is an excellent opportunity for you to talk a little bit about the different regulatory frameworks and aims that we've seen emerge throughout the world since this generative AI revolution began about two years ago. You just published an amazing piece this past week about the regulatory philosophies of the US, the EU, and China. I encourage our listeners to go take a look at that after this podcast. But I'll turn it over to you. If you could touch a little bit on what you've seen emerging at that higher level that Walter was talking about, and then we can parse through what's useful about that and maybe what's a little bit more questionable. I'll hand it over to you.
clare-hill--she-her-_1_03-06-2025_130131: Thank you. Like you said, Graham, I've noticed that there are three emerging approaches to AI regulation, exemplified by the US, the EU, and China. The US under the Biden administration started to implement regulations on AI. Biden had an executive order that,
walter-scheirer_1_03-06-2025_130131: Yes.
clare-hill--she-her-_1_03-06-2025_130131: guided the federal workforce on safe AI usage. We've seen a big shift since President Trump was inaugurated, and those regulations are pretty much gone. He's very much in favor of pushing for innovation as opposed to regulation, and that's seen as a competition, especially with China. And that puts him at an opposite view from the EU. The EU has the EU AI Act, and that separates technologies into different risk levels and regulates them accordingly. There are unacceptable levels of risk for certain types of AI under this act, such as social scoring systems, which would be unacceptable under the act. But then there's a high-risk category, and that's where most of the regulations come in. Regulations on those sorts of technologies are requirements to have a risk management system, conduct data governance, keep a lot of documentation around AI use, stuff like that. The third approach is China's, and it's been shown that state-backed investment groups have helped these startups get their legs. Commentators have been calling this the new Cold War, and that's a framework a lot of people are using to look at this sort of race to AI innovation, whereas the EU's regulation is seen as a cut to that innovation in the European space. Something interesting is distinguishing between different types of regulations, and I would love to hear Walter's thoughts. I appreciated you talking about the line between parody and satire and misinformation, and the importance of free speech. You also mentioned working on algorithms to detect synthetic materials, so I'm wondering if you see a place for algorithms such as those in a transparency type of regulation.
walter-scheirer_1_03-06-2025_130131: This is a great question. The algorithms work, but they have high error rates. They're not quite there yet; they're still in the research space. I'd be wary of rolling those out. Where I think those algorithms are interesting, though, is when they're combined with other evidence to fact-check. One thing I've been doing recently is working with journalists to debunk different things that appear on the internet. Some of these things are pretty obvious, but still, having a body of evidence helps a lot. So if the algorithm is saying this is a synthetic image, and you're looking at it and saying there are all these strange artifacts, like this can't possibly be a real photo because of that, and you're looking at the provenance and saying this was posted by an anonymous account on social media, you're putting all these factors together: this is not trustworthy information. It's probably not even newsworthy, yet here we are discussing it. I think that's helpful. But again, that's not forced through regulation. That's just reporting, right? That's just journalism. Could they be used in the future? Maybe, maybe not. It seems to me that within the federal government in the United States, those most interested in such capabilities tend to live in the intelligence community and the Department of Defense, just for basic intelligence collection purposes: is there something trending on the internet? Where did it come from? What is the composition of the media object? Those are questions those communities have been asking for a long time, and I don't think that's gonna go away. I'd be wary of deploying it at large scale. It would be easy to complain about the political opposition and get their stuff pulled because you said it was fake, but it could just be an obviously fake meme, like political satire. It'd be very bad to pull that from the internet.
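[Editor's sketch] Walter's "body of evidence" approach, where no single signal is decisive but a detector score, visible artifacts, and provenance checks are weighed together, can be illustrated with a small sketch. Everything here (names, fields, thresholds) is a hypothetical illustration, not a real forensics API:

```python
# A minimal sketch of combining weak fact-checking signals into one coarse
# assessment. All thresholds and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Evidence:
    detector_score: float   # hypothetical model output: 0.0 = looks real, 1.0 = looks synthetic
    visual_artifacts: bool  # did a human reviewer spot strange artifacts?
    anonymous_source: bool  # posted by an anonymous, unverifiable account?

def assess(evidence: Evidence) -> str:
    """Combine weak signals into a coarse trust label."""
    points = 0
    if evidence.detector_score > 0.8:  # threshold chosen arbitrarily for illustration
        points += 1
    if evidence.visual_artifacts:
        points += 1
    if evidence.anonymous_source:
        points += 1
    # No single factor condemns the item; agreement across factors does.
    if points >= 2:
        return "not trustworthy"
    if points == 1:
        return "needs review"
    return "no red flags"

print(assess(Evidence(detector_score=0.95, visual_artifacts=True, anonymous_source=True)))
```

The point of the structure is exactly what Walter describes: a noisy detector alone is not enough, but corroboration across independent signals supports a judgment, and that judgment lives in journalism rather than regulation.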
graham-wolfe_1_03-06-2025_130131: Yeah, thank you guys for that. That's a very interesting conversation, an emerging conversation, I think. Did any of that surprise you, Walter, or stand out to you, of what Claire was talking about in terms of these emerging philosophies around governance? Hands-on, hands-off: as hands-on as China funding startups, as hands-off as the US, or as hands-on in the opposite direction with regulation in the EU. What stands out to you about that? And perhaps, what strikes you as unproductive among those three different philosophies?
walter-scheirer_1_03-06-2025_130131: I would say the Chinese system is quite heavy-handed. The government can intervene anytime it wants, and it often does. You think about companies like ByteDance: they can't do what you can do here in the United States in terms of product development, the way you market the material, and especially the data crossing the platforms. In China, there is heavy control of speech, something we don't see here in the United States. I think that's a huge contrast. Of course, the Chinese have been very active in building up a domestic tech industry, and they've done a phenomenal job. I feel like in America a lot of commentators underrate how good China is at artificial intelligence. You often hear that the United States is some number of years ahead of where the Chinese are. That's not true at all. I would say they're even, in terms of the software development. Where they are behind, of course, is in the production of chips. They can't produce advanced GPUs the way the United States and its partners can; they just don't have the foundries, the manufacturing capabilities. That's the only area they're lagging in. But it's not hard for them to get chips from the West, even though there are supposed to be sanctions, and the export controls are not that effective. Compared to the EU: yeah, I think the EU is holding itself back with too much regulation, and that's always been their thing, so it's not surprising. That's not a new issue for them. Is it helping them? I don't think so. Especially since a lot of these regulations around AI are very speculative. A lot of what the Biden administration was thinking about was some kind of intelligence explosion, which I don't think is plausible. In fact, I would characterize that as a fake thing on the internet.
A lot of PR operations, specifically at big tech companies, like to promote that idea, because it gives them a lot of leeway to create regulations where it seems like we're doing something, but they're very favorable to the companies crafting that legislation. In some cases, though, it can go off the rails. In fact, that's why a number of VCs broke with the Biden administration, most prominently Marc Andreessen, feeling that the Biden administration was starting to actually believe it, that the singularity was near and we'd have to claw back these technologies like we did back in the Cold War, thinking about nuclear technologies, cryptography, things that were very much classified technologies. And that is just completely unreasonable. As a technologist, I know where the state of the art is, and it's not anywhere close to a superintelligence. It's a popular narrative, and it works for PR purposes really well. I think the public just is not aware of that. Maybe I'm saying the quiet part out loud, but I don't care.
graham-wolfe_1_03-06-2025_130131: That's definitely not a macro narrative; it's not a popular thing, and that's evident. It's never really come to the forefront for us, specifically when we're writing about taming. It really is all about the substance of reeling in the capabilities of AI, driven by this background hum of fear about where the state of the art is, as you said. A very interesting take that not many folks are exposed to. Perhaps we can pivot a little and talk more about the developers and the companies behind these new models that have changed our lives in so many ways.
walter-scheirer_1_03-06-2025_130131: Cool.
graham-wolfe_1_03-06-2025_130131: For a long time now they've been dealing with these emerging regulatory considerations, but that's also trickled down into lots of conversations about morals and ethics that they ought to be considering as well. What do you see as the most immediate ethical dilemmas cropping up for teams of developers and programmers right now? And what's the framework that you might develop and recommend to them in terms of navigating ethical and moral dilemmas?
walter-scheirer_1_03-06-2025_130131: Yeah, this is a great question, one that comes up a lot, but I think the answers have been unsatisfying, and it's certainly one that I've been thinking about quite a bit, trying to come up with better answers to what is happening out there in the technology world. I think the biggest dilemmas really map back to the internet and how the internet is a communications technology. It's connecting the whole globe; it's bringing a lot of voices into one space, and that has not always worked out. I would say AI folds into this too, because it's a feature of the internet, if you think about it. It doesn't really exist in the physical world. It's trained on the internet; it's doing things on the internet; but it doesn't really have much reach outside of the internet. When we think about all these technologies: what are they doing to human behavior? What are they doing to us as people living in a community? I'd say that the biggest ethical concern is really alienation. The more we use these technologies, the more time we spend in virtual spaces, the less time we're spending in authentic encounters. And I think that's extremely damaging. Again, there's a lot of controversy over smartphones, especially the use of smartphones by children and teenagers. What is that doing to society? I know you hear talk about mental health, but I think this extends far beyond that. There have been some fundamental shifts in human behavior as the internet became so pervasive, as we carry it around with us everywhere with our phones. This is a big issue. And I think, again, the programmers wanna keep us there, right? It'd be great to have this virtualized world that's much safer, simpler to navigate in some sense. But it's tearing you away from your neighbors; it's tearing you away from your family in some cases. You're making friends on the internet. That can be good, right?
But those virtual friends shouldn't replace your in-person friends. I think the internet's great when you can't talk to people otherwise. Forging a relationship with somebody living in Asia is great if you're here in South Bend, 'cause you can't talk to them in person. But that shouldn't displace the friendships you have right here on campus. That's a big thing a lot of folks are missing. This is what the Vatican has been criticizing as the technocratic paradigm, the idea that we can use science and technology to solve all problems. The virtualization of the world is part of that project: let's create a vastly simpler space for us to exist in. Think about the ethical implications of that. That's alarming when we think about what is happening in terms of communities and our interactions. Again, why are people so lonely? Why are people suffering from all these different mental conditions? I believe it's because there's been this huge shift in humanity, and it's global.
graham-wolfe_1_03-06-2025_130131: Thank you for that. I think part of what we were talking about earlier, this background hum of fear that makes regulation so important and such a potent topic at all times, is not necessarily just that fear of the singularity or fear of the robot overlord, but also of that interpersonal perpetuation of the existing shift toward despair. So that's really interesting. I think this is also a great opportunity to touch on your work, Claire, on the Pope's most recent take on AI. You had a great article that came out a few weeks ago about that. If you could talk us through, at a high level, the direction that the Vatican is trying to take this conversation, and what kind of leadership they're trying to establish regarding ethical and proper implementation of AI in our lives.
clare-hill--she-her-_1_03-06-2025_130131: Absolutely. So a lot of that article I spent thinking about the Vatican's most recently published document, Antiqua et Nova, which I'm probably mispronouncing. This was a document published by the Vatican that goes through what artificial intelligence is and how we should think about it. It's a really interesting read, but some high-level takeaways from it are that we should not be replacing human intelligence with artificial intelligence. That was a theme throughout the document, and it makes a really interesting distinction. A lot of the distinction comes from the fact that while artificial intelligence systems will have very great capabilities for what you're asking them to do, they don't have the embodied experience of a human who has lived through their experiences. And there's something almost intangible, but very meaningful, about what those experiences can teach us. The document isn't saying don't use artificial intelligence. It's not saying technology is bad. It's saying we should use this technology in ways that are productive, while making sure that we don't mistake it for being human.
graham-wolfe_1_03-06-2025_130131: Really interesting. I think a lot of people, particularly in terms of creating that visibility into the moral dimension of our AI use, are very motivated by what the Vatican has to say, and so it's great to get the word out there about the Pope's take on everything. Starting with the technocratic paradigm, this certainly has been a big agenda for the past couple of years, starting back in 2015 with the big encyclical, all the way up until now. That's been a really major through line in the Vatican's take. Thank you for that. I also do wanna pivot a little bit into some more interpersonal, human-level expression of these concerns. Walter, in your more recent book, Virtue in Virtual Spaces, you make the distinction between a technology that's designed to extract our attention, perhaps something we would call an unproductive use, versus technology that's aimed at human flourishing, in a virtue ethics sense of the word flourishing. Amid this current generative revolution where new models and new products are cropping up everywhere, we're being encouraged to integrate AI into every facet of our lives. How can regular users tell the difference between those energy-gaining and energy-extracting uses of AI? What should we look for to decide which ones to use and which ones to avoid?
walter-scheirer_1_03-06-2025_130131: Yeah, this is a tricky dilemma. In the book, we make the case that one should turn to Catholic social teaching, which is an existing framework that responds directly to technology and its dilemmas while supporting human flourishing. What we found is that there were really good mappings between various principles, like solidarity and subsidiarity, and software. In some cases, you can find apps and platforms that inherently embody some of these principles; in other cases, you probably need to start from the ground up and walk away from a lot of existing software, which could be a huge burden for users, but maybe not terrible, if you think about how much time you're spending on social media, or how much time you're spending using a generative model instead of talking to somebody in real life. I think, again, if the software serves a specific purpose, if you treat it as a tool that is allowing you to do your work in a more effective way, a more efficient way, that isn't leading to increased levels of alienation, that isn't being used for partisan political purposes or things that would basically demonstrate poor character, then I think it's good. If it's not, you probably have to start from scratch or walk away. I don't think that's a terribly hard case to make in 2025. I teach a very large required course in CSE on technology ethics, and students are really complaining about their technology use. It doesn't seem like they're happy with these things in the present moment. And it's like, why are you still using them? Or why aren't you building alternatives? You're programmers; you could do that, right? And it's, wow, maybe we could. There is that possibility. Yeah, it takes time, but maybe you could repurpose some of the time you're wasting scrolling Instagram to build something that definitely contributes to human flourishing. I think there's certainly a path forward there.
graham-wolfe_1_03-06-2025_130131A big discussion that I think is starting among members of my generation, Gen Z in general, is the allocation of your time, right? You make time for the things you care about. If we're all collectively making time to scroll on Instagram for three or four hours a day, that adds up to a completely different lifestyle. It adds up to a completely different, how you spend your days is how you spend your years kind of thing, right? But on top of that, there's a moral dimension to how you spend your time. And like you said, if we're gonna adequately apply the rightness and wrongness of our moral frameworks to our time use, then you gotta walk away from some software, right? That's a really interesting way to put it, and it speaks very closely to the things I'm dealing with as a member of that generation. I'd like to talk a little more in detail about some of the products people are using day to day, and ground conversations about morals in some concrete products people are familiar with. Could you maybe talk us through the popular use cases of ChatGPT and other text-generative models, and how that virtuous or flourishing use of AI maps onto our day-to-day use of, for example, ChatGPT?
walter-scheirer_1_03-06-2025_130131Yeah, so this is one I've been thinking about a lot too. The thing about ChatGPT is, yes, it's a new AI system, relatively speaking, but if you look at the applications, how folks are using it, in many cases it's really just another way to do something they could already do on the internet. For instance, a lot of folks use it for search. I could do a normal Google search, but I could also go to ChatGPT, and the results, at least in my experience, are more or less the same. In some cases ChatGPT is giving me the runaround to get the information, which isn't great. We already touched on the problem of hallucinations; in many cases it returns fake stuff. But Google searches have historically done that too, so it's not terribly different. When I use it for search, it's in combination with other things, just to see if it surfaces something a different search engine wouldn't. But that doesn't seem to be a killer app. A lot of folks are using it to generate text, especially boilerplate text, like, I need to write some kind of form letter. It's useful for that, and I think a lot of people use it for that kind of thing, for these low-level clerical tasks. The concern there, ethically, would be that it's displacing workers, though I haven't seen any evidence of that yet. Maybe in the future there will be some kind of mass layoff campaign. I think the best use case for technologists is code generation. It works really well. There's now mass hysteria: is this the end of the programmer? I don't think that's the case, because it's just another abstraction layer. We've always had software libraries, right, that made programming more efficient. There was hysteria over that decades ago. There's more than one way to program, and I don't think you're ever gonna wanna lose the expertise of programmers who know this stuff deeply. We still have assembly language programmers in computer science.
That's not done routinely, but people have to understand the machine-level instructions for various debugging and optimization purposes; that never really went away. So I don't think we're losing programmers either. And again, I think that's probably the best application. But that doesn't seem like a killer app either, because it's just one tool in the toolkit of programmers. What is the disruptive capability? I'm not seeing it. Even outside of the LLM world, there are image and video generators, and yes, those might replace some low-level work in graphic design. But Adobe tools have long been stocked with stock images and videos you can modify, so I'm not sure that's a good argument either. They're interesting tools, but I don't think it's as disruptive as the companies are portraying. I just haven't seen anything where it was like, wow, this is gonna be huge. These are good incremental advances, but I'm not losing my mind over this. I'm not rushing home to use ChatGPT. I'm not excited. It's just yet another business productivity tool.
graham-wolfe_1_03-06-2025_130131Yeah, really interesting. Walter, I think you might've bumped your microphone; it got a little warbly there until the end. We can still hear you. I just wanna make sure we get the audio.
walter-scheirer_1_03-06-2025_130131Is that still working?
graham-wolfe_1_03-06-2025_130131that's better.
walter-scheirer_1_03-06-2025_130131Okay.
graham-wolfe_1_03-06-2025_130131I do have one more question, and then I'm gonna pass it back to Claire for you to ask Walter a question, and then we'll have our wrap-up question. So it'll be me, then Claire, then the wrap-up, if that sounds good.
walter-scheirer_1_03-06-2025_130131Okay. Sounds good.
graham-wolfe_1_03-06-2025_130131Okay, I'll just pick up where you left off. It's very interesting that you say it's less disruptive in a lot of ways than people are characterizing it to be. What I'm curious about is, do you see it as an opportunity at all? Not just for the many ways it's changing our workforce and our day-to-day lives as a productivity tool, but as a means for making us all more virtuous or ethical in how we use the internet. Is there an opportunity for it to make our engagements on the internet more empathetic, as you talk about in Virtue in Virtual Spaces, a chance to make us engage more with the moral consequences of our internet behavior? Or do you see it more as something that would undermine that?
walter-scheirer_1_03-06-2025_130131I think there's a huge opportunity here, especially since I would characterize this technology as fairly rudimentary. A number of computer scientists would agree with me, though maybe some of them only privately. The book makes the case that we probably do need to re-engineer large pieces of this infrastructure, again, thinking about virtue in the design process. I feel like there's a clear path for doing this. In fact, in my laboratory we're working on this. A co-author of that book, Louisa Conwell, is a graduate student in my lab who has been thinking about this and actively working on new paradigms in software engineering to incorporate virtue, and actually doing the implementation. This is the final phase of her PhD, so it's certainly feasible in a technical sense. Can we build a community that wants to use these things? I think that's the big question, but it's something I'm still upbeat about. The response in the Catholic world has been very positive, and even going beyond that, talking to different religious communities, especially folks who are interested in technology, this message really resonates. Especially if you're thinking about virtue ethics broadly defined, you're encompassing a number of different traditions, right? You're thinking about Buddhism, Confucianism, even Aristotelian virtues. You could be a secular philosopher and this could still resonate.
graham-wolfe_1_03-06-2025_130131really interesting stuff. as we wrap up I'm gonna pass it back to Claire. anything on your mind, feel free to raise now and ask Walter.
clare-hill--she-her-_1_03-06-2025_130131Thanks, Graham. Yeah, I would like to go back to when you suggested using Catholic social teaching as a framework for understanding AI ethics. I think that's a really interesting suggestion. I know that CST is a lot about solidarity, and I'm wondering if you see specific ways that we can use AI proactively to practice that sort of solidarity.
walter-scheirer_1_03-06-2025_130131Yeah. So I think developing AI tools that help build healthy communities is key, especially if you're thinking about solidarity. Are there technologies that could help us address poverty? That's an interesting question. Are there AI technologies that can just help communities organize? Maybe I just wanna have an authentic connection, even though it may be virtual because I'm physically separated; what can AI do to facilitate those conversations? In many cases we forget that AI isn't just LLMs. There are computer vision tools that could be used to create more naturalistic virtual environments. That seems interesting; I would wanna see that kind of technology. But those are hard problems, and a lot of money and time is being invested in that sort of thing. If we could build a little bit of virtue into that process, I think we could get even better technologies. Zoom is an unsatisfying experience. Could it be better? That's an interesting question. The strength of the internet is its ability to connect people, but we have not settled on a system to make that hospitable to many different communities. Subsidiarity is another big piece of all this. Communities don't have the control they might want in terms of managing their virtual space. We're using these globalized social media platforms that are, in some cases, very exposed to the public. That's not always a good thing. Does every single conversation have to be broadcast to the world? I don't think so. That seems to drive a lot of the problems out there on the internet. I think AI may be a feature of the internet that can help us manage some of these things, or it may not even be part of the solution. I think we need a broad re-imagining of the internet, including AI, to address a lot of these social concerns that are cropping up.
clare-hill--she-her-_1_03-06-2025_130131Thank you.
graham-wolfe_1_03-06-2025_130131Yeah. As we wrap up here, I wanna pull from these last two answers and talk about your vision for a reimagined internet, an ideal future. If you could map out two or three decades down the line, where generative AI is well integrated, evolved, and ethically managed, what would that look like for you? Whether it's starting from scratch or tearing things down, and I'm very curious about Louisa's frameworks as well, maybe implementing and augmenting, as you say she's looking to do. Yeah, talk to us about that ideal future.
walter-scheirer_1_03-06-2025_130131I think the ideal future is pulling back from having everybody shouting at each other on these centralized platforms. I think the internet of the future is gonna be far more distributed, which is a good thing. I think you'll see special-purpose AIs being used for a lot of different applications, and ideally, if they could be designed with Louisa's ideas about software engineering in mind, that would be a big plus. I suspect different communities will have different norms and will be engineering these things in different ways, and that may be okay, as long as they don't come into serious conflict with one another; the misunderstandings on the internet have been driving a lot of the problems. I would love to see AI embraced more in a cultural sense. This is something I'm quite upbeat about, especially in the world of image and video synthesis. I know these tools have been trashed, especially by the media, but there's a growing community of artists looking at the creative potential of this. I had the pleasure last year of organizing a very large AI art gallery at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, which is one of the biggest computer science conferences in the world, one of the biggest AI meetings, and one of the most important by impact. We got tremendous input not just from people within the community dabbling in AI art, but from professional artists, amateur artists, just random folks on the internet submitting pieces. A lot of them were really smart. It wasn't that AI was displacing the artists; the artists were using it as a new medium. I think there's tremendous potential for this technology. In many cases the artists were making social commentary on AI using AI, which I thought was really smart. So again, I think there's a lot of hope there.
I think you have to get past a lot of these mainstream narratives that are in the press, that are in the more mainstream parts of technology ethics, and really look at what people are doing, how they're using the technology in a virtuous way. That sort of restores my faith in the technology world. So again, paying more attention to what communities of amateurs are doing is really important. We experts are often wrong about this stuff. Just look at what people are doing on the internet that isn't malicious, that isn't causing trouble, and you'll find a lot of good there.
graham-wolfe_1_03-06-2025_130131Thank you so much for that. You talk about the creative expression, the potential that a lot of these models have in so many ways, and that's what we're writing about weekly, the different potential these models have. What we look forward to as a growing team of experts is seeing that potential come to fruition through these creative mediums like you talk about, but also taking part in it ourselves. We're on the optimistic side of that too, so thank you for that outlook. Walter, thanks so much for joining us; this has been an amazing conversation. Lastly, I'll plug the work that Claire has done. Claire has recently written some great pieces for Taming AI, our column on governance, ethics, and regulation. Check that out, as well as our other four columns, at The New AI Project on LinkedIn. Dr. Behrens, anything to add to put a bow on this?
john-behrens_1_03-06-2025_130131Yeah. Thanks so much, Walter. It was delightful commentary and, as always, super prescient, and we'll watch all your ideas unfold over the next decades.
walter-scheirer_1_03-06-2025_130131Yeah, see you soon.
graham-wolfe_1_03-06-2025_130131Thank you so much, Walter.
clare-hill--she-her-_1_03-06-2025_130131Thanks.