The Enterprise Alchemists

Transforming Academia and Industry: Insights into Generative AI with Greg Benson

Dominic Wellington & Guy Murphy — SnapLogic Season 1 Episode 2

Unlock the secrets of Generative AI’s (GenAI) latest trends and its transformative impacts on academia and industry. We sit down with Greg Benson, Chief Scientist at SnapLogic and Professor of Computer Science at the University of San Francisco, to gain invaluable insights into how large language models are reshaping the way we learn and conduct business. From enhancing educational delivery to boosting business efficiencies with innovative AI-driven solutions, Greg's unique perspective provides an enlightening look at the future of GenAI.

Discover the intricate processes behind training and aligning large language models. Greg emphasizes the critical role of human guidance to ensure these models avoid controversial outputs, offering a balanced view on the potential and limitations of AI technology. Dive deep into the concept of Retrieval-Augmented Generation (RAG) and how it revolutionizes business operations by leveraging internal company data to refine task performance. We also navigate the experimental terrain of prompt engineering and discuss the vital evaluation frameworks needed to minimize errors and optimize AI effectiveness.

Explore the profound impact of GenAI on higher education and various industries. Greg unfolds how educators are harnessing AI to create customized learning experiences and research opportunities, while also confronting challenges like over-reliance on AI-generated answers. The conversation broadens to reflect on the evolution of AI technology, recognizing its potential to augment knowledge work across sectors. Learn about the practical applications of AI and how tools like SnapLogic are democratizing access to advanced technology, paving the way for increased productivity and efficiency.

Find more details and a full transcript on SnapLogic's Integration Nation community site.

Dominic Wellington:

So welcome back to Enterprise Alchemists, a podcast by SnapLogic for Enterprise Architects. My name is Dominic Wellington and I'm here with my colleague, Guy Murphy. Greetings to Episode 2. Today we are also joined by Greg Benson. Greg is our Chief Scientist here at SnapLogic, but he's also a professor at USF, and so he seemed like the perfect person to go into a little more detail about AI: where the market is, where it's going, what the interesting trends are that are already building, and what he sees coming from his particular vantage point. So thanks for joining us, Greg.

Greg Benson:

Thanks for having me. I'm excited to have the conversation today.

Dominic Wellington:

So first of all, how did you get here? Could you give us a little canned bio and background?

Greg Benson:

Sure. So, like you mentioned, my role here at SnapLogic is Chief Scientist, and I'm also Professor of Computer Science at the University of San Francisco. I joined SnapLogic in 2010, so it's been a little bit of time, and I've been involved in [inaudible].

Guy Murphy:

It's 2024, we're in June. From the public's point of view, this probably erupted onto the marketplace 18 months to two years ago. What's your take on where we're at on the journey with the emergence of AI into the business context, rather than, obviously, your very deep academic view of it?

Greg Benson:

Yeah, I'll tell you, all of us have been able to experience some pretty significant technology innovations in our lifetimes, and I tell my students that what we thought were some of the big ones, like the internet and the iPhone and things like Java, are nothing compared to what we're experiencing now. There have been some pretty impressive software technologies, but this one, I think, is going to overshadow the others. And it's just an exciting time to be experiencing this in two very distinct industries, if you will: academia and higher education, where generative AI is already transforming how we do our jobs. I've never experienced something that came at us, and when I say us, I mean educators, that requires an immediate revisiting of how we deliver education: what are the possibilities, both the positive aspects and the negative ones, the fact that you can get answers to lots of things. And then in industry, it's also an exciting time. I tell everybody I know: in the next five years, everything changes. Companies get more efficient, they can produce better value. All of our products and services, well, maybe not all of them, but I think many of them, will experience a huge leap in delivery to customers across sectors. So it's an incredibly exciting time to be involved.

Greg Benson:

I guess, like I said, I have a foot in these two types of industries. But I want to go deeper into maybe what you were getting at. Look, first of all, let me say I'm a Gen AI optimist. I see a huge amount of potential, and I can speak from my experience. Like everybody else, once the UIs came out for the models...

Guy Murphy:

So, Greg, just pause, because the podcast is hopefully going to be for a very wide audience.

Dominic Wellington:

Yeah, go ahead.

Guy Murphy:

So could you define what you mean by Gen AI? Yeah, let's talk about that.

Greg Benson:

So generative AI, or GenAI for short, encompasses any type of activity where we're using one of these so-called large language models to perform some sort of task. That could be to generate something, and that's very wide open: it could be summarizing text, it could be creating some text based on some prompting, it could be "hey, let me give you some examples, and then can you solve a new problem that uses these examples as the basis for how you should approach it." And generative also doesn't refer to just text, right? It can refer to other forms of media: audio, video, movies, any sort of rich media.

Greg Benson:

So the generative part is the fact that we're combining human intent, in natural language, with some task that you want to perform. And ultimately, what these models can do is respond in kind, follow your directions, and base their output on all of this training data that they have been exposed to. And that's a whole other topic; we can go a little bit deeper on what all that entails. But at the end of the day, these models don't just get the information; they're trained in such a way that they will hopefully generate useful, correct, interesting, creative answers. Part of that is guided by the model training process, and there's been lots of human feedback to help guide how those models are developed.

Guy Murphy:

Thank you. That's probably a really good segue to maybe transition into the work you're doing with SnapLogic. So obviously we've been talking about Gen AI specifically over the last six months, very much focused around your team here and what you've been developing. From my point of view, I see two different aspects that are fascinating and fabulous. One of them is what you said: SnapLogic is obviously historically an integration technology platform, but we're seeing something like a third class of integration supporting these platforms.

Guy Murphy:

There's almost a fallout, an unexpected consequence, which is actually seeing a modern but traditional application and data integration product moving into multimedia data aggregation and processing.

Guy Murphy:

And I know that we've been seeing a lot of business use cases that started with document processing. I was actually talking to a very large customer where, after we went through a workshop, they came back and said: are you competing against very specialist document processing platforms that have been around for decades and that, in many industries (manufacturing, oil and gas, telco, banking), are very, very niche? And this chief architect said: this feels like you're actually unlocking what was a very niche concept, because of these new, much more flexible platforms, into becoming a generic capability that can be applied across any business line and any business process. It'd be great to hear your reflection on that: something that came from a very academic view is now showing a rapid trickle-down effect, unlocking proprietary, high-value processes to become cross-cutting concepts that transcend traditional integration and traditional document processing and, as you say, are now moving into media and music bots. How are you seeing that type of work today, in reality, when you're working with clients?

Greg Benson:

Yeah. So I think the heart of your question is: are generative AI and large language models opening up capabilities that previously might have required specialized domain knowledge and specialized, dedicated teams or companies to provide? You mentioned different types of document processing, and I would say, yeah, absolutely. Generative AI is opening up such a large number of capabilities, and I would argue we've only really scratched the surface.

Greg Benson:

Maybe another way to get at it, like the document processing, is that the models, like OpenAI's ChatGPT and Anthropic's Claude, now have multi-modal support, with which you don't just provide text in prompts; you can actually provide images and audio for them to process, along with your prompt and your intent to extract information.

Greg Benson:

One way I'd like to characterize what you can do with these models is that they really open up machine learning-like capabilities, but without having to go through the really arduous process of training conventional machine learning models, because you can do, in some cases, so-called zero-shot (that is, no examples) or few-shot (a few examples) prompting for a particular activity or task that you want to perform. For example, you can tag certain images in certain ways in your prompt and get it to understand what you're looking for. And these models have this incredible ability to generalize. Again, we could have done these things before, like these types of document processing and extraction of information out of documents of different forms, but now, seemingly with a well-crafted prompt and a few examples, you've cut out that whole traditional data science and machine learning methodology and process, which is time consuming and requires quite a bit of expertise, and you can do it in these prompts.
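To make the zero-shot versus few-shot idea concrete, here is a minimal Python sketch of few-shot prompt construction. The classification task, labels, and example tickets are invented purely for illustration, and the actual model call is left out: the point is just how examples get packed into the prompt.

```python
def build_few_shot_prompt(task, examples, new_input):
    """Assemble a prompt from a task description, a few labeled
    examples, and the new item the model should handle."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is expected to continue after the final "Label:".
    lines.append(f"Input: {new_input}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify each support ticket as 'billing' or 'technical'.",
    [("I was charged twice this month", "billing"),
     ("The app crashes when I upload a file", "technical")],
    "My invoice shows the wrong amount",
)
```

With zero examples in the list, the same function degenerates to a zero-shot prompt: just the task description and the new input.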

Guy Murphy:

Now, obviously, with what's been going on, there have been a couple of areas of question. One is obviously the tendency for some of the models to hallucinate, which has been a hot topic in the industry. Especially in this overlap between the AI teams and the more traditional people, RAG seems to have become a very hot topic, because it seems to offer an ability to cross-link between the two extremes: partly being able to filter out some of the hallucinations, and also, from a business point of view, being able to actually load in the business context and categorization of the business environment, customers, definitions of product. What are you really seeing there, how are you seeing it working, and what are your views on it? When does it work, and when doesn't it?

Greg Benson:

So there's a lot of things going on there. I'll start by clarifying that these large language models are trained on lots of information from the internet and other information that's available in electronic or digital form in some way. Most of the models, as part of the training, go through an alignment phase where humans guide the tone, how it should respond and how it answers questions, and that's done through several interactions in which humans, in some cases human experts, clarify how a response should be given. But alignment also refers to making sure that it doesn't respond with controversial opinions or offensive material, or even potentially dangerous guidance like making a bomb. So there's that activity that has gone on, and continues to go on, in a lot of these models. Now, getting back to how we get the models to respond correctly, or our definition of correct, I want to come back to something you mentioned.

Greg Benson:

You mentioned RAG and how that applies here. One thing I do want to mention is that a lot of business problems can be solved without RAG, but let's define what RAG is. RAG is a very popular technique. It stands for Retrieval-Augmented Generation, and it's really a fancy term for this: when you go to construct your prompt, with some task that you want to achieve, you additionally provide in the prompt information that's specific and relevant to the task you want to perform.

Greg Benson:

In the business context, this could be business documentation, it could be web pages or PDFs, it could be any content that you would like the language model to specifically use when completing the task that you describe in your intent.

Greg Benson:

And this is very powerful because, while the language models are trained on large amounts of information, they may or may not be trained on internal company data, so this is very useful to give it that context. The other reason why it's useful involves the so-called context window, that is, the amount of input (words, or "tokens" as they call them) that you can put into a prompt. That has been increasing over time, in some cases up to 200,000 tokens, which is quite a bit actually. But even still, the reason why RAG is useful is that for some uses you can't fit all the context into that prompt directly, and maybe you wouldn't want to anyway, because if you use the APIs there's a cost associated with them, and you can reduce your cost by only putting in the information that is most relevant. So there's a bunch of processes by which you can retrieve that relevant information and put it into your prompt.
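As a rough illustration of the retrieve-then-prompt pattern described here, the following is a toy Python sketch. Real RAG systems rank documents with embeddings and a vector store; the naive word-overlap scoring below is only a stand-in for that retrieval step, and the documents and query are invented for illustration.

```python
import re

def tokens(text):
    # Lowercase word set, ignoring punctuation.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return top k.
    (A real system would use embedding similarity instead.)"""
    q = tokens(query)
    ranked = sorted(documents,
                    key=lambda d: len(q & tokens(d)),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query, documents):
    # Only the retrieved context is placed in the prompt,
    # keeping it inside the context window and reducing cost.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Use only the context below to answer.\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

docs = [
    "Our refund policy allows returns within 30 days.",
    "The API rate limit is 100 requests per minute.",
    "Offices are closed on public holidays.",
]
rag_prompt = build_rag_prompt("What is the refund policy for returns?", docs)
```

The assembled prompt would then be sent to whichever model you use; only the retrieval and prompt-assembly steps are shown.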

Greg Benson:

Now, you were also sort of indicating that RAG might be able to help with avoiding hallucinations. Interesting. Just a side note: I'm not a big fan of the term hallucination.

Greg Benson:

Well, because the way I think about it is: a hallucination, if you talk about a person, is something that we experience. We could describe that experience, we could describe the hallucination, but the act of hallucination is perception, not generation. The term that you'll see people use, which I think hasn't caught on, is confabulation. Confabulation is when you're producing, saying, or writing something and things get confused, and that's what I think these language models are doing. But hallucination is pretty widely used. Anyway, getting back to your point: yes, we can do lots of things in the prompt, combined with the information, to help guide the model to pay attention to the information that we think is relevant and important. The thing is, because these models are probabilistic, mathematical entities, it's still challenging to get rid of potential anomalous or incorrect generation with 100% certainty. It's not a database, it's not a search engine.

Greg Benson:

It's not something where there's a data set and we do a quick SQL query and get back exactly what it says, and that's important for everybody to understand. In fact, with our GenAI App Builder clients, one of the first things we recommend everybody do is build out an evaluation framework, which you can do in SnapLogic directly, and we have patterns and help to do that. The point is, if you think about it, we're kind of in this gray area.

Greg Benson:

I don't like to anthropomorphize these models too much, but they're probabilistic, just like humans are to some degree probabilistic. Let's say we train humans on a certain task, some activity like judging whether a transaction is fraudulent or not, and you want the language model to do something similar. They're not infallible. But what we can do to mitigate errors, or hopefully get to a point where we're minimizing them in the task we're trying to perform, is establish a test framework that gives you confidence in the output that you're getting. The other reason you want a test or evaluation framework is that prompt engineering and the use of generative AI, you could call it trial and error, you could call it experimental.

Greg Benson:

You could call it science and the scientific process, whatever you want to call it. There is fiddling. There's "hey, I've written a prompt" or "I've built out my RAG system to give me this information," but there are lots of levers, lots of knobs that you can turn as you're developing some generative AI-based solution. So you need some grounding if you're going to turn those knobs. Because what's amazing is that you can give ChatGPT or Claude some data and get back these amazing results, and you're saying, wow, it can answer my question or it can summarize this data.

Greg Benson:

And we're very much in the early days; it was very anecdotal: oh, it did well on this, it did well on that. But as soon as you want to scale out a Gen AI application, you need some confidence in it, and the only way to get confidence is to spend time on a rigorous evaluation framework. So I think the short answer to your question is: just like with humans, I don't think we can eliminate the possibility that for some tasks it might give you a variation, maybe incorrect or slightly incorrect. We can't eliminate those possibilities, but we can minimize them.
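The evaluation framework Greg recommends can be sketched as a tiny harness like this. In practice the `model` callable would wrap a real LLM call; here a trivial keyword rule, an invented stand-in, keeps the sketch self-contained, since the harness itself is what matters: fixed test cases, a pluggable checker, and a score you can re-run every time you turn a knob.

```python
def evaluate(model, test_cases, check):
    """Run every (input, expected) pair through `model` and score it
    with `check`; returns the fraction of cases that pass."""
    passed = sum(1 for inp, expected in test_cases
                 if check(model(inp), expected))
    return passed / len(test_cases)

# Stand-in for an LLM-backed classifier (purely illustrative).
def toy_model(ticket):
    text = ticket.lower()
    return "billing" if ("invoice" in text or "charge" in text) else "technical"

cases = [
    ("My invoice shows the wrong amount", "billing"),
    ("I was charged twice", "billing"),
    ("The app crashes on startup", "technical"),
]
score = evaluate(toy_model, cases, lambda out, exp: out == exp)
```

Because the cases and checker are fixed, any change to the prompt or retrieval setup can be judged by whether the score moves, rather than by anecdote.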

Guy Murphy:

That makes perfect sense because, again, in everything from manufacturing onward, you actually design and test your environment's acceptable risk and error levels. But it's interesting that it's actually now formalizing, because we spoke about this a year ago over lunch, and the discussion was: how do you test models? And only 12 months ago, when I asked...

Guy Murphy:

You sort of looked at me quite quickly and went: we don't quite have that concept yet. So it's incredible that people are already closing that gap so quickly. Because we could talk for hours on these subjects: where do you think...

Guy Murphy:

From a business point of view, in this particular area of the solution space, are some of these technologies not applicable? You talked about RAG being a powerful thing. Are there things where you would quite openly say: actually, for this emerging marketplace, it's not suitable? Or do you think the flexibility of the platform means it just comes down to how rigorous the controls you put on it are?

Guy Murphy:

So let me make it more concrete. If we were going to a car manufacturer, I could absolutely see using these technologies to enhance the customer journey on the website, having the bots surface information, and going into the contract processing and driving down through that piece. But then if you were saying, right now we're going to plug this into a fully automated factory, the error level is such that you could actually build a car incorrectly.

Guy Murphy:

Where have you seen, with the research and the adoption, that kind of soft impact? I mean, the ultimate dream is that these types of engines could actually have this self-learning capability right through an enterprise. Where do you feel we are, only 18 months in?

Greg Benson:

Right, well, I think, as businesses start to adopt this, and they are, as we know, because our customers are adopting it, and as you can read story after story of how Gen AI is transforming industries, what you need is a couple of things. One is you need to gauge the level of risk, or level of criticality, of the type of task into which you're trying to insert a Gen AI-based solution.

Greg Benson:

And for a lot of businesses, as they determine where in the process they're asking Gen AI to perform some task, what a lot of companies are doing is not removing the human in the loop. Meaning that if you're in the medical industry and you're going to use Gen AI to assist in a diagnosis and then eventually recommend a remedy, well, there's an example of where the Gen AI can do a lot of work for you. And, by the way, there are great studies of Gen AI being very good at reading radiology, and of how successful it can be compared to human experts. But at the end of the day, when you get to a point where you're going to recommend a course of action, I think in those types of situations there's always going to be a human in the loop. Now, can we make that human's job go faster and more efficiently, give them more information, and actually let them use their expertise in a way that leads to even better outcomes? Absolutely.

Greg Benson:

Yeah, don't waste the valuable time of the expensive human. Right, and also provide that expert with additional signals that would help them synthesize a response: not only use, say, the Gen AI response itself, but have Gen AI highlight additional signals or information that should be considered. So I think it's an interesting question: what types of tasks can we hand over to Gen AI and feel confident in doing so? And, by the way, look, a lot of the use cases are around making the humans more efficient, right? Yes, absolutely.

Greg Benson:

And when you're talking about that, we're not talking about, oh, the Gen AI is going to be in charge of deleting user accounts if they violate our... I mean, maybe they would be, but the point is that, depending on the service, they can alert and then humans can make the final decisions. But there are plenty of tasks, like if you are in a particular line of business: we have some examples where you want to augment your Salesforce data with Gen AI-based research that can then help the sales team better focus and better analyze. It's automating a process that might have been time consuming previously; we shorten that time, then give the human the tools to utilize that information. So there are plenty of examples like that, where the AI isn't making the decision. The AI is helping humans with their decision-making processes, making a recommendation.

Guy Murphy:

You talked, just before we started recording, about how you're not just teaching your students answers about the domain they're studying; you're actually now adopting and embedding these tools and processes, a different process and a different culture, into your actual day job.

Guy Murphy:

Could you touch upon that? I found it fascinating, because I think it's different from the normal day jobs of most enterprise architects, but actually quite profound: you're seeing the beginnings of a role that is not the role it was even 12 months ago. And I think that's the challenge I'm seeing from a lot of IT professionals: we see a lot of possibility and probability, but from our conversation, I'd like you to touch on how it's not just giving you extra information to do your job, it's actually reshaping your job.

Greg Benson:

Gen AI is already radically changing higher education, and I can speak from my point of view as a professor about how students are changing their learning and how we as instructors are changing the practice of delivering education. I would say, again, we're still very much in the early stages, but I've used Gen AI to help create projects or parts of projects, to create exam questions, to answer exam questions, to generate very targeted guides that combine all of the topics that I'm focusing on for a particular project or concept in the classroom. So, on the one hand, as a tool for an educator, it's got vast possibilities. In fact, take the traditional book: even though I've kind of moved away from books generally, because I tend to be very project-based and build concepts around those projects, for me personally, to go learn, I'm engaging with LLMs. I've greatly reduced searching, like general search; I pretty much use things like Perplexity and OpenAI and Claude when learning and researching material. I expect students to be doing the same thing. And so, on the student side, the question, and a challenge for us, is: we have this amazing tool, but we also want them to learn, and the concern, of course, is, if we give them problems, do they use the LLMs in a way to derive their answers without actually learning?

Greg Benson:

There's lots of work both to embrace Gen AI and also to put up the guardrails to help students, of course. My favorite, of course, is pencil-and-paper exams, no technology. So there's one old-school way, and it's funny, because a lot of faculty are coming around to that, because it's one way to put the responsibility onto the student to make sure that they've learned the material. So it's radically changing the academic landscape, and I'm not an expert, I teach at the college level, but it's obviously having an impact even in K-12 education as well.

Greg Benson:

So I think your question got at: in education, how do I see things evolving and transforming? For me, it makes the job that much more exciting, because there's all this potential not only for me to learn faster or do research faster, but, and I'm somewhat optimistic here, for it to help students learn faster and better and more efficiently, and to meet students where they are in their learning. A quick little example: the great thing you can do with LLMs, if you're trying to learn something, is you can tell it what you know, what you understand, and you can ask it to explain something new in terms of the knowledge that you have. Analogy is a great way to learn, and everybody comes with their own knowledge or understanding, so you can explain your understanding, you can say, I'm learning this new concept, please help me. And it does an amazing job.

Greg Benson:

But just transitioning: I think you were asking me because of my role as an educator, but the same effect applies across the board. If you think of my students as my clients, and consider my role and their role and how we interact, the practice of education is just going to fundamentally change. You can imagine that in various industries it's also going to fundamentally change. Like you were saying, roles will change.

Greg Benson:

A topic that comes up is: will this eliminate jobs? I think this has been well discussed, and the full impact remains to be seen. But yes, when new technologies have been introduced over time, they required a reorientation of where people do work and how they do their work. Maybe this is a little bit too optimistic on my part, but I think it will allow humans to focus on what they're really good at, the things the models still don't have the capabilities to do. And I should say, I'm not an AGI optimist. I'm not sure that we will reach consciousness and morality and empathy and motivation. Will it be able to fake it? Maybe.

Dominic Wellington:

Right, but I'm not convinced of the business model for AGI.

Greg Benson:

Well, right, right. Anyway, I hope I've answered your question. Just like in education, as I've described it, I think it's going to have a transformative role in most every industry.

Guy Murphy:

Absolutely, and we've been talking about knowledge workers since I started my career 35 years ago. To me, this will be a new class of knowledge worker: an augmented knowledge worker. And so, Dominic, over to you.

Dominic Wellington:

Just to wrap this up, because it's been a fascinating conversation. But following on from that point: as we've seen this fear of AI, we've also seen some of the early hype already fail to deliver. In fact, I was at a Gartner event recently, and one of the Gartner analysts showed the famous hype cycle, with the peak of inflated expectations showing AI already starting to slide down. And at the bottom of that slope is the trough of disillusionment, where everyone throws up their hands and goes, oh no, this is never going to work; and then you see an upslope onto the plateau of productivity, where people figure out what this is actually good for in practice, in the real world.

Dominic Wellington:

What are the things it can do that are useful? And so I was wondering, as you were saying, in academia you're starting to see how this can be integrated into your work and your students' work and so on. What are the other examples of applications of Gen AI, and AI more widely, that are already hitting that plateau of productivity and that you think are going to be part of that future state?

Guy Murphy:

Could I actually add a slightly different part to that question? Gartner's made a very big assumption there, and Gartner's a very well-regarded analyst; they've been doing this sort of research forever. Do you think that's true? Because we started the conversation with: maybe we're at the very, very early days of AI. Are they potentially jumping the gun by saying AI is now past the hype and on the downswing? We don't know what the next wave of AI, the next wave of technology, the next wave of research will actually produce. Is it maybe too early to start putting it into the traditional IT lifecycle?

Greg Benson:

So, Dominic, to go to your first point: I would take issue with Gartner's view that we're already on the downslope.

Greg Benson:

Obviously we've had several rounds of AI technology booms and then several so-called AI winters. I would say at this current stage we're still in the beginning, and I would argue that, from an enterprise and business perspective, there's a lot of just getting an understanding of what you can do, and how you even get to a point where you can start experimenting. And it's interesting. Coming back to RAG: look, I think RAG is a good tool, but if you follow the blogs and the trends, people are led to "oh, I've got to do RAG." Well, yes, for your use case RAG might be a very good thing to do, but there are so many other things you can tap into with Gen AI without going down the RAG path. And I should say, we mentioned RAG, but we haven't even mentioned vector databases and embeddings in this interview, right?

Guy Murphy:

We will invite you back for maybe a whole new topic.

Greg Benson:

But the point is that I bring those up because, again, those are useful concepts and constructs, but there's so much you can do outside of them by just what we're calling prompt engineering: constructing prompts injected with data, potentially with examples. There's just so much you can do. Another way I like to describe it is: you can think of the prompt as the program, meaning that something that would have been very difficult using traditional machine learning techniques, or nearly impossible to encode as a traditional function in a conventional programming language, can now be specified in natural language, with lots of nuance in how you want the language model to respond. And this is extremely powerful. And I would argue, again to counter this potential Gartner view, that companies haven't tapped into this capability as much as they probably could. One of the reasons for it, and to plug SnapLogic here, is meeting the technology worker with the right tooling to unlock that. Meaning, you can do whatever you want with programming, and there are lots of open source libraries, LangChain being one of the early and popular ones. But even that...

Greg Benson:

That is a world that a lot of very capable, very technology-focused workers don't necessarily want, or need, to have to program in. But they know what they want to do.

Greg Benson:

They want to express what they want to do in a prompt, and they can give it data. In fact, when we consult with customers, one of the best things is when a customer comes to us, shares their screen, and says: hey, I copied this data from my spreadsheet or from my database, I wrote this prompt, I put it into ChatGPT, and it's giving me this incredible answer that is spot on. I want to operationalize that, I want to make that work: I don't want to have to cut and paste this data, I want to put it into my enterprise and get that value repeatedly. And that's exactly what we can do with the GenAI App Builder capability in SnapLogic. So I think part of the adoption issue is meeting people, with their skill set, with the right tooling, so that we can unlock these Gen AI applications more broadly. And I think, once the tooling becomes more widely available, companies are going to just explode in terms of finding use cases and increasing efficiency and productivity.
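The "prompt as the program" idea from a moment ago can be sketched in a few lines of Python. The instruction, the record fields, and the task are all hypothetical: the point is that the natural-language instruction carries the logic that would otherwise be code, while the records are the data injected into it. The assembled prompt would then be sent to whichever model you use.

```python
import json

def prompt_program(instruction, records):
    # The instruction is the "program" in natural language;
    # the records are the input data serialized into the prompt.
    return (f"{instruction}\n\nData (JSON):\n"
            f"{json.dumps(records, indent=2)}\n\nResult:")

program_prompt = prompt_program(
    "For each customer, flag accounts inactive for over 90 days "
    "and draft a one-line win-back note in a friendly tone.",
    [{"name": "Acme Co", "days_inactive": 120},
     {"name": "Globex", "days_inactive": 12}],
)
```

Changing the instruction string changes the "program" without touching any code, which is what would make a task like this nearly impossible to encode as a conventional function.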

Guy Murphy:

Excellent. So, on that incredibly positive note, thank you for your time.

Greg Benson:

Thank you, I've enjoyed the conversation.

Dominic Wellington:

Thank you to all our listeners as well, and if you like what you're hearing, do join us on the Integration Nation community. There are threads for each episode with show notes and links to interesting materials that will help you go deeper into the topics we've discussed. But, as ever, see you next time.
