The Enterprise Alchemists
The Enterprise Alchemists is a new podcast for Enterprise Architects to have honest and in-depth conversations about what is relevant to our world. Expert guests provide additional context on the topics of the day in Enterprise IT.
Your hosts, Guy Murphy and Dominic Wellington, are Enterprise Architects at SnapLogic, with more decades of experience between them than they care to admit to, and the stories that go with it.
Taming AI Tool Sprawl: from siloed features to a shared AI layer across the enterprise
We explore how embedded AI in SaaS tools amplifies sprawl and risk, and outline a practical path to governed, composable, cross‑platform agents. Chris Ward shares real use cases for Customer 360 and finance reconciliation, plus metrics, onboarding and versioning that keep AI accountable.
The LinkedIn post that kicked everything off
More details on the finance use case Chris discussed
SnapLogic's own support for MCP
Dominic Wellington: Hi and welcome back to the Enterprise Alchemists, a podcast by SnapLogic for enterprise architects and those who are curious about enterprise architecture. As ever, I'm joined by my co-host Guy Murphy. Hey Guy.
Guy Murphy: Hi there.
Dominic Wellington: And we have a returning guest, Chris Ward, but with a new title this time. What's your new title, Chris?
Chris Ward: Hey everyone, I'm Chris Ward. I'm the Director of AI for the Centre of Enablement at SnapLogic.
Dominic Wellington: Excellent. And as I said on LinkedIn when Chris got his new title, I can't think of anyone better to take this role, because Chris has been with SnapLogic for some time working directly with customers, but has also been experimenting pretty intensively with AI and how to apply it to day-to-day tasks to make his life better and the life of his customers better. So this is just a recognition of what he's doing. And that's why we wanted to get Chris back on, because there's a lot of conversation right now about exactly how we can integrate AI, gen AI, agentic AI into enterprise architectures at scale. It was a post on LinkedIn that kind of exemplified all of this; I'll put the URL directly into the show notes. The post is interesting: Nithan Shapira is saying AI won't fix tool sprawl, it amplifies it. He's talking about the difficulties in basically running workflows on data that is scattered across multiple tools, which leads to a few consequences. The various AI features embedded in all of those tools, because each one of them is adding its own AI features, only see partial context, so you get shallower recommendations. Automations break on handoffs between tools, so you might not get the benefit of automation that you're expecting. And your governance and audit trails fragment, so you're actually at higher risk. And so he's talking about how you can consolidate a view into a shared data layer, such as (hi!) SnapLogic. That post, and many, many conversations like it, was the genesis of this conversation. So Guy, you had some thoughts. Do you want to lead us off?
Guy Murphy: Yeah, so everyone's incredibly excited about what we're seeing with AI, though obviously there are some controversies around it. I'll be honest, there's now not a customer that's not looking at AI to some degree. But this conversation is becoming a real accelerator, because we have customers like Siemens Healthineers and AstraZeneca who, quite openly and logically given their industry positions, are developing their own AIs: they have large-scale AI data science teams building specialist models which they can control and govern end-to-end. What we're also seeing is the mass market now looking at the public AI platforms out there, the OpenAIs and others. But the tipping point this year seems to be the imposition of AI capability by the SaaS providers and the platform providers, where we're starting to see Salesforce and others, and this is not an insult to any of these vendors, it's a logical move for them. But now you're getting these AI services that are virtually not optional, and, going back to this point, we're introducing non-deterministic systems across classic business process landscapes. I'm starting to hear a lot of leading architects saying: we have no governance for this, and yet we have no choice or control over it. So this really does feel like a strange combination of both opportunity and threat to IT strategy. And this discussion is really to frame the beginnings of a wider context: how does the IT landscape, how does the CIO or CTO, cope with a technology that is not just disruptive, but is actually being levered into their landscapes, directly to their business owners and business customers, at scale? Chris, in your new role you're obviously talking to many of our leading customers who are adopting AI today. How does this reflect what you're seeing out there?
Chris Ward: Yeah, it certainly confirms what we're seeing within our own business teams, in how we're thinking about building agents and incorporating AI into business workflows, as well as externally with customers. There is certainly a proliferation of AI now across a number of different domains. As you say, we see vendors and SaaS platforms embedding AI within their own capabilities; good examples are tools like Zendesk and Salesforce. I think the challenge there is that you become siloed in what you can actually achieve beyond those systems. So it does raise an architectural question: how do you unlock that value and think about cross-platform AI, and leverage it in a more holistic way?
Guy Murphy: It's an interesting time for everyone, because in the traditional data strategy domain this was end-to-end data tracing, this was data lineage; these concepts have been around for 25 years plus, and there was a very clear understanding of how to manage these silos and understand what resides where in the landscape. And the answer, with the shift towards modern cloud data warehousing, was to get the key data into a cloud data warehouse or data lake, depending on what year you're talking about, to then build a unified view. From my point of view, the difference is of course that AIs aren't just storing data: they are actually generating subjective change, and more importantly, creating their own interpretation of the data, which is obviously their value. But how will a modern architecture cope with the fact that when information goes into Salesforce, you're not just retrieving the customer golden record? It's potentially surrounded by a vast amount of contextual redefinition, new data sets that weren't actually driven by a human being. And how do you then move across five, six, ten, twenty different services that have these different AIs generating arbitrarily different interpretations of the data, which will also change over time? And for everybody out there: stable databases have been around for 20 years, whereas with AIs it feels like there's a new release of capability every four to six months, and we're all obviously waiting for the new OpenAI service to be launched in a matter of weeks from now. So it's not only going to be about understanding where the data sets came from, but why a data set was created, and that is also moving at pace. I'm going to open the floor to both the gentlemen on this podcast. Dominic, you've obviously got a rich heritage in the data storage and data management space. How do you see this working out as we start looking at it from a non-classical data management point of view?
Dominic Wellington: It's an interesting question, because the assumption has mostly been that the data generated by Gen AI systems is not going to be stored for the long term: it's generated live and discarded pretty promptly. But as you say, that may start changing as AI becomes more ambient, as we move away from the default ChatGPT model of Gen AI, where there's a chatbot as a front end, and we move to AI in the back end, in the pipelines. We've already seen some use cases like that. Once you get beyond a few seconds of latency, a chatbot is not the right front end for these experiences. But when you're interfacing with enterprise systems, you may have latency that is significantly longer than that; you might even be going into some sort of batch process. And so it potentially makes sense to start thinking more permanently about that data and how it works. And that's even leaving aside the new challenges of vector databases, or vector indexes on traditional databases, that you use to expose something like RAG. The key, as usual with AI, is, as our CEO loves to say, that AI dies of data dehydration. All of those projects that fail to get into production usually fail because they don't have access to the existing data and the existing systems. But that's the problem we have now. As that adoption happens, we're going to move to a different set of problems: how all of these different things can integrate with each other, and what new needs will emerge from a shared agentic AI layer that spans across everything. And I don't know that we've seen that yet. What do you think, Chris? Are you starting to see movement in that direction?
Chris Ward: Yeah. One thing we've started to think about, as we've rolled out internally within SnapLogic a set of agents that are looking to replicate and substitute some of the work our teams are doing, is the concept of a metadata registry around these systems that incorporate AI, or that we extract data from and supplement with AI within the agents themselves. It's about finding a common, clear language so we can discover and understand the purpose of these systems and the data we're deriving from them, and it also supports observability of those agents. It's all about having a common language across the agents themselves and making them a bit more discoverable, so we can think about what the inputs and outputs are.
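To make the registry idea concrete, here is a minimal Python sketch of what one entry in such an agent metadata registry might look like. Everything here, the AgentRecord class, its field names, and the in-memory catalogue, is an illustrative assumption, not a SnapLogic API or product feature.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One entry in a hypothetical agent metadata registry."""
    name: str
    purpose: str        # plain-language objective of the agent
    owner: str          # accountable team or person
    version: str
    model: str          # underlying LLM and version, for audit
    inputs: list[str] = field(default_factory=list)   # systems it reads from
    outputs: list[str] = field(default_factory=list)  # artifacts it produces

# The registry itself is then just a discoverable catalogue of these records.
REGISTRY: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    REGISTRY[record.name] = record

register(AgentRecord(
    name="customer-360",
    purpose="Summarise account health and churn risk across support, CRM and usage data",
    owner="AI Centre of Enablement",
    version="0.3.0",
    model="example-llm-v1",
    inputs=["Zendesk", "Salesforce", "platform usage metrics"],
    outputs=["churn risk score", "account summary"],
))
```

The point is discoverability: any team can query the catalogue to learn what an agent does, what data it touches, and who owns it.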
Guy Murphy: So Chris, you talk about needing a metadata registry. Obviously that's been a concept in data management for decades. What makes it different when we're applying canonical data definitions and unified data structures to agents rather than just to databases and data stores? What is it about this agent factor that needs to be considered through a different lens?
Chris Ward: For me it's the fact that agents, or AI specifically, bring the ability to incorporate decision making, or intelligence, around business decisions that need to be made in and around the data itself. It's less about what this schema means or what this field in the database means; it's about the actions and the outputs of the AI specifically. It's really looking to understand the persona or objective of the AI: what kind of decision is it looking to take, and what path is it going to go down? It opens up a much broader spectrum of possibility.
Guy Murphy: Can you give us a more concrete use case so the listeners can understand where you're coming from? We've spoken about this outside of the podcast a good few times. If you think about the invoice processing work and some of the internal customer analysis work we're doing, could you walk us through those two use cases in light of this growing challenge?
Chris Ward: Yeah, a great example of this is some of the work we're doing around building out a Customer 360 agent, which pulls on a number of different dimensions and data sets we have within our organization to tell us holistically what the situation is for a specific customer. That cuts across our support ticketing system, which is Zendesk; our CRM, Salesforce, which has all the information relating to renewals, opportunity and account data, industry metrics, that kind of thing; and also our platform consumption metrics: how is the customer engaging with and using the platform, are they trending up or down on usage? So if we think about those three very different sets of data, we're taking the insights these systems provide and having AI make informed decisions: analyzing the churn risk of a specific customer, say, or looking at the sentiment around some of the support cases and how that relates to engineering activity. You've broadened the spectrum of possibility there. So, going back to my original point, it's about what the commonalities across these different systems are, and where AI can fit in and augment some of the decision making and outcomes, and about being able to do that in a way where we can easily understand and discover what those possibilities are. As we move into different use cases within the business, we're starting to see overlaps, where other areas of the business can reuse some of the work we're doing within that agent specifically. So surfacing it, and providing easy ways to find and understand what the agents are doing, is something I think is really important going forward.
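As a rough illustration of the Customer 360 pattern Chris describes, the sketch below gathers context from the three systems he names and hands it to a model for one informed judgement. The fetch_* helpers, the llm client, and the prompt are hypothetical stand-ins, not real SnapLogic or vendor APIs.

```python
# Sketch only: the three fetch_* helpers and the llm client are assumed.

def fetch_support_tickets(account_id: str) -> list[dict]:
    ...  # e.g. Zendesk: open tickets, severity, sentiment

def fetch_crm_record(account_id: str) -> dict:
    ...  # e.g. Salesforce: renewals, opportunities, account data

def fetch_usage_metrics(account_id: str) -> dict:
    ...  # platform telemetry: is usage trending up or down?

def assess_churn_risk(account_id: str, llm) -> str:
    """Combine the three views and ask the model for one informed judgement."""
    context = {
        "tickets": fetch_support_tickets(account_id),
        "crm": fetch_crm_record(account_id),
        "usage": fetch_usage_metrics(account_id),
    }
    prompt = (
        "Given this account context, rate churn risk as low, medium or high, "
        f"and justify briefly:\n{context}"
    )
    return llm.complete(prompt)  # hypothetical LLM client call
```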
Dominic Wellington: Reusable components, what a concept. AI continues to recapitulate the last 40 years of EA best practice, but at least we're doing it at speed this time around.
Guy Murphy: So we're obviously talking about a lot of governance and control here. You've talked about a metadata registry, and I envisage that meaning, when capturing this data, being very clear about separating out the traditional data sets that are being fed into and stored in these systems from, as you said, the more interpretive, subjective, returned data sets that will still need to be stored and audited. We've all heard of cases in the public domain where AIs are now actually making these decisions within the business process, so it's about tagging that data directly, to be very clear that it's a different class of data.
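One lightweight way to make that "different class of data" explicit is to tag every stored record with its provenance, so AI-generated interpretations are never mistaken for the human-entered golden record. A minimal sketch, with invented field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Provenance:
    """Audit tag distinguishing human-entered from AI-generated data."""
    source: str                   # "human" or "ai"
    system: str                   # originating system, e.g. "Salesforce"
    model: Optional[str] = None   # model and version, if AI-generated
    created_at: str = ""

def tag_ai_output(system: str, model: str) -> Provenance:
    """Build the tag to store alongside any AI-derived record."""
    return Provenance(
        source="ai",
        system=system,
        model=model,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```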
Dominic Wellington: Yeah, this goes back to the conversation, Guy, that you and I had with Eugene Lenetsky, when he was talking about the difference between syntactic integrations and semantic integrations. We're moving from something that's defined very much by the particular format the data is in, by the technical aspects, to something that focuses much more on what it means: what it means to the business, what the value is. And that's a shift that I don't think we've seen fully play out.
Guy Murphy: No, absolutely. Chris, with a lot of the other work of you and the wider virtual team you're managing, how do you see the onboarding of new AI services emerging? Obviously you have to be careful of our internal confidentiality, but I'm intrigued, because we're a classic tech-driven SaaS provider ourselves, and you're doing a lot of work near our more traditional business process systems. How are you envisioning the ability to roll capability in and out correctly, and to make sure we've got a clear chain of ownership and insight across these different services? What's your point of view on that?
Chris Ward: Great question. As we look at moving into more of an agentic landscape, where we're building out potentially digital workers to take on some of the mundane activity and bring intelligence into business workflows, we've started to think about having more concrete onboarding steps for an agent. Think of a traditional software development lifecycle approach: we want to understand the purpose of the agent, what it's doing, and what the risks and potential attack surfaces are that the agent might expose, which could bring challenges around security and compliance. It's really about getting ahead of that early on. So as we start to scope out the purpose of new AI capabilities, we assess them against a concrete checklist, and let that drive whether we feel it's suitable for the business and doesn't expose us too much from a risk and compliance perspective. And then there's also your typical versioning, and a thought process around rolling it out once it's in production. Thinking beyond the development and QA cycles, as the models evolve and the needs of the agent evolve relative to the business, we want to be able to gate that in a way that's safe, and potentially roll it back if it runs into issues. That's how we're thinking about it, certainly from an agentic angle.
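A minimal sketch of what such an onboarding gate could look like in practice. The checklist items are examples drawn from the conversation, not a prescribed SnapLogic process:

```python
# Hypothetical onboarding gate: an agent is only promoted towards
# production once every checklist item has been signed off.

ONBOARDING_CHECKLIST = [
    "purpose and scope documented in the agent registry",
    "data sources reviewed for confidentiality and compliance",
    "attack surface assessed (prompt injection, tool misuse)",
    "accuracy KPIs agreed with the business owner",
    "rollback plan defined for model or prompt changes",
]

def gate(agent_name: str, signed_off: set[str]) -> bool:
    """Return True only if every checklist item has been signed off."""
    missing = [item for item in ONBOARDING_CHECKLIST if item not in signed_off]
    if missing:
        print(f"{agent_name} blocked; outstanding items: {missing}")
        return False
    return True
```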
Guy Murphy: What are your thoughts on how you measure the quality of the output? That seems to be a massive discussion in the marketplace, because, as Dominic said, we're now moving beyond "here's an agent sitting across three or four data stores, bring me the data" to actually saying "interpret this data, give me a subjective statement". What sort of frameworks are you using today to look at the output of these agents and say: actually, that is right?
Chris Ward: For me it really ties back to the business objectives. Very early on in the process, align with the business stakeholders; it's not just a technical conversation. We're not just building something and delivering it, we're tying it back to measurable business value. It's really about setting out a set of key performance indicators, measurements of success, that we can continually refer back to, not just at the end when we deploy and run it in production, but through the lifecycle of development and QA. To give you a concrete idea: as you know, we built a finance reconciliation agent that our finance teams use at month-end close. It effectively parses customer order forms and reconciles that data against the data we have in Salesforce, which is our system of record for opportunity and sales data. Quite early on, we wanted to lay out what success looks like for this particular agent. We wanted to achieve a degree of accuracy in the reconciliation that we, or rather the finance team, were comfortable with. So we captured those metrics, and we built a really nice dashboard on top of the agent that we could continually refer back to as we evolved the different data sets we were bringing in, and the prompt, because that changed over time as well. So, going back to the essence of what I'm trying to say: really understand what those metrics are, what the business outcome is that you're looking to achieve, and find some concrete measurements that you can continually measure against as it evolves.
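To illustrate the kind of metric Chris is describing, here is a sketch of an accuracy KPI for the reconciliation agent: compare the agent's extracted rows against the Salesforce system of record and track the match rate over time. The field names and the threshold are invented for illustration:

```python
# Sketch of the accuracy KPI behind a reconciliation dashboard.

def reconciliation_accuracy(agent_rows: list[dict], sor_rows: list[dict]) -> float:
    """Fraction of agent-extracted rows that exactly match the system of record."""
    if not agent_rows:
        return 0.0
    sor_index = {(row["order_id"], row["amount"]) for row in sor_rows}
    matches = sum((row["order_id"], row["amount"]) in sor_index for row in agent_rows)
    return matches / len(agent_rows)

# e.g. alert when a new prompt or model version drops accuracy below
# the level the finance team agreed to.
ACCURACY_THRESHOLD = 0.98
```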
Dominic Wellington: So again, going back to what I was saying, this is really the acceleration of things that were best practice in the previous world. I've said many times that there's no point doing some sort of technical project unless you have a good idea of what would happen if it succeeded: how would we know, how would we measure that? What's interesting with AI is how this goes beyond being just the business case justification, the ROI calculation; it's also a way of evaluating the correctness of the outputs, in a way that wasn't true in previous generations of automation. If you ran a script, the script would produce an output, and it was deterministic. With Gen AI it's not always quite so deterministic, and so the same processes we would previously have used to calculate a return on investment for a project are also how we evaluate it for correctness. That's a fascinating evolution, and potentially a way of driving best practices: the things we know we should be doing, we should be eating our vegetables, we should be flossing, and yeah, yeah, it's a busy world out there, you don't always get to those things, until it becomes "no, no, it won't work unless you do these things, and if you don't, bad things will happen". So, in that vein, how are you thinking about some of the new protocols that are starting to emerge to manage this explosion of AI functionality, as every tool and every vendor adds AI support, but only for their own little pod, their own little silo? How are you thinking about protocols like MCP or A2A, which promise, depending on which one or which combination gets adopted, to make it possible to tie all of this together in some sort of agentic layer that spans all the different silos?
Chris Ward: We're starting to think about how we can use them; I mean, it's emerging, right? We're still quite early on in the deployment of it. I see it as an opportunity to help, going back to my original point, identify the purpose, the scope, and the task of the tools that we're looking to expose to these agents. It provides a cleaner interface and approach to achieving that than we had previously. And again, going back to the traditional development lifecycle: we can now think about versioning in a way that was quite challenging previously, and we have a clearer, easier way to discover the different capabilities we'll expose as tools to the agents, and to build them in a more composable way. If we think about how it worked in the SOA world, it's akin to that, isn't it? And then if we think about interoperability across different vendors, it's a cleaner interface for achieving that, and for delegating between different agents as we move into more complex agent architectures, where there are potentially handoffs between multiple agents. It gives a cleaner way of passing off the context, and of defining how those agents will communicate with one another. So that's how we're starting to think about it. There's no tried and tested approach yet, because it is a relatively new thing, and things may change in the next 12 to 24 months in that domain, but it gives us a good starting point to start addressing some of these challenges, and lets agents speak a bit more of a common language, if you like.
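As a concrete illustration of the pattern Chris describes, here is a minimal sketch of exposing one in-house capability as an MCP tool, assuming the reference MCP Python SDK (the `mcp` package); the tool itself and its logic are hypothetical.

```python
from mcp.server.fastmcp import FastMCP

# Expose an in-house capability as a discoverable, typed MCP tool, so any
# MCP-capable agent can find and call it through a common interface.
mcp = FastMCP("finance-reconciliation")

@mcp.tool()
def reconcile_order(order_id: str) -> dict:
    """Reconcile one customer order form against the Salesforce record."""
    # ... in-house logic that was previously a black box, now behind
    # a versionable, discoverable contract.
    return {"order_id": order_id, "status": "matched"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```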
Dominic Wellington: Oh, agreed. Although the one component that I see missing from a lot of these architectures is the central repository, portal, discovery and governance capability that we all as an industry agreed was necessary with APIs. I think we're going to end up coming back to that with agents too; it won't just be peer-to-peer connections everywhere. Although, of course, we would say that, working for SnapLogic: we're very much in that business. But I do think there's a clear need for it. And in the same vein, let me bounce a thought off you. I'm not sure that we'll see MCP, A2A, whatever it ends up being, becoming the only, or even the main, way that commercial, industry-standard systems end up communicating with each other. Where I see it being very useful is in exposing the sorts of in-house developed tools, or highly customized tools, that have previously been a bit of a black box, just because of the complexity of integrating with them and their one-off nature. If these protocols end up being an abstraction that makes it easier to integrate those sorts of systems, then they'll bring them into the light, so to speak: make them capable of being managed and integrated, and of delivering additional return on the investment that went into their creation. What do you think?
Chris Ward: Completely agree. Being able to have a more consistent way of communicating between the different tools and capabilities is much needed. Up until now, like you say, we've seen businesses building their own agent capabilities and maybe struggling to expose some of that to the external world in a way that's easy to discover and understand. So being able to lean on a protocol, or a number of different protocols, that bring that common language and an easier way to communicate is a welcome thing.
Guy Murphy: Yeah, and I'm going to sound a little bit cynical: I'm pretty certain the Oracles, the IBMs, the Microsofts will have skunkworks teams trying to build their own protocols, because that's just how the industry rolls. We've basically got two early emerging protocols out there; they're both very early days, but maturing quickly. I don't think there's any track record in our industry that says other people aren't going to start building, extending, or just thinking "I can do this better". So I fully agree. I think we're going to see multiple AI-style interaction patterns, protocols, and vendor stacks over time. We're in those nice early days where we've got lots of unknowns, with some interesting early adopters in the marketplace.
Dominic Wellington: Yeah, but as we know from previous waves, it's not always a first-mover-advantage situation. So we shall see. This has been a very interesting conversation with the two of you, and I think we've come up with some interesting insights as to where this space is going: preventing all of these new AI features from staying as black boxes, and coming up with an architecture that's composable, observable, and governed, spanning all of the different AI features that we have today and, as we just said, the ones that will emerge tomorrow, because we're still very, very early days in this industry. Thanks for coming on, Chris. It's always interesting to talk to you, because you keep us grounded in what's going on in the real world. And thanks as ever, Guy, for helping the conversation move along. For the audience, there will as ever be a transcript and a comment thread over on the Integration Nation; I'll put a link in the show notes for that, as well as to some of the work Chris has been doing and a couple of other additional resources that might be useful as a follow-up to this conversation. So thank you both, thank you very much, and we'll talk to you all again soon.