The Enterprise Alchemists

How Semantic ETL Redefines TIME ITSELF And Enables The Fully Automated Enterprise — with Gene Linetsky, CTO of Embroker

Dominic Wellington & Guy Murphy — SnapLogic Season 3 Episode 1

Welcome to Season Three of the Enterprise Alchemists podcast!

We start the season with a special guest: Gene Linetsky, CTO of Embroker.

Gene talks to Guy and Dominic about his need to integrate better with existing systems and processes to avoid wasting the time of over-qualified human users. AgentCreator from SnapLogic solved the technical problem, thanks to what Gene describes as "semantic ETL", but that leads to more philosophical questions.

Over the course of the episode, Gene describes how agentic flows are ultimately a management question: humans will still be a lot better positioned to organize the work, whether it is carried out by humans or by AI agents. The goal is to facilitate the work of solving business problems with code, not just solving coding problems.

Dominic Wellington:

Hi and welcome back to the Enterprise Alchemists. I'm your host, Dominic Wellington. I'm here with my co-host, Guy Murphy. Hey, Guy!

Guy Murphy:

Greetings as ever from across the Atlantic!

Dominic Wellington:

And we're joined by a guest today, so Gene is here from Embroker. Gene, why don't you introduce yourself for the listeners?

Gene Linetsky:

Hello everyone. My name is Gene Linetsky. I've been the CTO at Embroker for the last year and a half and I'm very happy to be here.

Dominic Wellington:

So thank you very much for joining us.

Guy Murphy:

Maybe if you could actually spend a little bit of time explaining what your company does, what your journey was, obviously from traditional IT moving towards your thoughts around using AI within your strategy.

Gene Linetsky:

Great questions. Those are the questions that probably every company is asking itself right now, and Embroker is not an exception. We've been asking this question pretty much from the time I joined the company, and by now we've made some progress. But first, a little bit about Embroker. Embroker is one of the few insurtech companies that is laser focused on solving the last remaining problem in commercial insurance, and in our opinion it is indeed a last mile problem, because the carriers in commercial insurance are, for some reason, collectively sitting on probably $10 trillion of insurance capacity for the commercial world.

Gene Linetsky:

And then on the commercial side, on the demand side, the demand is unlimited and guaranteed, in the sense that every new company, every new startup, can't even start operations and can't even get the check from the VCs until the VCs see that they have the main risks of a startup covered.

Gene Linetsky:

So they have to present the insurance certificate, literally.

Gene Linetsky:

And so the challenge for the industry, which Embroker and companies like us, which are very few and far between, are trying to solve right now, is this last mile problem: how to make this insurance capacity, which is unlimited, available to the hundreds or maybe thousands of small startups and medium-sized companies around the world that are still operating like it's the last millennium.

Gene Linetsky:

They pick up the phone and talk to some broker. Brokers are wonderful people, but they may have relationships with maybe five or ten carriers in their entire careers. Embroker is trying to build a system where we can, in real time, assemble the right coverage for the right company from any number of carriers around the world, and those can be hundreds or thousands. It's not dissimilar from what Amazon did when it said: people want to read books. If you lived far from a big commercial center, you probably had to order a book online or by phone (I'm talking about 20 years ago) and maybe get it a few months later. Right now, any book published anywhere in the world, you can get on your doorstep two days later. That's what we want to build for commercial insurance.

Dominic Wellington:

A little bit like an enterprise equivalent of those B2C places where you can go to compare car insurance or whatever.

Gene Linetsky:

Essentially, except of course the complexity in the commercial world is much, much higher than you might encounter when looking for a personal policy, car insurance for example.

Dominic Wellington:

Of course. Yes, enterprise is always tougher to do, so that makes sense. There's a demand there, a need to have something like Embroker facilitate these things. So how does that lead to here, to SnapLogic? What was it that you were trying to do?

Gene Linetsky:

Well, to continue the theme that I started with: if the supply is unlimited and the demand is guaranteed, then the real play for a company like ours is the efficiency of connecting supply with demand, just like Amazon did, and Uber did more recently, and many other companies that are really in the business of last mile delivery of something that has unlimited supply.

Gene Linetsky:

But it's the interface between the business and that supply that needs refining and dynamic adjustment all the time. So we are constantly looking for ways to shorten the time between the visit by a prospective customer and us figuring out what risk coverage they need, immediately providing a way for them to apply for insurance, and ideally, in the same session, within several minutes, presenting several quotes. Some of those could be from our partner carriers. Some of them would be bespoke insurance products that Embroker itself provides, because as a company we are a hybrid: we are an insurtech, but we are also a licensed broker, and we're also a so-called MGA, a Managing General Agent. So we have the right to assemble the right coverage dynamically, in real time, across the insurance industry from many different carriers, including our own products, and the efficiency and speed of this process is the game we're playing.

Dominic Wellington:

For sure, so that you can accelerate things compared to having to pick up the phone and have a conversation with somebody.

Gene Linetsky:

Absolutely.

Dominic Wellington:

Okay, so for someone who's not familiar with this industry, so that means that you have interfaces and feeds from the big reinsurance houses and that you have to consume those and assemble those, as you said, in real time.

Gene Linetsky:

Precisely. Sometimes customers will even bring us their existing applications, or even quotes from third parties, and that's where we're getting into AI territory, because we need to very quickly understand what those applications, or quotes from competitors or from our potential customers' existing providers, actually are. It used to take several days for our team to understand what the existing coverage is, to try to assemble competing and better coverage for the customer from our partners and our own products, and then present it to the prospective customer, essentially as a PDF or a bundle of documents. Right now we've built a system with SnapLogic, and that's why I'm talking to you, where this process is done using LLMs, orchestrating the existing best models for understanding how PDFs are formatted and what information is in them, in order to understand what the competing coverage is and assemble our competing coverage in real time.

Dominic Wellington:

So consuming both the automated feeds and the unstructured data coming from the PDFs, potentially, and who knows what else at this point.

Gene Linetsky:

Precisely. I actually want to follow up on your use of the word unstructured. The beauty of LLMs is that they came online maybe a couple of years ago and are just starting to become ready for prime time, for production.

Dominic Wellington:

It's crazy to remember, with the amount of hype, that this market is only a couple of years old.

Gene Linetsky:

Exactly, exactly. And despite people constantly asking each other, what's your use case, what's your business case, we see that, objectively, the amount of money that the enterprise is spending on LLMs has been growing exponentially for the last couple of years. People are really getting true value out of this. I can tell you, just skipping ahead, and probably we're going to talk about this a little bit more: our little pilot (by using the word little I don't mean it wasn't a great effort, but it only took two months) managed, within those two months, to demonstrate for this one narrow case an ROI of 10x of what we were spending before on that particular process. We'll talk about it later in the conversation, but it just goes to show that the utilization of LLMs is a true multiplier right now in the enterprise, and I was really happy to have this partner show my company what's possible, because this process is unfolding around us and inside every single company.

Gene Linetsky:

The beauty of this technology, or maybe the challenge, is that it's evolving much, much faster than ever before. If you remember, it took maybe 20 years for companies to realize the ROI on personal computing, on just putting personal computers on every single desk. They started buying them in bulk in the last years of the last millennium, but the actual payback didn't occur until much later, and we had to endure the dot-com bubble collapse and many other bad events. But right now the business value that many, many companies are starting to extract from the LLM ecosystem becomes positive ROI within a year, or even several months. And that's the challenge, in the sense that every company has to adapt at the speed of the industry, and that hasn't happened before.

Dominic Wellington:

If you are making investments and getting the return inside of a year, from the point of view of a company, from the point of view of the CFO, that's effectively free money, and that's always the grail, when you can get free money, right?

Gene Linetsky:

Precisely, precisely.

Guy Murphy:

So, before we move on to digging into the pilot that you did with us, because of course a lot of people would like to know what you did and how you achieved it, and that time to value on the project was very impressive, I'm going to make a generalized comment. Insurance, traditionally speaking, has not really been at the cutting edge of change management and adopting new technologies. Even being an insurtech company in this new market, how did you talk to your leadership about not just bringing in technology but potentially, effectively, changing the entire concept of how the insurance processes would work in this new world? And did you get a lot of pushback? Or did you have that great luxury where everyone says, yes, let's just adopt it and move on?

Gene Linetsky:

Great question, and of course I didn't have that luxury. Really astute observation. And that's again because most companies are not used to having to adapt so fast to new technology, and by now they realize that if they don't adopt this technology fast enough, then somebody else will, and they will go out of business. The way it worked at our company is that I realized early on what we needed to do, and basically that's a feature of our business as it was set up. Many insurtechs say: we don't care about the legacy industry, we don't really care about the way it's been done for the last thousand years, we're going to do things a completely new way. And many times they're overstretching themselves and they don't have enough knowledge of the industry they're trying to disrupt, to disrupt it effectively.

Gene Linetsky:

Our company is kind of schizophrenic in that sense: we are half startup and half traditional, really experienced people from the industry, and that gives us the advantage of not making stupid rookie mistakes in the industry.

Gene Linetsky:

But at the same time, as you said, we have to convince ourselves as a company, many times over, that what we're doing makes sense. And so in this particular case, what I had to do on the previous project (it wasn't with SnapLogic, it was just an internal project) was to say: guys, remember, we're trying to connect unlimited supply to guaranteed demand. What are the building blocks and pieces that we can improve? So take one piece, and I, or the engineering team, will show the company how that one piece can be streamlined and improved by a factor of 10 or 100. And I deliberately picked a very small, very narrow case, the ingestion of third-party applications, as I mentioned before, as one of those cases, because the ROI was so easy to demonstrate. It used to take humans several hours to look at and understand every single application.

Gene Linetsky:

And not inexpensive humans, because they have to understand what they're reading. They have to be industry savvy; they need to understand what they're talking about. They essentially have to have the entire industry data dictionary in their heads, not just from our company, but from all the carriers, from all the other potential participants in this process. And when you switch this process to LLMs, they not only syntactically understand that field X in application Y is the same as field Z in application W; they can automatically translate one set of answers into another, from one data dictionary to another data dictionary. And this process, which I'm calling semantic ETL, is one of the biggest advancements in the LLM arena.
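To make the idea concrete, here is a minimal sketch of what that semantic mapping step could look like. The `call_llm` helper, the field names, and the JSON contract are all hypothetical illustrations, not Embroker's or SnapLogic's actual implementation:

```python
# Sketch of "semantic ETL": ask an LLM to match one carrier's application
# fields to our own data dictionary, then re-key the answers mechanically.
import json

def build_mapping_prompt(source_fields, target_dictionary):
    """Prompt asking the model for a semantic field-to-field mapping."""
    return (
        "Map each source field to the closest field in the target data "
        "dictionary. Answer as JSON {source: target}.\n"
        f"Source fields: {json.dumps(source_fields)}\n"
        f"Target dictionary: {json.dumps(target_dictionary)}"
    )

def translate_application(application, field_mapping):
    """Re-key one application's answers into our own data dictionary."""
    return {field_mapping.get(k, k): v for k, v in application.items()}

# In production the mapping would come from the model, e.g.:
#   mapping = json.loads(call_llm(build_mapping_prompt(fields, target)))
# Here we show only the deterministic translation step, using a mapping
# such as the model might return.
mapping = {"annual_turnover": "gross_revenue", "staff_count": "employee_count"}
app = {"annual_turnover": 2_500_000, "staff_count": 40}
print(translate_application(app, mapping))
```

The point of the split is that only the fuzzy step (deciding that "annual_turnover" means "gross_revenue") needs the model; the actual data movement stays deterministic and auditable.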

Gene Linetsky:

People sometimes complain about hallucinations. People complain about models not precisely following the prompts. But for some reason, and it's really interesting, nobody is complaining about the LLMs not understanding you. You have the distinct feeling that this thing understands you. Obviously it doesn't, but you have this feeling because semantically it can really map what you're asking to its own compressed knowledge. So the advent of semantic ETL is one of the biggest use cases that companies are exploiting right now, and that was the case that convinced our company that this new technology can deliberately squeeze out inefficiencies from every single step of connecting unlimited supply to guaranteed demand.

Dominic Wellington:

By bringing that semantic understanding. I love that: "semantic ETL", I'm going to use that, I'm going to steal it. And there's another parallel that I loved in what you said, because at the technical level, obviously SnapLogic is a great fit for that, and we'll talk about that some more. But there's also a philosophical level, because at SnapLogic we always talk about how we are the sweet spot between the old guard, who were good in a previous generation of technology but maybe aren't keeping up with the pace of evolution, and the new startups, who are native to the new wave but don't really have an understanding of the old tech that is still out there, deployed and running businesses for our customers. And we have a foot firmly in both camps. So that's very much parallel to what you're saying about Embroker.

Gene Linetsky:

Absolutely.

Guy Murphy:

So that's probably the perfect segue. Could you share with us, and obviously our listeners, a bit more technical detail about what the pilot did and what the challenges were that you overcame? You've already touched upon the business outcomes, but it would be really interesting to understand how you approached it, how you were able to adopt the technology into your landscape, and how it supported your case, given this hybrid culture that you navigate on a daily basis.

Gene Linetsky:

Great questions again.

Gene Linetsky:

So for this project that we've successfully completed with SnapLogic, we picked a slightly more involved and slightly less obvious use case, and that's why it became obvious that internally we didn't yet have enough expertise, or even enough of the mindset, to pull it off ourselves. So our sister company Wayfound, which is in agent orchestration and observability, pointed us to SnapLogic, one of their partners, and we happily connected with the company and explained what we were trying to do. And that leads to the direct answer to the question you just asked.

Gene Linetsky:

Again, looking at every single workflow that connects unlimited supply to guaranteed demand, we were examining certain processes and deciding which of them, and in what order, need to be, and can be, automated. For this particular one, we picked the workflow that connects our team of underwriters to every opportunity that comes in our door, either through the website or maybe through other means. Sometimes people still pick up the phone and call Embroker as if it were just a traditional broker. What happens in those cases is that, once we understand what the company is and what it does, the team of underwriters has to collect all the publicly available information, and maybe sometimes ask follow-up questions of the prospect themselves, in order to decide whether Embroker, or the constellation of carriers, has enough appetite and will be in a position to cover those risks in a way that is beneficial for the customer without losing money for us. That's an important consideration, obviously, for any insurance company.

Dominic Wellington:

Only the ones that want to stay in business. Yes, that's right.

Gene Linetsky:

So all we had to do was look at what those people do, what our small but strong underwriting team does. And what we discovered is that most of the time (not most of the intellectual, cognitive energy they're spending, but sheer time) goes into waiting for the browser to load yet another piece of information so they can figure out whether this company is worth insuring by us, at this moment in time, with this coverage.

Gene Linetsky:

Those pieces of information might be company classifications, information about their funding story, information about comparable companies in the market, the particular coverage that they want, and what the incidence of claims for that coverage might be in a particular area and a particular industry. All these data points would go into some combined view that one of the members of the underwriting team analyzes. And by the time all the information is collected, it takes them maybe a couple of hours to make the decision, just by thinking. We're not automating the process of making that decision, because from the regulatory point of view, and even from our peace-of-mind point of view, that's not there yet.

Dominic Wellington:

That's what we have the experts for! It might be automated in a few years, but right now it's not. But you can avoid wasting the experts' time on pulling information manually from repositories, waiting for the browser to update. Goodness, in this day and age!

Gene Linetsky:

Precisely. Not only that: this process only kicked off once the underwriting team got the name of the next prospect. So they get the prospect, then they have to spend at least a day, maybe a couple of days, gathering all the information, and it was all done by hand. So we thought: why don't we identify the types of information and the types of checking humans do, and try to build agents that will do exactly the same thing, but automatically, and before a human even takes a first look at the prospect? That's what SnapLogic did for us, and most of the coding itself was done by SnapLogic. The result is that now, even before the first look at the prospective customer, the underwriting team already has a fully assembled information dashboard with all the data pieces that previously would have taken a couple of days to gather.

Dominic Wellington:

That's fantastic: bringing it all to their fingertips, so once they sit down they can immediately get to work and start providing value, start earning their salaries. The classic centaur use case, where the AI assists the human.

Gene Linetsky:

Exactly. By the way, there was a sweet side effect of this process. It went into production two months after we launched the project, so from absolutely nothing to this result in two months, and the result is highly measurable, measurable in the most meaningful sense for us. It used to take, on average, several days for the underwriting team to decide whether a prospect is insurable by us, and now it takes two days, because there are some other processes involved. But we've saved at least 50% of this timeline. What it also means is that people who get a quote from us much earlier have fewer days to shop around.

Dominic Wellington:

I was going to say: if you're doing it in two days and your competitor down the street is doing it in five days, and I get a quote from you that looks like a good quote, I'm not going to wait around for those other guys forever.

Gene Linetsky:

Exactly, and even more so because the people who are savvy enough to shop around tend to be higher quality prospects in the first place. So this way, not only are we getting quotes and policies to customers much faster, we're also getting a much higher quality of customer that we are in a position to insure.

Dominic Wellington:

I love this: LLMs providing a true competitive advantage, right away, in your business model.

Gene Linetsky:

Precisely. Just by eliminating the non-essential steps that were being performed manually by overqualified people.

Guy Murphy:

Yeah. For the pilot, what type of architectural components, or at least challenges, were you facing? Private LLMs or public? What type of governance were you considering wrapping around these, especially if you're going down the public model route, and obviously once you start moving into production? I guess I'm asking the boring, operational question: how do you go from a lab environment into production mode? Because this is a conversation we're having with a lot of customers, and you've obviously been able to move at extreme pace with this, successfully. And we've all heard the reports from Gartner that a lot of customers are caught in this chasm of: we don't actually know how to turn it on.

Gene Linetsky:

The difficulty in the second case (and that's why we honestly decided that we needed external help) was that in the first case, our workflows were fully, 100% reactive. Somebody uploads a PDF, we know what to do, and that's the trigger. We take this one document, process this one document, extract the information, map it to our data dictionary, and we're done. For the agentic flow, we didn't have enough expertise, because an agentic flow is really three things. One is the data pipelines that figure out how to get data from external and internal sources, and SnapLogic figured out how to help us even within our own environment.

Gene Linetsky:

The second is the agentic action itself. Somebody has to tell certain agents to do certain things, and we didn't know exactly how to do that. We could have learned, obviously, but it would have taken a lot longer than two months for the entire team to get acclimated to this way of thinking. And number three is how to present the results of the whole thing. In our case, the underwriting team heavily uses Salesforce, and even though initially we thought there might be a separate portal where all this information about the potential customer is gathered before the underwriting team gets to it, I realized: why do this? Why not just surface the information they need in the Salesforce views they are already accustomed to?

Dominic Wellington:

Why create a new UX?

Gene Linetsky:

Right. And so all three aspects of this were kind of wobbly for us. It was the first time we had done any of those things, and SnapLogic helped us in every one of these three categories. Data pipelines, figuring those out, that's really your expertise.

Dominic Wellington:

That's our meat and potatoes, exactly.

Gene Linetsky:

Bread and butter, yeah. So, figuring out how to effectively build the pipelines: from scraping websites, from internal sources like the Snowflake instance we have, from places like Crunchbase and a few other places where we can get public information about any company, and from the websites that provide you the industry classification, or try to. So that's one thing. The other thing is figuring out how to make the agents check each other's work, and that's the important part. That is only possible with agentic flows, and we weren't clear on how to orchestrate it.
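The cross-checking pattern Gene describes can be sketched very simply: one agent gathers, a second agent verifies before anything is surfaced. Everything here is illustrative (the agent functions, the required fields, and the canned research data are hypothetical; a real research agent would call the data pipelines and LLMs, not return fixed values):

```python
# Sketch of one agent reviewing another agent's output before the result
# reaches the underwriters. Field names and data are illustrative only.

REQUIRED_FIELDS = {"industry_classification", "funding_history", "claims_incidence"}

def research_agent(company_name):
    # Stand-in for the gathering agent (web scraping, Crunchbase, Snowflake...).
    return {
        "company": company_name,
        "industry_classification": "SaaS",
        "funding_history": "Series B, $30M",
        "claims_incidence": "low",
    }

def reviewer_agent(profile):
    # A second agent verifies the first agent's output is complete; an
    # incomplete profile would be sent back for another gathering pass.
    missing = REQUIRED_FIELDS - {k for k, v in profile.items() if v}
    return {"approved": not missing, "missing": sorted(missing)}

profile = research_agent("Acme Robotics")
print(reviewer_agent(profile))  # a complete profile passes review
```

The reviewer here is deterministic for clarity; in an agentic flow it could itself be an LLM prompted to critique the researcher's output, which is where the orchestration expertise comes in.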

Dominic Wellington:

And finally, and that's still a very emerging, fast-moving area, putting guardrails on the agents to make sure that they do the right thing and don't wander off and get distracted, effectively.

Gene Linetsky:

Exactly. Actually, what you said makes me want to go off on a philosophical tangent. The problem of organizing software workers' work has always been a problem. That's really what executives and managers are trying to do: they themselves don't code, and therefore their contribution to their companies is organizing the work. Well, if you replace a bunch of developers, especially junior developers, with AI agents (and that's the trend right now, which has its own side effects, but let's put those aside for now) the problem of organizing the work still remains. And for the next several years, until the singularity at least, humans will still be a lot better positioned to organize the work of software entities. Those software entities were 100% human in the past, but now more and more of them are AI agents. The orchestration part, even if it's done with software, still requires a lot of human expertise, and that's why we went to SnapLogic for it.

Dominic Wellington:

And domain-specific knowledge. SnapLogic customers will know Integrate, our big event that we typically run in the autumn timeframe. Last year we ran this AI escape room there, and it was very well subscribed. I was running the one in London together with a few colleagues, and we had, I think, 70 or 80 people in the room. It was a big success.

Dominic Wellington:

The chief data officer of one of our customers, AstraZeneca, was there in the room with his right-hand man for technical aspects, and what we were doing was running through a very abbreviated version of the sort of scenario you described. Can we ingest some data from unstructured sources, from PDFs, and can we do some lightweight prompt engineering to extract some value? Just to give people a taste of what that might look like. And what was interesting was that the CDO, the exec who was used to organizing work, finished in about half the time of the classically trained IT person who approached it with a programming mindset: can I explicitly instruct this thing to do my bidding? No, that's not how prompt engineering works. It's a lot more about management, and the skills come much more from that side.

Guy Murphy:

Gene, thanks for sharing what you've been doing in the project. One thing it would be great to understand is a couple of points. In my head I've made an assumption that you probably did the pilot, the proof point for the project, with public LLMs. Do you see the public services being the backbone of an AI strategy, or can you envisage that you'd need to stand up your own AI factory, an AI platform team, to support what you're doing? Because, again, this is a question I'm starting to see from many of the larger organisations: do they have to stand up a fairly intensive AI foundry of their own, or will the public engines effectively eat that custom-built world?

Gene Linetsky:

Absolutely, and that's the question we grappled with for the initial phase of this project that we did with you guys, and also on previous projects. Our conclusion, in the end, is that line from one of my favorite jokes: you don't need to outrun the bear, you only need to outrun the grandma. And our conclusion based on that is that it's useless to try to compete with the frontier models, where humanity as a whole is throwing all kinds of money and, more importantly, all kinds of energy at the data centers, specifically at chips and at the energy consumption around them. It would be strange to try to outrun that.

Gene Linetsky:

My analogy for this is trying to build a data center yourself when you have AWS. Yes, in the giant, very specialized cases, where I'm myself trying to become a frontier company, it makes sense for me to build a data center, or if I'm a super secret installation run by the NSA or something. But just as the vast majority of companies, even large companies, even gigantic companies, in the end decided to standardize on one of the cloud platforms, I think the same economic playbook is going to apply to this area. And in that sense, I've been thinking about our podcast and what we were going to talk about, and I came to the conclusion that companies like yours provide the glue for orchestrating whatever the frontier models are doing or not doing, inserting themselves as an indirection layer. That layer not only shields me from changes in the models; the layer itself can be intelligent enough to recognize that maybe the new release of frontier model X is sufficiently different from the previous release that I need to intelligently, at least for a time, massage the queries coming from my system. Basically, in software engineering, as we all know, indirection is a good thing. It allows you to isolate modules, it allows you to upgrade modules independently, and it allows you not to worry about underlying changes in the core capabilities.

Gene Linetsky:

Another analogy with AWS in that sense is: look what really happened when people went from self-hosted data centers to the cloud. The hardware did change, but it's not changing that fast. What vastly improved is the amount of management around this purely hardware capability, between the bare metal and the utility that AWS or Azure provides to end users. Think of your AWS dashboard. It's massively complex, not because anybody wants to make it complex, but because the management of this capability, even of deterministic capabilities such as pure raw compute, is so non-trivial that it takes an entire company, which basically charges us not for the hardware but for the ability to manage the hardware. So I want the SnapLogics of the world to charge me for the management, for being the indirection between me and the fast-moving frontier models.

Guy Murphy:

Well, that'll be music to the ears of my marketing colleagues, so we'll take notes on that one.

Guy Murphy:

But you've literally just started going into the next question I had. I'm talking to a lot of IT leaders and architects, and, very much like yourselves, most organizations are now across the line of: is AI real, is there value to it? And we touched upon this earlier in the podcast. There are definitely some interesting challenges around plugging non-deterministic systems into deterministic systems and processes, notably in your use case.

Guy Murphy:

But one thing I've observed in the last six to nine months, and I'm starting to hear echoes of this from certain customers, is that we're seeing not just fast movement around the AIs; it's almost becoming insane, in the sense that new large-scale models are coming out three to four months apart now because of industry pressure. I was talking to one customer who is starting to kick off an SAP migration. It's a four to five-year project, and the whole strategy includes having AI around it. But now they're starting to worry: do they wait, do they try to keep up? There's a lot of pressure from the vendors, and even some doubt about whether some of the older models will still be supported by the time these things actually go live, at the end of the meta-project around it. What's your thinking, now that you've been doing this for a while, on coping with this quite extreme pace? Or do you think the pace will just slow down over time and it'll become another part of your standard IT upgrade process?

Gene Linetsky:

Well, to start with the end of your question: when you say "over time", time itself has already been redefined. When we say over time, or in the long run, the most we can possibly mean is two years. It used to be five, it used to be ten, it used to be a millennium; in the Middle Ages you could wake up 100 years later and not notice any difference, not that anybody tried to do this. But I think what's going to happen in the next couple of years is that people will realize that the only way to have the cake and eat it too is to turn the cake on itself, meaning that the only level of intelligence sufficient to deal with this fast pace of frontier model evolution is the frontier models themselves. So we have a problem of a magnitude that can only be solved by the technology that powers the problem in the first place.

Gene Linetsky:

There are first glimpses of this. Companies like OpenRouter are trying to orchestrate models and isolate, as we said earlier in this conversation, the consumers of intelligence, so to speak, just like AWS isolates consumers of compute from whatever reconfigurations or mishaps might happen in their data center. The same level of intelligence, times X, will probably be required to keep up with the pace and not succumb to... what is it, the fear of losing out?

Dominic Wellington:

The FOMO, the fear of missing out.

Gene Linetsky:

The fear of missing out. Sorry, I was looking for the abbreviation. So it might not be a satisfactory answer, because it doesn't give you a recipe for how to do this, but I think the general direction will be to use the power of the LLMs themselves to isolate the end user of knowledge and intelligence from the changes in the mechanics of how they access it. I don't think the nature of the knowledge and intelligence provided by these systems will change much, because in the end it's just aggregate human intelligence, just like, in the end, compute is compute. But the way to consume it is changing all the time, and the level of intelligence itself is changing, so we might as well use the intelligence itself to help us cope with it.

Dominic Wellington:

Would you agree, then, with a theory of mine that I've long espoused: when you get a wave of technological evolution, what was previously some complete outlier of a process becomes best practice, and then becomes table stakes just to be able to keep up, much in the same way as we saw with previous layers of abstraction like API management? The idea was to separate out the different levels and components so that you could talk to a thing, and the implementation of the thing could change on its own schedule, but it would present a uniform interface to you and you wouldn't be disrupted. If you're already doing API management, composable enterprise, and decomposed architectures properly, you'll surely be in a better position for all of this AI-led further acceleration.

Gene Linetsky:

Absolutely, 100%. The good news in this new era is that the new language of integration is English. The language of integration used to be API endpoints. There was some innovation in that area, like GraphQL, the ability to query API endpoints and get structured data in return, but it was never semantic, only syntactic. All the integrations to date have been syntactic. That's why there are companies who are proud of having hundreds and hundreds of connectors so that they can do so-called intelligent ETL. But now that the language of integration is English, we can plausibly expect those things to start talking to each other. And I think the emerging philosophy around this is agentic flows: explicitly, some agents checking the work of other agents, identifying the gaps, identifying exceptions, and trying to reason around them, just like a human troubleshooter would.
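The agentic-flow pattern Gene describes, one agent checking another's work and feeding findings back, can be sketched as a small loop. The agents here are stubbed with plain functions; in practice each would be an LLM call, and the "missing sources" check is a hypothetical review criterion, not a real product's logic.

```python
# Toy sketch of an agentic flow: a worker agent drafts an answer,
# a checker agent reviews it, and findings are fed back for revision,
# like a human troubleshooter re-briefing a colleague.
# Both agents are hypothetical stubs standing in for LLM calls.

def worker_agent(task: str) -> str:
    # Stub: on a revision pass (task carries "fix:"), include sources.
    if "fix:" in task:
        return f"revised answer with sources for: {task}"
    return f"draft answer for: {task}"


def checker_agent(task: str, draft: str) -> list[str]:
    # Stub reviewer: flag a gap if the draft cites no sources.
    issues = []
    if "sources" not in draft:
        issues.append("missing sources")
    return issues


def agentic_flow(task: str, max_rounds: int = 3) -> str:
    draft = worker_agent(task)
    for _ in range(max_rounds):
        issues = checker_agent(task, draft)
        if not issues:
            break
        # Feed the checker's findings back to the worker for another pass.
        draft = worker_agent(f"{task} (fix: {'; '.join(issues)})")
    return draft


print(agentic_flow("summarize Q3 claims data"))
```

The loop structure, draft, review, revise until the checker is satisfied or a round limit is hit, is the "agents checking the work of other agents" idea in miniature; the round limit keeps a disagreeing pair of agents from cycling forever.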

Guy Murphy:

That potentially makes sense, especially around that domain. We've seen the rise of MCP from, again, out of nowhere to a standard. I've been tracking it in detail and, a little bit like the wider AI discussion, I'm almost having to look at it every ten working days because the specs are moving so quickly, as people try to work out how to take it beyond Anthropic. This is not a criticism of Anthropic, but it was blatantly an internal technology stack for a very particular use case, and now it's in the wild. People are now coming back going: that's great, but where's the rest of it, if we're going to make it the new backbone for AI integration? But it's definitely getting traction. So yes, everything you said makes a lot of sense to me around these sorts of areas.

Guy Murphy:

Dominic, I know we're starting to close towards the end of the podcast. Any more questions from your point of view?

Dominic Wellington:

No, that last point about abstraction is pretty much where I wanted to get to. As you say, new emerging formalizations like MCP, A2A, or whatever somebody is working on right now, and no doubt there are any number of these things, are something to help enable what we talked about right at the beginning of the conversation: semantic ETL. That concept has really stuck with me, because it's what we've been reaching for. We've been trying to do it by enriching more and more of these formal interfaces, but perhaps the answer is simply talking to the machines in plain English.

Gene Linetsky:

Absolutely. The other thing I would mention here, and this is directly applicable, if I were SnapLogic, to my business future, is that, yes, frontier LLMs are trained on the vast majority of human knowledge that is publicly accessible and expressed in text, and a little bit in pictures too. But what about the implicit knowledge and the processes that every single company uses internally? That is not published widely enough for any meaningful training to occur.

Dominic Wellington:

Not even internally very often.

Gene Linetsky:

Exactly. It's like the famous line by IBM: If only IBM knew what IBM knows. Remember that one?

Gene Linetsky:

And so I think the critical mass of that information is being slurped up by frontier models right now, through companies like Jellyfish or Pharos, or through even more intrusive surveillance systems that some companies, as we know, deploy, down to counting employee keystrokes. From that body of information, sooner or later, we'll be able to deduce and derive the internal company processes. And if this happens with the help of a mediator like SnapLogic, then you can turn around and say: people have been talking about best practices in the industry forever; we can actually box them up and bring them to you, because we have learned factually what actually happens in companies, not what they say happens. This happens in this company, and these are the business results. You can't argue with that connection between how the company operates and what its last couple of quarters looked like. Harvard Business School case studies are not enough for me to implement them; that's why I go to McKinsey and pay them a million dollars to translate the business study and business case into my case.

Dominic Wellington:

To make it actionable.

Gene Linetsky:

Exactly. But if this can be done automatically, and it will be in the next couple of years, because the economic demand for it is probably infinite, then we can enrich that semantic ETL with the internal company's understanding of what the ETL is for. Because it's not an end in itself, getting data from external sources and mapping it to my data dictionary at zero expense; the question is why I'm doing it. I'm using this information in a particular business process, and how it's used right now is translated into requirements by engineers who might not even understand the larger implications for the business, or how the business truly operates, especially in a large company. So I think that's the last step before we can reasonably talk about not just junior engineers having nothing left to do, but also seniors, who solve business problems with code, unlike juniors, who just solve coding problems.

Gene Linetsky:

And then the last line of defense is executives, whose mandate is to translate what the stakeholders want, what the board wants, what society wants from this business, into those, most of the time implicitly, defined processes that bring value to the world. To me that's the most exciting part of where the industry is going, and I'm looking forward to the companies that will emerge. Just like AWS emerged out of a simple online trading system to become the universal provider of compute to the world, I'm looking to see the universal providers of wisdom, of business wisdom, to all kinds of verticals, at which point the final frontier will fall and the CEO can be automated.

Dominic Wellington:

Fully automated enterprise. There we go. You heard it here first.

Dominic Wellington:

So, thank you so much, Gene. You've been very generous with your time. It's been a fantastic conversation, and I love that we wound up where we started, with the idea of these central applications of AI facilitating the human experts and freeing them from the drudgery and the toil so that they can focus on the value-added parts of the job. It's great when you can close the circle like that. So, for our listeners: Gene Linetsky from Embroker. Thank you so much for your time. If you need any insurance brokerage services, you should look them up; they have some cool ideas, as I hope you've gathered over the course of this conversation. But for now, thank you so much for listening, thank you, Guy, for joining me once again, and thank you one final time, Gene.

Gene Linetsky:

Thank you, Guy, thank you Dominic, it was a pleasure. Thank you so much.

Guy Murphy:

Thank you.
