
Security Unfiltered
What If AI Took Over Your Data Security Tomorrow?
In this episode, Joe sits down with Gidi Cohen, a cybersecurity expert with a rich background in the Israeli 8200 unit, to explore the evolving landscape of data security. They delve into the challenges of managing large data sets, the impact of AI on cybersecurity, and the innovative solutions offered by Bonfy AI. Whether you're a seasoned professional or new to the field, this conversation offers valuable insights into the complexities and opportunities within data security. Tune in to learn how to navigate the ocean of data and protect your organization's most valuable assets.
00:00 Introduction to Gidi Cohen and His Background
01:49 The Role of 8200 Unit in Cybersecurity
04:25 Transitioning from Military to Industry
11:32 Identifying Problems in Data Security
16:00 The Challenges of Data Management in Organizations
23:58 The Challenge of Data Classification
26:59 Understanding Context in Data Security
29:44 Adaptive Learning in AI Solutions
32:22 Proactive Risk Mitigation Strategies
34:57 Integrating Data Security Across Platforms
37:33 The Future of Data Security Solutions
Bonfy ACS is a next-gen DLP platform built for the AI era, combining contextual intelligence and adaptive remediation to secure sensitive data and enable AI innovation at scale. With high accuracy and out-of-the-box policies, it delivers fast time to value while reducing false alerts and investigation overhead. Trusted by regulated organizations, Bonfy ensures compliance and integrates seamlessly with Microsoft 365, Salesforce, Slack, and Google Workspace.
Speaker: Gidi Cohen, CEO and Co-Founder of Bonfy.AI
https://www.bonfy.ai/
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Affiliates
➡️ OffGrid Faraday Bags: https://offgrid.co/?ref=gabzvajh
➡️ OffGrid Coupon Code: JOE
➡️ Unplugged Phone: https://unplugged.com/
Unplugged's UP Phone - The performance you expect, with the privacy you deserve. Meet the alternative. Use Code UNFILTERED at checkout
*See terms and conditions at affiliated webpages. Offers are subject to change. These are affiliated/paid promotions.
How's it going, Gidi? It's great to get you on the podcast. We've been working on getting this thing going for a while, and I'm really excited to hear about your background today and dive into the problem that you're solving, because the problem you're solving is pretty relevant to some of the work I'm encountering right now, which I'm finding out is just a huge ocean of a problem.
SPEAKER_01:I can imagine. So, first of all, Joe, thanks for having me here. It's my pleasure. Looking forward to a great discussion.
SPEAKER_00:Yeah, absolutely. Well, why don't we start with your background? How you got into IT, how you got into security, what made you want to go down that path? Was it something that interested you? What does that look like?
SPEAKER_01:Yeah, so I would say it started when I was in high school, actually. I always loved programming, math, solving complex problems. I started to deal with cryptology almost as a hobby. A couple of years after that, I joined the Israeli military, like every Israeli, and joined the 8200 unit, the Israeli NSA. And it happened, which I didn't know when I was recruited, that I got to the same place, dealing with the same problems as when I was a kid, on the cryptology side. Served for X amount of years and then moved on to the industry. But I would say that cemented my love for data analytics, complex algorithmic problems, and of course cybersecurity, which was not the terminology used then, it evolved along the way, but cybersecurity types of problems and finding the right solutions for them. So I always loved it, as I said, since I was a kid.
SPEAKER_00:So talk to me about the crypto side in the 8200 unit, right? I've had a lot of 8200 people on. And I know you can't tell me any specifics, obviously. I don't want you to, don't tell me anything you can't, right? But what is that like for crypto? Can you talk to me a little bit about what that looked like? Were you creating crypto algorithms? Were you deploying it in harsh environments? What is that?
SPEAKER_01:Yeah, so as you said, I cannot share much, but I can tell you that we dealt with code cracking at the end of the day, right? Similar to what the NSA does. At the end of the day, there's encrypted communication of some sort, and the job is to find systematic ways, at scale, to crack it. So that's what we were dealing with.
SPEAKER_00:Hmm.
SPEAKER_01:Many years ago, and my information is probably very, very outdated by now, but that's how my career started, actually.
SPEAKER_00:Yeah, it's fascinating to me, because you always think anything digital, like the NSA, the 8200 unit, Russia, China, they could just break into it, you know? You just kind of assume that as an outsider, right? And then you read a story about how the NSA and the CIA came together and created a company in Germany that was creating these devices to sell to our adversaries with a backdoor built in, right? Because we couldn't figure out how to break the crypto that everyone is using. It's not necessarily that proprietary; it's out there, everyone's using AES-256 and stuff like that. So to go to that extent, to go to that length, to me that's pretty extravagant, right? It kind of shows you the difficulty that they're facing.
SPEAKER_01:Yeah. No, I agree. At the end of the day, the NSA, CIA, 8200, GCHQ, all of them are intelligence agencies, right? So they go a long way to find the best sources of information, to deal with whatever their country sends them to do. And it's complex, it's tough, many things are impossible, so not everything can be done. But there is a lot of creativity, which I think is one of the reasons, not just for myself, that, as you know, a lot of Israeli founders come from 8200. It's kind of a unique place where on one hand you are dealing with very complex problems, many of them not possible to solve, or you don't know ahead of time whether they will ever be solved, which leads to a lot of creativity, a lot of opportunity to experiment with technologies. And that's, I think, one of the reasons there are so many startups started by founders who actually served there: there's so much training, so much experience in dealing with the uncertainty of technology and its ability to actually provide a solution at the end of the day.
SPEAKER_00:How often would you say you would be handed a problem where you would come back and say, yeah, we can't do X, Y, and Z, we have to go another route with it?
SPEAKER_01:If you're talking about my service, then a lot. I would say it's not just about getting to a dead end. It's more about taking projects where you don't have a clue if you'll ever be able to be successful with them at all. Because you just don't know, right? And sometimes it takes a while, months, years, to figure that out.
SPEAKER_00:How's it going, everyone? Before we continue on with this episode: this episode is sponsored by Bonfy AI, as you probably guessed. But as always, that doesn't mean they told me what to say or anything like that. They simply believe in the podcast and the product that I'm putting out, and they wanted to support the podcast. So Gidi came on, and all of the questions are unscripted, as always. They have a fantastic product that I think will help a whole lot of companies out there, because I know firsthand how big of an issue data security and data governance can be for companies. So, with that, please enjoy the episode. This was a fantastic conversation, and please check out Bonfy AI in the links in the description of this episode on whatever platform you find it on. Thanks, everyone.
SPEAKER_00:How did you develop the skill set to actually figure that out? I ask because when people are getting started in cybersecurity, I always recommend that they start on help desk, right? Because you get a lot of different experience on help desk. And one of those factors with help desk is you learn very quickly how to identify a problem that you're not able to solve, that someone else on your team can probably solve, or that you have to go to Google for, or whatever it might be. But you learn immediately: I don't know this, I haven't encountered it, I haven't seen it, I don't know what it is. And that's a really critical point, because you're using your time and resources efficiently, which is what you need in a help desk environment, to turn over these problems as quickly as possible, no matter what the environment is. How long did it take you to develop that kind of skill? And what tools did you use to be able to say, okay, this is definitively something I can't do? Does that make sense?
SPEAKER_01:Yes. But I would say there's a big difference from the upbringing, as you said, of security professionals and analysts who start with the help desk and learn it from the ground up. A lot of what you need to do is supplement it with research. A lot of the way to handle the unknown is research of different sorts, right? That's where you're developing new techniques, new concepts, new ideas, and some of them, almost by definition, are not the ones that are going to work in the end. So I would say it's a mix of a lot of different skills, but also an environment that supports you in actually doing something, spending months sometimes on it without knowing whether you're taking the right approach or whether the problem is solvable at all. Which, rolling forward in my career in high-tech and startups, I think was a great background for how to tackle complex problems where it's not clear whether they are solvable in a practical way. And if you do solve them, you create a big technological moat, right? You solve some big problems for a lot of organizations, and it's super exciting, but in many cases it's just not clear upfront, despite the experience and despite a supportive environment and resources, whether you'll get to the right solution. But that's part of what excites me. It's part of dealing with the unknown.
SPEAKER_00:Yeah, it kind of gives you a bit of a rush, you know, when you're going into something that's unknown.
SPEAKER_01:It does.
SPEAKER_00:Yeah. You know, I'm getting my PhD right now, and not that my field is completely unknown, but there isn't a whole lot of material out there on what I'm trying to research. I'm pulling three critical areas together and trying to find the overlap that works, that meets the requirements, which is something completely different from every other level of study we've been taught. All the way through getting my master's degree, I was always taught: hey, you have a paper, it's due on this date, it needs to be on this topic, you have a project. And now I'm the one creating the tasks. I'm figuring out the topic, creating the tasks, and creating goals for the end of each class for what I need to deliver, and the university is just letting me do it. It took me probably a full semester, maybe a semester and a half, to actually figure that out, because I'm just sitting there like, well, what do we do? I don't know how to figure any of this out. And then my chair finally broke it down for me. He's like, you're the one that's deciding. You're researching. That's what researchers do.
SPEAKER_01:Yeah, so I would say, going back to my early career in the military service, that was a lot of what we did, right? We had the high-level missions, but what had to be done on a day-to-day basis, how to deal with them, what to develop, how to develop it, how to test it, there was a lot of freedom to operate. Which, again, was a great learning experience for startups, because many of them start that way: you have ideas for something you want to solve, at least on my end, ideas or problems that people did not solve before. You have some concept, you are trying to invent something from nothing, make it happen, and turn it into a business. So I think it's very, very similar. That's why I said earlier that the better analogy is actually to what you just mentioned, which is research. It's a mix of research and practical problem solving, putting that together, if you're looking at the startup in a business context. A multidimensional type of innovation.
SPEAKER_00:Yeah, absolutely. So where did you go after 8200? Where did you go to actually scratch that itch for research?
SPEAKER_01:So I worked a few years right after my military service. It wasn't very long, about five years, so longer than the minimum, but I was not a military career person, right? I got out as a young captain, I think, if I remember correctly; it was a while back. So I worked a few years in the industry, at a local company, but I always had the urge to start my own startup. Since I was a kid, as I said, I was developing a lot of code, and I just had to do that, even before I knew the term startup. It was clear to me that's what I wanted to do. And that's when I started to embark on it. I had a startup in what would be considered today the open source intelligence space, then started Skybox Security, which I ran as CEO for many, many years. Grew it to a pretty nice size, sold it to private equity, stayed a bit more, and left a few years ago. And my last gig, let's call it that, is Bonfy.ai, which I started in '24 along with my co-founder and CTO, Danny Kippin.
SPEAKER_00:So after the 8200 unit, you go and work for a normal company, right? I'm trying to compare and contrast, because in America, let's assume someone from the NSA stops working at the NSA and goes to work for a normal company, and somehow people there find out that they used to work for the NSA. That person is like gold. Everything that guy or girl says is like gospel; they can't do any wrong; you would think they literally walk on water. But in Israel, I feel like that would be completely different, because you walk into a place and say, yeah, I used to work for 8200, and they're like, oh good, we have a whole hall of 150 of you guys.
SPEAKER_01:Probably now more than when I started in my career, because of a few things. One, I never told anyone, not family, not my wife, no one knew where I served after my service, and I'm not talking about during my service. So now that I'm openly saying I dealt with code cracking, it took me probably 20 years even to say the words. No one knew; I said kind of intelligence corps, and that's it. So no one knew exactly where I served. And I think the units were smaller then. They were not small, but smaller than what developed over the last 20-plus years. So it was popular to see veterans coming from those units, but not as many as you see today. And definitely the visibility into where you came from and what we did there was zero. We never talked about it.
SPEAKER_00:And is that is that pretty typical? You said that you did like the minimum service amount. Is that pretty typical for three plus two?
SPEAKER_01:So it wasn't the minimum, but it wasn't very long, right? It was five years. Typically in Israel, depending on the year, it was two and a half to three years. That was the minimal service, at least for boys.
SPEAKER_00:And do a lot of people typically go that route? I would assume so, right? Because there's probably only a select few that stay in for multiple terms.
SPEAKER_01:So there are a lot who take, let's say, another year, two, or three of extension, like I did, right? Especially in the intelligence corps, special forces, air force, and the like. I really enjoyed the service there, but I didn't want to stay a single moment more than that. Nothing bad there; it was actually a lot of fun. We did a lot of great projects, and there were a lot of opportunities to develop. But I was just there to get to the industry, to develop products and start my own companies. So after five years I said, okay, it was a great experience, but enough for me.
SPEAKER_00:Yeah, that makes sense. I could imagine myself going one of two ways, right? Either I'm career military for the entire career, or I'm in there maybe one or two contracts, learning as much as I possibly can and then moving on. And I feel like that would be pretty typical, especially for this industry and this skill set. People in this industry need to constantly be learning and doing something new; it wouldn't be like them to stay in for so long. So afterwards, talk to me about how you identified the problem with large data sets that you're currently working to solve. Because I'll tell you right now: I was on a call with a customer a couple weeks ago, and they were talking about implementing Microsoft Purview for terabytes of data they had, on-prem, in the cloud, everything. And they told me they were starting from literal zero. I didn't think it was literal zero; I figured you have to have something turned on, you know? I get into their environment and it is literally zero. There aren't even permissions to use the services. And I'm looking at it, and it's the first time I've ever looked at Purview. Maybe an hour in, I immediately thought to myself, this isn't my specialty. There are things on the fringe of my specialty that I can get into, learn relatively quickly, make progress, and move forward, right? But this is beyond that. You don't just jump into the deep end with this thing; you need some guidance. And just looking at it, it's a minimum eight-month project. Minimum. There's no going around it. So the problem is massive, and I didn't even realize it was that big of a problem.
SPEAKER_01:Yeah. No, it's a great point you're making. Maybe let's generalize it a bit, and then we can get specifically to Microsoft Purview and the like, because I agree with you that they leave a lot to hope for, let's call it that way, from a data security perspective. But if you look at data security, it's one of the segments in the cybersecurity space that was the least served, in my view, over the last 15 to 20 years. There are hundreds of data security companies, hundreds of products, and I would say almost none help organizations address some basic needs: where they have sensitive data, being able to detect and prevent the right types of leakage or misuse of information, being able to quantify it from a risk perspective. There's just a lot lacking in those capabilities. Now, when we started to think about working on Bonfy, which stands for bona fide AI, it was when Gen AI started to take off. And we said, wow, there's a whole domain here, the data security domain, that is completely unready for AI. And when we dug even deeper, we said, okay, it's unready, period. I mean, data security solutions are not a fit even for the world predating Gen AI. They lack the ability to classify and accurately analyze content, and the ability to have a single platform that can be activated, let's call it that way, in a multi-channel way, consistently, and at scale. They are not adaptive. They don't have context they can actually analyze content with. They're just not a fit. And that's before Gen AI. And then we started to see what's happening with the adoption of Gen AI. Regardless of whether the adoption is "I'm going to use Microsoft 365 Copilot," or "I'm going to use ChatGPT," or "I'm going to use some embedded Gen AI in some other custom application," the problems are the same and very similar to each other.
The solutions are just not providing the visibility or accurate analysis, and therefore no practical way to detect and prevent those types of risks. Data in motion, data at rest, data in use: it's a super complex problem. And that's why we decided to start the company, Bonfy.ai. We launched a few months ago, already with customers in production, and we're making our inroads into the market. The more we get into it, the more we see exactly where things stand: organizations that you would expect to have some controls, maybe not great, maybe an initial implementation, maybe not the most sophisticated processes, have very, very little going. And I think the reality is that they are using tools and methodologies that are completely outdated. I mean, the concept, for example, of discovering everything, classifying everything, and asking humans to review the classifications on terabytes of data doesn't make any sense. No one will ever finish those projects. It can take eight months or five years, and by the time you finish, the classification is probably already wrong, because the context around each piece of content, a document, an email, whatever it might be, is completely off. So that's one issue. The second issue: with the adoption of Gen AI, more and more of the content is generated and used on the fly. Think about it: say I'm writing an email assisted by Gemini, or by Microsoft Copilot, or anything else. I'm not saving the email, letting some machine classify it, maybe having a human review it, and later doing something. It's created and shared instantaneously. Think about web use of, say, ChatGPT or similar sites, regardless of whether they are internal applications or commercially available applications. So the world is shifting from data at rest to data in motion. And even for data at rest, organizations and the vendors around them adopted methodologies that do not make sense.
They look very theoretical: of course, let's discover everything, we'll get great visibility, and with that visibility we can control. Really? Show me one organization that, after scanning 50 or 100 terabytes, understood what's there and took any meaningful action. Now think about your bank scanning its 50 or 100 terabytes. God knows you're going to find a lot of banking information in your S3 buckets and Azure data stores. What will you do with that? Of course you're a bank, and of course you have a lot of banking information. What happens then? Probably not a lot, right? So the whole world is changing: much more generated content, a shift from data at rest to data in motion, and a need for actual risk mitigation, visibility and risk mitigation, not just discovery. And a lot of the solutions, both old and new, just miss the mark there. That's why we started Bonfy.
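To make the distinction Gidi draws concrete, here is a minimal editorial sketch (not Bonfy's actual logic; all names and rules are illustrative assumptions) of why a label-only policy drowns a bank in alerts while a context-aware one fires only when sensitive data heads somewhere it does not belong:

```python
def label_only_policy(doc):
    """Legacy DLP: alert on any sensitive label, regardless of context."""
    return doc["label"] == "banking_pii"

def contextual_policy(doc, destination):
    """Context-aware: sensitive data is expected internally; alert only
    when it moves to a destination it is not authorized for."""
    if doc["label"] != "banking_pii":
        return False
    return destination not in doc["authorized_destinations"]

# A routine document at a bank: sensitive, but expected to flow internally.
doc = {
    "label": "banking_pii",
    "authorized_destinations": {"core-banking", "fraud-team"},
}

# Label-only fires on routine internal use; contextual fires only on the
# risky destination (e.g. a personal mailbox).
internal_alert = contextual_policy(doc, "core-banking")    # no alert
external_alert = contextual_policy(doc, "personal-gmail")  # alert
```

The point of the sketch is that the alert decision takes two inputs, content sensitivity and destination, rather than sensitivity alone.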
SPEAKER_00:Yeah. No, that makes a lot of sense, because I've been in the industry for maybe 10 or 12 years at this point, and I've spent a large amount of it in healthcare and the financial industry. And I can't name a single time where we knew where the data was, knew how it was classified, knew all the different tags, knew the policies around it. We kind of bought these tools, and it was a multi-year project that just never seemed to end. I've never seen an environment be that knowledgeable about their data. Most of the time, what they do is throw it into a file share or a database, encrypt it, put least privilege on it, enforce that hard, and that's it, because they don't know how else to scale with their data. Because, like you said, there's a manual process involved with these legacy technologies, where you need someone to come in and actually say, yeah, that's tagged right, that's classified right, let's write this policy around it. And there are millions of pieces of data in just a few terabytes. It's not an insane size we're even talking about.
SPEAKER_01:I agree.
SPEAKER_00:Yeah, for modern computing it's a manageable size, but then you look at the actual data, and it could be millions of pieces of data that you now have to classify somehow. And that person is going to do nothing else other than that.
SPEAKER_01:Yeah, and it'll be wrong by the time it's done. I would say it's almost inhumane to ask humans to classify it. No, seriously, so much data, and all of that without the usage context. Part of the move from data at rest to data in motion is not just because of the generation and use of Gen AI, Gen AI in web traffic, Copilot-type use, ChatGPT and the like; that's part of the issue. The second issue is that classifying a piece of content without understanding the context of the business and the context in which it's used is almost meaningless. Take the example I mentioned earlier, but let's shift it to healthcare. Say you want to find where you have PHI. You'll run discovery, and after eight months you'll see that you have PHI everywhere, okay? Because that's what you do as a healthcare provider. So what would you do with that information? The fact that you have sensitive data is not the issue. Every business has sensitive data; otherwise it probably shouldn't exist. If they have no proprietary information and aren't custodians of customer data, why are they even there? So the question is not whether there is sensitive data. The question is: is sensitive data leaking to the wrong place? Can it leak through the wrong sharing? So let's take an example now in the Purview and Copilot and Microsoft world. Say I'm using Copilot, getting assistance writing some document, which is great. It uses the Microsoft Graph to be exposed to everything I have access to, whether I should have that access or not. Copilot writes something nice, it looks good to me, I say great, I'm going to share it with a customer. Fine. I may leak a lot of sensitive data that way, say because I was a bit reckless.
I opened the share for the entire folder and the entire SharePoint site, not just for this file specifically, because I wanted to make sure the customer had the right access. I didn't consider that maybe a colleague of mine, or I myself, had put some other customer's data in the same place, data relevant only to that other customer. Who would know that? So the point is that the risk is not the fact that there is sensitive data; it's the fact that it's being shared, mistakenly, maliciously or non-maliciously, recklessly or not, with the wrong party. Regardless of whether the sharing is opening a share on data at rest, opening permissions for someone to get to it, sending it over email or chat, or taking a piece and putting it into some unsanctioned ChatGPT-style site or the like, that's the problem. And I think that's what the industry missed. Data security is not different from other security disciplines: it should be risk-oriented, it should have context. Without that, the problems will never be solved, in my view at least.
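The folder-oversharing scenario described here can be sketched as a simple check: given the files a share actually exposes and the customer it was intended for, flag everything that belongs to someone else. This is an editorial illustration under assumed data shapes, not Bonfy's implementation:

```python
def oversharing_risks(exposed_files, intended_customer):
    """Return the names of files exposed by a folder-level share that
    belong to a customer other than the one the share was meant for."""
    return [
        f["name"] for f in exposed_files
        if f["owner_customer"] != intended_customer
    ]

# Sharing the whole folder with Acme also exposes a Globex document,
# exactly the reckless-share case from the conversation.
folder = [
    {"name": "proposal_acme.docx", "owner_customer": "acme"},
    {"name": "pricing_globex.xlsx", "owner_customer": "globex"},
]
risky = oversharing_risks(folder, "acme")
```

The interesting part is not the loop, of course, but where `owner_customer` comes from: it requires a system that already knows which business entity each document relates to.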
SPEAKER_00:So you have an AI in the background that is doing what? I'm asking because you're a pretty young company, and from my limited knowledge of AI models, they get better over time because they're seeing more data, more use cases and situations, and so they're able to make more intelligent decisions over time. Did you find that to be a limitation, being so young and early on, because you have to train the model to some extent? Or how did you overcome that difficulty? Maybe it wasn't a difficulty and I'm wrong.
SPEAKER_01:Yeah, it's a great question. So, not because of that, but coming to think about it, we can actually answer that question as well. What we're doing is developing technology that does self-learning of the business context. We do it with unique technology that can go to structured and semi-structured data sources in the organization, understand entities, and basically create a big knowledge graph that helps us identify entities in context, along with other techniques. But let's focus on that for a second. It's basically adaptive learning technology that can learn extremely fast from the customer's environment, and of course solely for them. We never take the data, and we don't create models out of it, not for us and not for other customers; it's solely for that specific customer of ours. So it's very fast, it can take hours to a day or two to get the full business context, and it's updated automatically, so it's adaptive to the changing landscape of customers, employees, groups, stacks, whatever you have in the organization that provides the right context for the use of our solution. Adaptive, fast, customer-specific learning, so it has the business context of the customer. That approach makes our solution significantly more accurate: significantly fewer false positives, because it's all contextual, escalating only the right alerts. A lot of people think false alarms are the biggest issue in data security, and it's true, it's a big issue, and we do a significantly better job there thanks to the context. But I think there is an at least equally sized issue, if not bigger, which is the false negatives.
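A toy version of this self-learned business context might look like the following: build an entity index from structured records (a CRM table here, purely as an assumed example) and use it to resolve entities mentioned in free text. This is a drastically simplified editorial sketch of the idea, not the actual technology:

```python
from collections import defaultdict

def build_entity_graph(records):
    """Index entities from structured data: lowercase name -> attributes.
    A stand-in for learning business context from CRM/HR/ERP sources."""
    graph = defaultdict(dict)
    for row in records:
        graph[row["name"].lower()].update(
            {"type": row["type"], "account": row["account"]}
        )
    return dict(graph)

def entities_in_text(text, graph):
    """Resolve known business entities mentioned in a piece of content
    via naive substring matching (a real system would do far more)."""
    lowered = text.lower()
    return {name for name in graph if name in lowered}

# Assumed CRM rows; rebuilding this index as records change is what
# makes the context "adaptive".
crm = [
    {"name": "Acme Corp", "type": "customer", "account": "A-100"},
    {"name": "Globex", "type": "customer", "account": "A-200"},
]
graph = build_entity_graph(crm)
mentioned = entities_in_text("Attaching the renewal quote for Acme Corp.", graph)
```

Once content is mapped to entities like this, policies can reason about who the content is *about*, not just what patterns it contains.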
I think 80, 90% of data security incidents are completely missed, not because the solutions are inaccurate, but because with today's solutions you cannot even express the type of things you need to protect against, not even talking about whether they are accurate or not. It's just impossible. Again, think about customer trust. Let's say you're a healthcare provider with a lot of PHI and a lot of interaction with your patients or members. You want to make sure that whatever you communicate with them contains their data and no one else's, not because of some training of a model, not copy-and-paste, not hallucination, et cetera. How do you make sure? How do you even express that in a traditional or modern DLP? Saying something like, make sure that if I'm sending information to this patient, it does not contain information about another patient. Who can do that? We can, because of our technology. So the point is that we reduce a lot of false positives, and we reduce false negatives significantly as well, right? The blind spots that every organization has in many of their data flows.
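To make the idea concrete, here's a minimal sketch of the kind of check Gidi describes: grounding detection in a known entity list rather than guessing with generic patterns. This is an illustration only, not Bonfy's actual implementation; the entity names and identifiers are invented.

```python
# Illustrative sketch (not Bonfy's actual implementation): ground detection
# in a learned entity context instead of generic pattern matching alone.

# Hypothetical "business context": entities learned from structured sources
# (e.g. a patient database), mapped to the identifiers that belong to them.
ENTITY_CONTEXT = {
    "patient_a": {"Alice Smith", "MRN-1001"},
    "patient_b": {"Bob Jones", "MRN-2002"},
}

def cross_entity_leakage(message: str, intended_entity: str) -> set:
    """Return identifiers in the message that belong to entities other
    than the intended recipient, i.e. potential cross-patient leakage."""
    leaked = set()
    for entity, identifiers in ENTITY_CONTEXT.items():
        if entity == intended_entity:
            continue  # the recipient's own data is allowed
        for ident in identifiers:
            if ident in message:
                leaked.add(ident)
    return leaked

# An email to patient A that accidentally references patient B's record:
draft = "Hi Alice Smith, your results are ready. (Ref MRN-2002)"
print(cross_entity_leakage(draft, "patient_a"))  # {'MRN-2002'}
```

The point of the sketch is the grounding: because the system knows the actual customer base, "information about another patient" becomes an expressible, checkable policy rather than a vague intent.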
SPEAKER_00:It's interesting what you said. You know, probably 80 to 90 percent of the data issues or the data leaks, companies don't even know they're taking place. Do you think they cannot even detect them? That's my point.
SPEAKER_01:It's not that they aren't trying, or aren't using their capabilities to the fullest extent. It's that with those tools it's impossible to detect them.
SPEAKER_00:Right. Let's assume, right, organizations are now able to detect 100% of it. Do you think that their breach disclosures would increase significantly?
SPEAKER_01:It's a great question. Sure, if they have more breaches, they need to disclose them. But remember that part of what solutions like ours can do is help them avoid a breach, right? Our solutions are not designed for detection only, though detection is a great starting point, because it provides visibility that can result in better controls, better training, and a lot of avoidance of at least the non-malicious acts, at the minimum, right? So that's one. But with a solution like ours, you can actually use it for prevention or remediation. So you can actually avoid breaches altogether. Think again about the example I mentioned earlier, but let's now take a mail example, right? Let's say you use Gemini, ChatGPT, or Microsoft Copilot to help you write an email. Mistakenly or not, it contains sensitive data you should not have sent to whoever the receiver is. We can actually detect it in real time and block it along the way, or work in detection mode as well. So we can support both detection, a detect-and-respond type of apparatus, but also active prevention for serious, let's say high-risk situations. So the point is that risk can be mitigated, not just detected, in multiple ways: both by using the right detection and visibility, which again results in secondary controls, secondary action items like training, tighter access, and other stuff, and by actually being inline, having real-time remediation for actual prevention of risk as well.
SPEAKER_00:Yeah, this is a whole world that I feel like people try to avoid, honestly. Is that something that you see as well?
SPEAKER_01:Yeah, um, less and less. I know I'm saying something which is probably not going to be too politically correct, but people got comfortable with the lack of abilities of the data security tools. Because it is what it is, right? For compliance, we buy a tool, maybe we turn on some basic function. That's what you can do with it, and that's it, right? It's almost like, fine, that's what's available for us. We cannot do much more than that. But I think that's changing. We're seeing a lot of interest in what we do because of AI, and not only because of problems resulting from AI adoption. I think a lot of organizations, or security professionals, feel very uncomfortable in their seats with their lack of visibility, not even talking about controls, over their information flows and information systems. As an industry, right, the industry got used to protecting the plumbing, not what goes through the pipes. And with the adoption of AI, everyone understands what they need to do at the minimum. They understand, visibility-wise, what's happening there, so they can actually have a strategy for how to mitigate the risk. Now, we're seeing some organizations implementing DSPMs, right, as a way to feel better. Let's discover all of our 100 terabytes and classify them. And as I said earlier, that's fine. There's usefulness, right, in getting high-level visibility: now, as a bank, you know you have 100 terabytes of banking information. Good job. But what do you do with that? How do you mitigate the risk for the bank or the healthcare provider or the technology company, or the risk of IP leakage? You can't, right?
So I think the awareness is growing very, very quickly, both for the old, let's call it pre-gen-AI type of use cases, because people understand, yeah, maybe it's about time to do something about it, because solutions like ours, and I'm sure there will be other startups as well, make it possible to solve some of those issues that were never solved properly. But definitely when you're looking at the AI use cases. Regardless of whether it's a sanctioned application, like, let's use Microsoft 365 Copilot, but make sure we have some decent protection upstream and downstream, where today there is none. And when organizations look at Purview, they look at it like you did in your story, and they say, okay, great, maybe someone else can do the job. So that's that. Then there are the unsanctioned applications, right? Basically shadow AI. Someone needs to inspect that as well, right? Those information flows that can potentially lead to unsanctioned use of AI, shadow AI, and also to custom applications. So there are so many moving parts that I think security professionals and organizations all understand they just need to take data security significantly more seriously. And, good news, there are now good solutions that can actually address it with very different concepts. If you try to do the same thing that the industry has done over the last 20 years, it will lead to the same results, even if you're using some smarter LLM along the way to classify. It's just not good enough.
SPEAKER_00:So for your solution, you said that it's learning the context of the business within the industry that they're in. Yeah. How long does that take?
SPEAKER_01:It takes hours to a day, two days, and of course there's continuous learning, et cetera. But you can get great use out of our product very quickly, and it's designed for that. It's not a coincidence, as you can imagine. Being able to bring useful visibility, start tracking information, understand what's there and what kind of risk, and potentially turn on some prevention, can make it valuable for you in a couple of days.
SPEAKER_00:Yeah, it's interesting. And then uh just over time it becomes more and more accurate and more, I guess, in tune with your business, right?
SPEAKER_01:It does, but the accuracy is very high from the first moment you use it, because for grounding our entity awareness, the business context, we're using the customers' own corporate data for their benefit. So we're not guessing. For example, let's say we want to understand if you, as a bank, mix one customer's data with another's. We know the customer base, right? That's how the system works. So we can identify them. It's not like something that in the first week will be 70% correct, and maybe in two months, 85%. As long as we have the data and we digest the contextual data, it can be extremely accurate from almost day one.
SPEAKER_00:So, how do you ensure that you're getting all of the data? Because you also mentioned previously, you know, there's all these different, I just view them as different, I don't know, use cases or people or entities within the environment. You know, someone's using Copilot, someone's using ChatGPT, another person's using SharePoint to pull data from and to, and whatnot. There are so many different data origination points. How do you ensure that you have them all? Like, how does that even work?
SPEAKER_01:No, it's a great question. So of course, we cannot force anyone, right, to make all of the information flows or systems available to Bonfy, but let's discuss our side of that, and then what a practical implementation would look like. The platform, which we call Bonfy ACS, Adaptive Content Security, is a multi-channel-architecture data security solution. It's designed with one core content analysis engine that uses the business context and business logic to analyze and take actions as you define them, regardless of the source of the data. It can be email, it can be file sharing for data at rest, it can be web traffic coming from your browser. It can work on different information flows with the same system, same logic. So you can apply the same customer trust policy to different information channels. You don't need to do it seven different times with different systems, et cetera. Now, when we're onboarding customers, of course, they don't start on day one with everything. Boiling the ocean is typically a great recipe for a failing project, right? So typically what happens is they start with one or two connectors to, let's say, the most important systems. Between us, most of them start with Microsoft 365 or the like, because 80, 90% of enterprises have it, and it has a lot of issues. Not blaming Microsoft, but because it's a core data store of so much information, collaboration tools and emails and all of that stuff, naturally it has a lot of sensitive data, and the controls around it are typically very rudimentary. Even if customers bought Purview, as we said, many of them make very little use of it. If you put Copilot adoption on top of that, it's out of control. And we're seeing a lot of organizations urgently looking for solutions for that. So many organizations start there, but they don't have to.
So: start with, let's say, a connector to Microsoft 365, maybe another connector like Salesforce or whatever system they wish. Start to get visibility, automatically getting entity risk scoring, which is a byproduct of what we do. Because we quantify the risk of every piece of data which is sent or shared, we can actually quantify the risk of the actors as well. So it can support an insider risk management program if they wish. The organization doesn't have to use it, but it comes out of the box as a byproduct of what we do. Then tune policies, right? We have out-of-the-box policies for a lot of different topics like privacy, IP leakage, customer trust, toxicity, and many others. But they can tune them, turn them on and off, add their own policies. So tune them, watch it for a while, days, a week or two, see what's there, and then start to define some automation rules: under what conditions do we want to escalate to a SOC, let's say via SIEM integration, and when do we want to take active enforcement action, because something looks like a mass PHI leakage where we're not going to take the risk, and we just want to block this type of sharing, et cetera. So you can turn it on along the way. And the second dimension is to say, okay, great, now it works great. I trust it, I know how to use it as an organization. Let's connect it to more information flows: maybe web traffic via a browser extension, maybe other SaaS applications like ServiceNow or Jira or Salesforce or the like, where you have a lot of sensitive information with very little visibility as well. So it's typically a gradual implementation once you get it going. Again, part of the benefit of having a multi-channel architecture is that you have one policy plane, one business context, that applies everywhere. So it saves a lot of effort and time for the organization on one end, and second, it provides a lot of visibility into the actual risk.
Think about whenever you have a user, let's say a sales rep who's maliciously going to leave the company, and you see, based on web traffic, some sudden strange usage. You see them maybe sending some information to some unknown Gmail address, and maybe also opening a file share with an external party. Maybe all of it together looks like a pattern I need to notice. So the fact that the solution can tap into multiple channels along the way can provide even better visibility for organizations to address, let's say, higher-level risk.
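The combination described across these answers, per-actor risk scoring plus policy rules that decide between escalating and blocking, could be sketched roughly like this. The policy names, thresholds, and actions below are invented for illustration, not Bonfy's actual product interface:

```python
# Hedged sketch of entity risk scoring with policy-driven actions.
# Policies are checked in priority order; the first match decides the action.
from collections import defaultdict

POLICY = [
    # (rule name, predicate on the event, action to take)
    ("mass_phi", lambda e: e["kind"] == "phi" and e.get("records", 0) >= 100, "block"),
    ("any_phi",  lambda e: e["kind"] == "phi",                                "escalate_siem"),
    ("ip_leak",  lambda e: e["kind"] == "source_code",                        "alert"),
]

# Running risk score per actor, a byproduct of scoring each data flow.
actor_risk = defaultdict(int)

def handle_event(event: dict) -> str:
    """Accumulate the actor's risk score and return the matching action."""
    actor_risk[event["actor"]] += event.get("records", 1)
    for name, predicate, action in POLICY:
        if predicate(event):
            return action
    return "allow"

# A mass PHI leak is blocked outright; a small one is escalated via SIEM.
print(handle_event({"actor": "sales_rep", "kind": "phi", "records": 500}))  # block
print(handle_event({"actor": "sales_rep", "kind": "phi", "records": 2}))    # escalate_siem
```

Because every event feeds the same per-actor score regardless of channel, the cross-channel pattern in the sales-rep example (web traffic plus email plus file sharing) accumulates into one risk picture instead of three disconnected alerts.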
SPEAKER_00:That makes sense. So as an end user, am I onboarding my domains into your platform and then activating all the different integration points that I have?
SPEAKER_01:It's actually pretty simple. It depends, of course, on the information flow. But for SaaS applications like Microsoft 365, Salesforce, Jira, and all of them, we're just using their APIs. So very, very simple. On our end, it takes a couple of minutes to put in, let's say, the service keys and the like, and then it's working. Seriously, as simple as that. We also have an embedded SMTP server. So if people want to send emails via us, to use it as a relay with all of the remediation actions, we can do that as well, as an alternative to the APIs. So we can actually support web traffic, custom applications. We're actually going to introduce an MCP server interface soon. So basically, different interfaces that can use the same engine, regardless of whether the flow stems from a machine or a human interacting with the world, let's say another system or another agent, using workspace solutions like Microsoft 365 or Google Workspace, or SaaS applications. All of them can flow to the same engine for the same type of risk analysis. So, pretty powerful.
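The multi-channel idea here, one analysis engine behind several thin channel adapters, can be illustrated with a minimal sketch. The function names, the blocklist, and the channel shapes are all invented for the example; they are not the actual product interface:

```python
# Hedged sketch: one core engine, multiple channel adapters feeding it.
# In the real product the "engine" is contextual analysis, not a blocklist;
# a simple term check stands in for it here.

BLOCKLIST = {"MRN-2002", "SSN 123-45-6789"}

def analyze(content: str) -> str:
    """Core engine: same logic regardless of which channel supplied content."""
    return "block" if any(term in content for term in BLOCKLIST) else "allow"

def smtp_relay_hook(message_body: str) -> str:
    # Channel adapter 1: email passing through the embedded SMTP relay.
    return analyze(message_body)

def saas_api_hook(payload: dict) -> str:
    # Channel adapter 2: a record change pulled from a SaaS API.
    return analyze(payload.get("text", ""))

print(smtp_relay_hook("Please update MRN-2002"))    # block
print(saas_api_hook({"text": "Quarterly summary"})) # allow
```

The design point is that adding a new channel (browser traffic, an MCP interface, a custom application) means writing only a thin adapter; the policy logic lives in one place and applies everywhere.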
SPEAKER_00:Huh. Wow. I mean, legacy technologies are very rigid in this space. You know, not really working with where the organization is, where they may have their data, you know, 100%, right? And all of the other legacy solutions that I've seen, it's kind of like, yeah, we can identify all of this stuff in SharePoint, but we're not going to be able to see Copilot. We're not going to be able to see ChatGPT, right? All this other stuff. And even the ability, like you mentioned, to potentially build your own web interface for it, you know, for your internal team or whatever it might be. It's totally there, and it's powered by your solution. I mean, that's incredibly powerful, incredibly useful to big and small companies alike, but I feel like bigger companies would be using that feature a whole lot more, because they're more around, let's buy a product and then build it for the way that we use it and need it, right? And so you're turning your product into the engine for someone else's, you know, web interface or whatnot. That's huge.
SPEAKER_01:Yeah. No, we agree. That's exactly why we do that. We believe a vision is great, but it needs to be translated into a real product, a real platform, which is designed to fulfill the vision. I think we're seeing even startups, right, in the DLP or DSPM spaces, doing better with the old concepts, taking the old concepts and implementing them again with, let's say, better technology, better filtering, better architecture. But we believe that won't solve anything. You just need to change the concepts of how data security is done, because it's just too complex for organizations, right? Having the same data access governance in just a nicer interface, when you have millions of files, doesn't change anything, right? It won't solve any problem. So the space has to be reinvented, and reinvented in a way which provides both visibility and control along all of those channels, like the example you just mentioned, custom applications, with the same engine, without the need for the security teams to be experts in everything, which is part of the issue, right? Think about a large enterprise today: how many different information systems or information flows do they have? And each one of them might have its own data loss filtering capability. It's crazy. And then you expect the security team to be expert in all of that stuff, with consistent implementation of the policies they have, and to provide visibility into insider risk or other entity risk in a holistic way. It's just impossible.
SPEAKER_00:Yeah. Yeah, no, it's completely impossible, especially with the shrinking team size in security. You know, organizations aren't spending on cybersecurity like they used to. And even the companies that had giant security teams, that I was either a part of or knew about, those teams are shrinking as well. And they're huge companies; everyone would know their name if I said it, right? It's becoming completely unfeasible, especially with DLP and data protection overall. As soon as I looked at Purview, I mean, I immediately thought, yeah, you're going to spend eight months on this thing minimum, and you're not going to see any value until then. Yeah.
SPEAKER_01:And again, the issue is not the engineering side, right? Whether the developers are good or not, it's not about that. I'm sure they are good and talented, like in a lot of other companies. The issue is that the concepts are just not feasible to implement, between us, even in a small organization, not talking about large organizations, right? And if it's not feasible and you shove so much work onto tight security teams, then it's not going to be used. Or it's going to be used to the minimum that the organization can actually spend time on, which is going to be way less than what's expected, even as a minimal control for any compliance requirement, not talking about AI governance or any common-sense security apparatus.
SPEAKER_00:Yeah. Yeah, that makes a lot of sense. Well, Gidi, you know, we're at the top of our time here, and I really enjoyed our conversation. This was a fantastic conversation, very, very educational, honestly, for myself, because this is an area of security, one of the very few, that I haven't spent that much time with, right? And as soon as I looked at it, I was like, oh my God, this is a giant issue. This is so much bigger than what I had assumed before going into it, right? And so it's really helpful to kind of build up the context the way we did.
SPEAKER_01:Yeah, no, thank you. Thanks for the time. And maybe just to reiterate the last point: all of the cybersecurity industry, right, practitioners, everyone, is so used to focusing on the plumbing, while, between us, almost the only thing they need to protect is actually the information, right? At the end of the day, that's why we exist, right? If there were no information in the systems, you wouldn't need security, you wouldn't need the firewall, right? You just wouldn't need it. And that's true for a lot of other types of solutions. So it's all about the information, and the cybersecurity industry, let's say, just missed the mark, right? The information is not well protected. And everyone is hoping that if the plumbing is right, and you put in the right filters or the right routing or whatever, as the different pipes are connected to each other, maybe it will be good. But it's not.
SPEAKER_00:Yeah, yeah, no, that's a great point. Well, before I let you go, how about you tell my audience where they could find you if they wanted to connect with you and where they could find your company if they want to learn more and figure out what they can do for their own environment.
SPEAKER_01:Yes. So yeah, it's super, super simple. Go to our website, www.bonfy.ai, that's b-o-n-f-y dot a-i, and you can learn a lot about the company. You can raise your hand, fill out the form, and we'll be happy to contact you, to talk to any one of you. If you want to talk with me, put it in the subject; I'd be happy to talk with you personally as well. But I think the website is probably going to be the best place to look. Also look at LinkedIn, where we have the company page, where we provide a lot of useful information. So either way works for us. We'd love to talk with anyone who has an interest in the data security domain, and especially in the context of AI adoption.
SPEAKER_00:Yeah, absolutely. Well, thanks, Gidi. It was a great conversation. I really appreciate you coming on.
SPEAKER_01:Yeah, sure. Joe, thanks a lot for the time and for having me here.
SPEAKER_00:Yeah, absolutely. Well, thanks, everyone. I really hope that you enjoyed this episode. I hope that you learned something about data security. I definitely did. There's a whole ocean of data out there that needs to be secured, and that's a huge issue. So if this is a problem in your environment, which it probably is, make sure that you go check out their website. Feel free to contact either of us. I'll put all of their links in the description of this episode. Thanks, everyone.