Cyber Crime Junkies

Your Boss's Voice is an AI Clone Now | Latest Social Engineering Risks Exposed

Cyber Crime Junkies. Host David Mauro. Season 8 Episode 25



New Episode🔥We sit down with Kyle Ryan, Senior Manager of AI and Engineering at Dune Security, to break down their never-before-published Intel Report. We're talking deepfake voice calls, AI-generated spear phishing at scale, and attacks happening through Slack, Teams, WhatsApp—channels you thought were safe.

This isn't fear mongering. This is real data on how threat actors are weaponizing AI RIGHT NOW to target small business leaders, your team, and every communication platform you use daily.

The Cybercrime Junkies show dives into the world of cybercrime and cybersecurity, offering insights for cybersecurity beginners and seasoned pros alike. Learn about the latest threats, including ransomware and malware, and the minds of the hackers behind them. Stay informed and protect yourself from cyber crime.

CHAPTERS
0:00 - Why Everyone Falls for AI Phishing (Even You)
2:19 - Meet Kyle Ryan: Dune Security's AI Threat Expert
3:58 - From Data Science to Cybersecurity Defense
5:40 - How AI Builds Perfect Phishing Emails in Seconds
7:02 - The LinkedIn Research Attack Vector
8:29 - Inside Dune's AI Spear Phishing Simulation Platform
12:45 - Real Attack Scenarios: Salesforce & Academia Phishing
18:30 - Why Traditional Security Training Fails Against AI
24:15 - Deepfake Video Calls: The New Business Email Compromise
29:40 - 64% of Attacks Happen in "Safe" Apps (Slack, Teams, WhatsApp)
35:20 - How Hackers Infiltrate Your Internal Chat Channels
42:19 - The Slack Connect Attack Strategy Exposed
47:14 - Try It Yourself: Dune's Voice Cloning Playground
48:09 - Final Defense: What Actually Works in 2025
50:27 - Keep Your Guard Up (Even When You're Exhausted)

Questions? Text our Studio direct. We read these, and when helpful, we give a special shout-out to those who contact us.

Growth without Interruption. Get peace of mind. Stay Competitive-Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com  
 

Support the show

🔥New Exclusive Offers for our Listeners! 🔥

Dive Deeper:
🔗 Website: https://cybercrimejunkies.com

📰 Chaos Newsletter: https://open.substack.com/pub/chaosbrief

✅ LinkedIn: https://www.linkedin.com/in/daviddmauro/
📸 Instagram: https://www.instagram.com/cybercrimejunkies/

===========================================================

Think you're too smart for phishing? AI just changed the game completely.


🔥 What You'll Learn:

→ How AI generates perfect phishing emails in minutes using your LinkedIn

→ Why 64% of attacks now happen through "trusted" apps like Slack and Teams  

→ The scary truth about deepfake video calls (it's already happening)

→ Real tactics hackers use to bypass MFA and security training

→ How to actually verify if that Teams message is from your real boss

Kyle walks us through Dune Security's AI spear phishing platform and their vishing playground where you can hear your own voice cloned. Once you see how legitimate these attacks look, you'll never trust a random message the same way again.

🎯 Perfect for: Business leaders, IT managers, security professionals, and anyone who uses email (so... everyone)

---

⚡ CONNECT WITH DAVID:

🔗 LinkedIn: https://www.linkedin.com/in/daviddmauro/ (28K+ followers)

📧 Chaos Brief Newsletter: https://open.substack.com/pub/chaosbrief

🎙️ More Episodes: CyberCrimeJunkies.com

💼 NetGain Technologies: NetGainIT.com

 

🎤 GUEST:

Kyle Ryan | Senior Manager of AI & Engineering at Dune Security

Connect: [Kyle's LinkedIn if available]

Company: https://dunesecurity.com

 

---

 

🔖 RESOURCES MENTIONED:

- Dune Security 2025 Intel Report: [link if public]

- AI Spear Phishing Platform Demo: [link]

- Vishing Playground Tool: [link]

 

---

 

📌 KEY TOPICS:

AI social engineering | Deepfake attacks | Voice cloning cybersecurity | Spear phishing tactics | Slack security threats | Teams phishing | SMB cybersecurity | AI-powered phishing | Business email compromise | MFA bypass techniques | Social engineering 2026 | Cybersecurity awareness training

 

---

 

#Cybersecurity #AIThreats #Phishing #SocialEngineering #DeepFakes #BusinessSecurity #InfoSec #CyberAwareness #AIRisk #DataBreach

 

---

 

🚨 DON'T GET CAUGHT WITH YOUR GUARD DOWN

Subscribe for weekly deep dives into the cyber threats actually targeting your business (not the theoretical BS everyone else talks about).

 

Hit that subscribe button and turn on notifications so you don't miss the next episode where we expose even more tactics hackers are using right now.

 

💬 COMMENT BELOW: Have you received a suspicious message on Slack or Teams lately? What made you second-guess it?

 

---

 

⚠️ DISCLAIMER: This content is for educational and awareness purposes. All demonstrations and discussions are conducted ethically and legally to help organizations improve their security posture.

 

© 2026 Cyber Crime Junkies | NetGain Technologies

 

 



speaker-0 (00:16.654)
You know, everyone thinks they're too smart to fall for a phishing email. But meanwhile, AI just generated a perfect clone of your boss's voice, scraped your LinkedIn in minutes, and is currently drafting a Teams chat that'll have Carolyn wiring money before lunch. Here's the thing nobody's telling you: hackers don't need to break in. Not anymore.

They just need to log in as you. They need you to let them in. And this year, more than ever before, AI is doing all the heavy lifting. We just got our hands on the Dune Security Intel Report and sat down with their senior AI engineer. What's it showing us? It's not theory. It's not FUD (fear, uncertainty, and doubt) or any fearmongering. Never-before-published

data on exactly how AI is being weaponized right now to target small business leaders, your team, your family, your children, and every communication channel you think is safe. We're talking deepfake video calls, AI-generated spear phishing at scale, beautifully built web pages that harvest your credentials like a Midwestern sunset, all built in minutes.

So enjoy these stories all the way through. Because once you see how this actually works and how fast it happens, you'll be shocked at how legitimate it all looks. And it'll have you second guessing every site you've visited in the last couple weeks. And maybe, just maybe, we could actually change how this industry thinks about human risks.

This is Cybercrime Junkies, and now the show.

speaker-0 (02:19.726)
All right. Well, welcome everybody to Cybercrime Junkies. I am your host, David Mauro. And in the studio today, I'm very excited. We've got Kyle Ryan, Senior Manager of AI and Engineering at Dune Security. And they just released the Dune Security Intel Report of 2025, really focusing on AI's application in social engineering, something we talk about regularly on the show. So welcome to the studio, my friend.

Yeah, thanks so much for having me. I'm happy to talk about the many ways attackers are using AI in 2025.

Yeah, it's phenomenal. So a little background on you. Tell us how you broke into cybersecurity. What's your origin story, real briefly? And then give us a high level of Dune Security.

Yeah, totally. So I started my career in data science, and I've held a variety of different roles over the years: regular data scientist, MLOps engineer, data engineer, et cetera. And I didn't really make my way into cybersecurity until recently, when I joined Dune Security, initially as their founding engineer and first employee. And, you know, I've held many titles and done many different roles since then, but...

I always had a passion for the ways AI could be used, both for good and for bad. And then also seeing the dangers those systems can pose is kind of what inspired me to join Dune. Because originally I was thinking about going the PhD route and considering focusing all of my research on the safety and trust of these systems.

speaker-0 (03:58.23)
So when you were, just a personal question, when you were considering doing all this, were your parents like, what the heck is a data scientist? Those degrees didn't exist when we were younger? Or did they see you develop and have a passion for it?

Yeah. My dad was always pretty tech-savvy. He saw the title and, you know, knew that generally it's just kind of another type of computer programmer. But yeah, so kind of a little bit of mystery there. You know, you're just kind of crunching numbers in the background and then building programs that do that at scale. And, you know, you see the data go from that raw, messy input that all the real-world systems generate to...

Yeah, okay, he was.

speaker-1 (04:44.494)
Nice clean little items on a dashboard.

That's cool. Yeah, I can see why it's so fascinating to so many people and why it's in such demand.

Yeah, certainly.

So tell me about Dune Security: great threat intel, leveraging AI to help defend and educate and raise awareness for organizations. You know, walk us through it.

Yeah. So we've seen a huge evolution of phishing over the past couple of decades. You know, back in the early days of the internet, there were the standard scams of, hey, it's your uncle, I need some money, I'll pay you back handsomely. Right? Now what we're seeing is these extremely well-tailored, very targeted spear phishing emails, which maybe even five years ago would be made by a threat actor, you know, personally, with their own hands on keyboard.

speaker-0 (05:40.824)
Based on their human effort for research of the organization, the region, et cetera, now AI can pull all of that quickly.

Yeah, definitely. I mean, with just a few API calls, you can, you know, grab information about anyone with a public profile using Perplexity, get a nice little report on them, feed that in downstream to, you know, ChatGPT or Claude, and generate a perfect phishing email from end to end. And it would look and feel like a normal email.
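The two-call pipeline Kyle describes (a research lookup feeding a drafting model) can be sketched structurally. Everything below is hypothetical: `research` and `draft_email` stand in for real API clients such as Perplexity or Claude, and the stubs exist only so the sketch runs without network access. It's awareness-training tooling in spirit, not Dune's actual code.

```python
from typing import Callable

def spear_phish_simulation(
    target_name: str,
    research: Callable[[str], str],     # stand-in for a Perplexity-style lookup
    draft_email: Callable[[str], str],  # stand-in for a ChatGPT/Claude-style draft
) -> str:
    """Chain step 1 (public-profile research) into step 2 (tailored draft)."""
    dossier = research(target_name)   # gather public info on the target
    return draft_email(dossier)       # feed the dossier downstream to an LLM

# Stubbed usage; no real APIs are called:
fake_research = lambda name: f"{name}: adjunct professor, ties to academia"
fake_draft = lambda dossier: f"Subject: Research collaboration\n(based on: {dossier})"
print(spear_phish_simulation("K. Ryan", fake_research, fake_draft))
```

The point of the shape is that swapping the two callables for real API clients is all it takes, which is why this scales so easily for attackers and defenders alike.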

It uses the syntax and the language, the local regional flavor, right?

Yeah, definitely. And, you know, one example I like to use is, I'm also an adjunct professor at Fordham University. If someone were trying to phish me or find information about me online, it would be pretty easy to do so and, you know, find that I still have some ties with academia. You know, send a phishing email with a zip file that contains the payload. It could be, hey, I've been working on this research, it's related to, it could even be AI spear phishing or deepfakes.

I thought you might find it interesting, and if there's room to collaborate, let's chat. And that could be end to end. And, you know, the guardrails on the big model providers, they're not going to pick that up as a social engineering attempt, because it's going to look and feel exactly like normal business.

speaker-0 (07:02.862)
Right. Yeah, and that's what makes it so effective.

Yeah, definitely.

So how are you guys... and you're an AI agent architect. So what does that mean for people that are still dabbling in the initial generative AI space, and how is that being applied in raising awareness for people?

Yeah. So I think we want to give all of our users of Dune the most authentic threat experience, which is utilizing the strongest models and the best tools to be able to run a simulation against the users and have it be ultra-tailored to them, and then give them that moment where they feel like they need to, you know, wake up to the threats in 2025. What that looks like in actuality is we have a few tools on our platform which will use an LLM,

and we'll have standard prompts. We'll pull in information about the user, you know, using OSINT or things that we'll collect during the onboarding process, to generate a spear phishing email that's targeted to that person and their role. If it's somebody in sales, for example, it could decide to send something related to Salesforce. And now, some of the internal tools that one of the AI engineers on my team, Porva, developed: it could take that Salesforce asset that we generate...

speaker-1 (08:29.134)
You know, this is relevant given all the attacks trying to harvest Salesforce credentials we saw this past summer. Where it takes that asset, we've plugged it into an agent, and essentially we'll have these LLMs that are chained together, using a variety of open source tooling, that will pass input and progressively build one after another. There will be an agent for code generation, code quality, vulnerabilities, et cetera. The same tools that I'd use productively

in my day-to-day workflow as a software engineer. But now we're saying, hey, we have this email, we want to generate a login portal for it. Pass in the prompt, pass in the HTML for the email, and then it'll go through and generate a phishing credential harvesting page based off of that initial asset. Wow.

So it's all done through AI. So yeah, not only the phishing email that is custom-tailored, right, and appears very spear-phishing-like, meaning very specific to that organization, but it can be done at scale, right? And then when they click, they get directed to what they believe would be the trusted vendor's site, or whatever it is, right? Whatever the

government agency, whatever the phishing context that's being discussed in the email, it'll automatically generate that. And then they will go log in with their credentials, and you'll be able to harvest them, which is what an attacker would do. But this is a simulation to show people exactly what is being done. Is that correct?

Yeah, exactly. You're spot on. We were essentially able to automate, from end to end, the generation of the phishing asset and the generation of the credential harvesting portal, and then the last mile is just wiring everything in. We're able to save the vast majority of our engineers' time by using an automated system. And the hackers will do the same thing, where now they can scale up their operations
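Structurally, the chained agents Kyle describes reduce to a left fold: each stage consumes the previous stage's output. Here's a minimal sketch under that assumption; the stage functions are stand-ins for illustration, not real LLM agents or Dune's implementation.

```python
from functools import reduce
from typing import Callable, Sequence

# Each "agent" is just a text-to-text stage (code generation, code quality,
# vulnerability review, etc.). Chaining them is a fold over the stage list.
Stage = Callable[[str], str]

def run_agent_chain(initial_asset: str, stages: Sequence[Stage]) -> str:
    """Pass the output of each stage as input to the next."""
    return reduce(lambda asset, stage: stage(asset), stages, initial_asset)

# Stubbed stages mimicking the email -> login-portal pipeline:
generate_portal = lambda email_html: f"<html><!-- portal derived from: {email_html} --></html>"
review_quality = lambda html: html.replace("<!--", "<!-- reviewed:")
print(run_agent_chain("<p>Salesforce login required</p>", [generate_portal, review_quality]))
```

The appeal for an attacker (or a simulation platform) is that adding a stage is one list entry, which is exactly what makes the operation scale.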

speaker-0 (10:07.672)
Brilliant.

speaker-1 (10:33.77)
10X, 100X. It's really just limited by the hardware that they're able to get access to.

Well, and it's really a perfect storm for cybercrime, right? I mean, cybercrime is the third largest economy, right? There's the US, China, and then cybercrime, in terms of trillions of dollars in revenue. And then you add up all the European countries and everything else, and they don't even add up to cybercrime. It's scary. But with that comes that early adopter mentality, where they are just willing to take massive risks, whereas

businesses trying to defend go slow, and they're cautious, right? Rightly so, but it serves as a disadvantage.

Yeah, and we're always in that game of hackers and defenders playing cat and mouse, trying to get the lead on one another. But the best way to protect your organization is to just start getting your users educated on that initial foothold of social engineering. And even just beyond spear phishing, we've seen a variety of novel attacks start appearing in 2025, like a series of pig butchering scams,

all taking place over encrypted messaging apps like WhatsApp, Signal, Viber.

speaker-0 (11:52.018)
Yeah, I saw that in the Intel threat report. So let's talk about that real quick. So first of all, let's define terms. You mentioned OSINT before. That's open source intelligence. That's like searching available things online. Some of them are paid, some of them are free, but the point is, it's all the available information. It's that intelligence. It's creating a dossier on somebody, right? Or an organization. Is that fair? Does that make sense? Okay. And

Yes, it's fair.

speaker-0 (12:22.54)
What was the other thing we were just talking about? the

I think pig butchering.

Yeah, pig butchering. Sorry, I'm just starting my coffee. So, pig butchering. I'll explain to the listeners what that is. People hear that, or they see it in posts or in reports, and they don't necessarily know. It's not actually hurting an animal. It is really fattening up a victim, right? And then taking their savings or the sensitive information that you're trying to get, right? That's what the phrase means. Is that accurate?

Yeah.

speaker-1 (12:56.802)
Yeah, spot on. Essentially it's just prepping the victim to then have one larger ask down the line. It's, you know, easing your way in, getting them socially comfortable, getting them conversing with you. And then after a certain amount of time, the adversary is going to, you know, make that initial ask. And maybe it's something big, to kind of shock them a little bit, and then they'll follow up with something smaller, which is actually the real target. Or maybe they'll say, hey, let's just do

this one small ask, get this little bit of data out, and we'll make it worth your while, just to, like, build trust and show that we're serious about this. What real security engineer is going to send money my way? But from a business perspective, if your employee is sending payment information just to get that initial money sent over to build trust, in a lot of cases that in and of itself is fireable. So we'll run simulations as well

that are emulating those threats, to see who may be an insider threat at your organization, willing to take a bribe to exchange data or release sensitive information about an upcoming M&A or things of that nature.

And in some industries, too, there's a higher risk of insider information. When you think of a small business that's doing insurance or something, you know, very commoditized, the risk isn't all that big. It happens. But when you think of, you know, the first thing that comes to mind is, like, the AbbVies and Baxters of the world, like pharmaceuticals, where

that intellectual property, those formulas, they're worth billions of dollars. So, you know, being able to bribe an employee to give up information is a very serious threat.

speaker-1 (14:50.37)
Yeah, exactly. And we usually see that our largest customers are organizations in highly regulated industries or, as you were saying, ones with a lot of very sensitive intellectual property. Those are the types of companies that want to simulate these types of threats and see who in their organization is vulnerable.

Yeah, your client list at Dune is phenomenal. You have, like, Hulligan, Warner Brothers, Hugo Boss. I was just looking at it, and I'm like, holy crap, you guys are really, really hitting it hard. That's great.

Yeah, it's been a long journey. Thank you. It's been great seeing the growth as the founding engineer, seeing that, you know, initial zero customers, no product, to now, two and a half, almost three years later, servicing them. That's fast.

Congratulations.

speaker-0 (15:43.886)
That's fast growth. That's phenomenal. That's outstanding. So let's talk about, you know, security awareness training over the years has really evolved. You know, traditional boring PowerPoint-to-death, fear, uncertainty, and doubt, like, not very effective. And then, you know, there's the KnowBe4s of the world, which have their place. Like, they've got great resources, good, you know, good talent that works there, but there's limitations, right? They're evolving too.

But what makes Dune so interesting is your multi-channel approach, because that's really what cyber criminal gangs, and I call them gangs, it's not like they wear matching jackets and stuff, but, like, the Scattered Spiders of the world. They are doing massive amounts of intel up front. They're using people that are very familiar with Western culture.

They know what to say to get things done, and they use all the different channels, right? They'll use phishing, smishing, voice solicitation, AI deepfakes. They'll use all of it, right? To get what they're going for. And you guys kind of address all of that in your platform.

Yeah, and if there is a Scattered Spider merch order, maybe they do get their matching jackets.

We can all envision what it would look like, too. And they're younger kids. The couple of them that have been busted, you're like, yeah, these guys were working on logos and stuff, you know.

speaker-1 (17:18.766)
Definitely.

You absolutely know it.

Yeah, but, you know, what we've seen is that if you want to go after the crown jewels of a company and get the, you know, the principal cloud engineer, the VP of revenue, if you want to get someone of that archetype, you know, just sending a regular phishing email, or even just the AI phishing emails, is likely not enough. You know, they're well...

Pick up the phone. You've got to pick up the phone. You've got to talk to them in the channels. First of all, you have to know what channels they're using. Are they using Slack? Are they using Teams? Are they, you know what I mean, are they using Salesforce? They can find that out easily. And then from there, how do they get to them? I mean, you can get all of that stuff. You can get somebody's personal cell number through legitimate channels, you know what I mean? Through ZoomInfo and Cognism and all these legitimate subscriptions that will give you people's

cell numbers and now you can attack them that way.

speaker-1 (18:18.446)
Yeah. And I even downloaded a few tools for sales, like web extensions on Chrome, and I can go to someone's LinkedIn profile and get their personal email, phone number. It's a free Chrome extension. Yeah, I know. But you get all that.

Really? People don't know. It's really shocking how much data is out there.

Yeah, there's so much. And even if you fired something off that's relatively generalized to someone, you know, if it's a well-timed "here's a prompt to reset your password" email. If I deliver that to someone's inbox and simultaneously also call them, whether they pick up or not, leave a voicemail. You can use a conversational agent to actually chat with them and say, hey, I shot this email over to your inbox. I just need you to reset your password. You know,

my boss is all over me about this, can you just try to get it done by the end of the day?

And you can spoof the number so it's calling from the IT vendor, or from Okta, or from whoever you're trying to be. It's very believable.

speaker-1 (19:19.714)
Yeah.

Yeah, it sounds like a real person, entirely. It's indistinguishable. Like, I think there's a company out there that does text-to-speech and speech-to-text called Sesame AI. Yeah. You pull up that website and talk to it, and it sounds like a person. It'll say "um" and "like" and pause and think and, you know, have all those strange little intonations. Yeah.

And ElevenLabs for voice is outstanding. Like, ElevenLabs can just... oh, it's incredible. You can do all of the little idiosyncrasies and inflections in the voice. You can modulate it. It's really quite something.

Yeah. And you can wire it directly into Twilio too. So you could set it up to call people from numbers you have reserved over there.
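For context on what "wire it into Twilio" typically involves: when an outbound call connects, Twilio fetches a small TwiML document telling it what to do, and a `<Play>` verb streams an audio file (for example, synthesized speech) to the callee. Here's a minimal sketch using only the standard library; the URL is a placeholder, and a real deployment would serve this from a webhook and use Twilio's SDK with a reserved number.

```python
import xml.etree.ElementTree as ET

def build_play_twiml(audio_url: str) -> str:
    """Build a minimal TwiML <Response><Play>...</Play></Response> document.

    TwiML is the XML Twilio executes when a call connects; <Play> streams
    an audio file to the callee. The URL here is purely illustrative.
    """
    response = ET.Element("Response")
    play = ET.SubElement(response, "Play")
    play.text = audio_url
    return ET.tostring(response, encoding="unicode")

print(build_play_twiml("https://example.com/cloned-voice.mp3"))
```

Knowing how little plumbing this takes is exactly why vishing simulations (and, unfortunately, real vishing campaigns) are so easy to stand up.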

Unbelievable. So what's happening in the real world now is you're not just getting that phishing email. You might also get a call, a text, other things. But then you could also get, correct me if I'm wrong, a calendar invite to jump on a Teams video call or a Zoom video call and then be AI-deepfaked, right? And so that's very surprising for a lot of people,

speaker-0 (20:37.71)
because it's a trusted vendor. They've met them before. Maybe they haven't seen them in the last week or so, but it looks exactly like them. The deepfakes now are remarkable. They're virtually undetectable by the human eye if they're using a deepfake generator. And the deepfake detection platforms, in your experience, are they effective? I know that Perry Carpenter over at KnowBe4 wrote the book FAIK and

researches this. He's been on the news kind of talking about some of the deepfake detectors, showing that they're just not there yet. They're not perfect. Like, you know, because they'll generate some that they know are deepfakes, they'll put it up there, and the deepfake detector will say that it's real, and it's like, no, it's not, I just made it. So what do you guys... how are you guys educating? You guys are raising awareness about deepfakes, right? Showing them

Yeah.

speaker-0 (21:33.314)
what they are, how they're being leveraged. And then what's the Dune Security approach to this?

Yeah. So I think a company doing great work in this space right now is Reality Defender. They've shown that they're able to detect the vast majority of deepfakes. It's always tough because, you know, defenders have to win every single time and the attackers only have to win once. So, you know, we're constantly fighting an uphill battle. Models are being retrained to always, you know, try to get even the most sophisticated models covered and, you know, detected.

At Dune, we just focus primarily on education. If you talk with someone long enough, the deepfake likely will bleed through. There are some that are ultra high quality, where the threat actor has access to a lot of compute. Maybe they have a gaming computer with a really strong Nvidia graphics card that they're using to run the deepfake.

But a lot of the time, you know, they're not running top-of-the-line hardware. It'll be, you know, only an okay GPU. It'll start to glitch out a bit. One thing that we recommend in our training videos is, if you just hold your hand in front of your face and move it a bit, you know, it'll ruin the mesh that's getting applied to your face to put on, you know, somebody else's face, and it'll start to glitch a bit. Or maybe if I put it up here, it'll start to follow my hand up,

because it just anchors onto some new points. You know, little things like that, or asking them to hold up a certain number of fingers, or, you know, right...

speaker-0 (23:12.92)
otherwise show that they're human, right? And plus, at the end of the day, before doing anything that would be vulnerable, verify through a legitimate channel, right? Like, at the end of the day, doesn't it still come down to verification? Like, human verification through a reliable channel. Meaning, if it's your boss on video telling you to wire transfer money, even if they get on a Teams call, walk down the hall and just check,

and just say, before I do this... because I could get fired if I actually do this and it's wrong, or we could get sued, or I just don't feel comfortable, right? Like, people have to trust their gut. Like, do that. Or if you're working remote, text them at the number that you know is theirs, right? Like, actually pick up the phone and speak to them live at a number that you know they're going to be at. Does that make sense still?

I ask these obvious things. It's obvious to you, but I'm just trying to keep up so I can educate the audience. And I'm part of the FBI's InfraGard, so we do a lot of live security awareness trainings and demos. So I just want to make sure I'm at least saying it.

Yeah, yeah, no, everything you're saying, those are all the correct things to do. Always verify, at least today.

For today. Like, six months from now, I might have to go back and be like, yeah, that video's out of date. Like, don't do that.

speaker-1 (24:40.3)
If you can physically access them in person, always verify, because until there's cloning technology, hey, that'll be the best thing. But you never know if someone's email or their Teams account or Slack account could be compromised. And then the threat actor is just like, hey, yeah, it's me, on one of the other channels. People share passwords. Maybe they did a SIM swap and reset their MFA code to a new device, and then we're able to break in,

or something along those lines. I remember a couple of months ago, a friend of mine, his Instagram account got compromised. The hacker was messaging his friends, trying to attack or, like, get access to more Instagram accounts for whatever reason. So, you know, I had a suspicion, just because my buddy will usually message me on WhatsApp or Signal, and I saw this request come in, and I thought it was kind of weird. So I was like, hey,

Right.

speaker-1 (25:37.71)
our friend that we just went rock climbing with the other weekend, he shaved his beard a new way. What did it look like? And then he didn't answer. And it was sort of a trick question, where I was like, oh, I guess the joke here is that my friend that we were with is always completely clean-shaven. So when the guy's like, oh, he shaved it into a goatee, I was like, okay, well, I know you weren't with me, because you would remember that. And, you know, our friend isn't really that active on social media either.

So just asking to verify even some, like, personal things that just the two of you would know.

That's exactly right. And what's so interesting is, when you look back at history, theft and organized crime and scams long preceded the technology that we're using today, right? And the way to verify is still the same way today, right? You still have to ask something that only that person would know.

And what you just demonstrated was, you asked them something that, if it really was your buddy, they're going to know the answer to. They're going to be like, dude, what are you talking about? He's always clean-shaven. That's your buddy, right? But this person was like, yeah, yeah, he shaved it into a goatee. Just taking a guess.

Yeah.

speaker-1 (26:55.256)
Yeah. And at that point they're just caught red-handed, because you throw that trick question out there and they're not able to answer it. And then, you know, every company has their own acronyms. I worked at a few places earlier in my career where, you know, we had 10 different departments that each had their own acronym. And, you know, even in my first couple of months at that company, I would still have to go back and check a reference guide on what each one stood for.

A hacker isn't going to put in that same effort to learn, oh, this is how they refer to customer success at this company.

Right, so that's a really good point. Yeah, that's a big red flag. When somebody goes, like... at my company, we call them client success managers, CSMs. So if somebody goes, "this is Bruce, I'm the account manager," right? Like, we don't use that phrase. That should stand out, right? Like, that there is a red flag. You need to verify through a legitimate channel.
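The wrong-acronym tell can even be roughed out in code: compare the role terms in an inbound message against the vocabulary the company actually uses. This is a toy heuristic for illustration only, not a feature of Dune's platform or anyone else's:

```python
def unfamiliar_terms(message: str, company_glossary: set) -> set:
    """Return capitalized or all-caps tokens not found in the company glossary.

    A crude heuristic: flags title-like words the company doesn't use.
    Real detection would need far more context than token matching.
    """
    tokens = {word.strip(".,!?\"'") for word in message.split()}
    candidates = {t for t in tokens if t and (t.isupper() or t.istitle())}
    return candidates - company_glossary

glossary = {"CSM", "Client", "Success", "Manager"}
msg = "Hi, this is Bruce, your new Account Manager AM for the renewal."
print(unfamiliar_terms(msg, glossary))
```

Note the noise: names and greetings get flagged too, which is exactly why this kind of check supports human judgment rather than replacing it.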

You just mentioned SIM swapping. Let's talk about that real quick. Explain to the audience what it is and how it gets done. Because, I mean, obviously in the crypto space there have been massive thefts where attackers got access to people's private keys, which is really what controls all the money at the end of the day on crypto investments. And through their phone, groups will go and

you know, take over their phone and then get those private keys. And there have been a lot of very high profile people who have had hundreds of millions of dollars stolen that way. Walk us through how that happens and how you educate people about it.

speaker-1 (28:47.308)
Yeah. So there are a lot of cases now where someone's MFA code, multi-factor authentication, is linked to their phone and their texts. So it'll be, you signed into Amazon, and since it has your phone number, it'll send you a six-digit code over text. It comes in as an SMS, you plug that in, and then you're signed into the account. And, you know, there are data leaks pretty much every week nowadays, if not daily.

So the odds are that there's a compromised password out there, and most people reuse passwords as well. So the thought process is that hackers likely already have these and don't even have to phish you for them. But if you're protected by MFA, then even if they try to sign into your account, they get blocked because they need that code. Where SIM swapping comes into play is that the SIM card in your phone is how your physical device gets connected to the phone number with your cell carrier.
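The weakness Kyle is describing can be reduced to a toy model: SMS codes are routed to a phone number, and the carrier decides which physical SIM that number currently points at. This is a hypothetical sketch, with all names and numbers invented, not any carrier's actual system:

```python
import secrets

# Toy model: an SMS one-time code is delivered to whoever
# currently "owns" the phone number in the carrier's registry.
carrier_registry = {"+15550100": "victim-handset"}

def send_sms_code(number):
    """Generate a 6-digit code and deliver it to the current SIM holder."""
    code = f"{secrets.randbelow(10**6):06d}"
    return carrier_registry[number], code

# Before the swap, the victim's handset receives the code.
device, code = send_sms_code("+15550100")
assert device == "victim-handset"

# A SIM swap just re-points the number at the attacker's device;
# the "second factor" now goes straight to the attacker.
carrier_registry["+15550100"] = "attacker-handset"
device, code = send_sms_code("+15550100")
assert device == "attacker-handset"
```

The point of the sketch: SMS MFA authenticates the number, not the person, which is why app-based or hardware-key MFA holds up better against this attack.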

So using OSINT, if you know someone's phone number, which we just discussed is easy to find, you can also find the carrier. So if we find out that our target has T-Mobile, what these hackers will do, and what's a pretty common, least-resistance way to do it, is they'll actually break in if they don't have any other type of

access point into the telecom provider. They'll actually just break into the store, and most of the attendants will have that iPad where, if you go and buy a new phone and want to transfer your phone number to it, they can initiate that on the iPad or tablet that's in the store. So when these hackers go to SIM swap, they're actually taking crime into the real world, breaking into these stores.

They'll physically break in like burglars into the stores. I didn't even know they were doing that. That's news to me. That's interesting. I know there have been cases where they've bribed store employees for access to that iPad, because for an employee making $15 an hour, if you offer them 30 grand or 50 grand, they're going to do it, right? And that's nothing compared to what you're going to be able to get as a criminal.

speaker-0 (31:09.502)
And then there's also calling up and social engineering some of the T-Mobile employees, right? Like, I got a new phone, can you help me set it up? Or, my phone died, I need to set up my new SIM.

Yeah, and it'll kind of depend on the persona of the group. There are these small little hacking groups that, you brought up crypto, for example, just want to get access to someone's wallet and grab some money. And maybe it isn't even someone who has a large amount of crypto in their account. But if you look at sophisticated threat actors, on the other hand, they're aligned with the techniques you were just mentioning, where they'll call and try to get someone to, you know,

social engineer them into swapping it to a new device. And that's what we saw with the MGM hacks in 2023: they called the help desk of their telecom provider and got them to issue that SIM swap just by social engineering them. But it'll vary in scale. There's the lowest hanging fruit, maybe the more novice, less technical threat actors, that'll do these smash-and-grabs on stores. And then there's the sophisticated

threat actor using social engineering. Potentially they could even be using a voice clone, if there's voice verification as a fallback, which they're able to trick and bypass now.

Yep, absolutely. And you're seeing them contact the outsourced IT departments, right? The MSPs out there and the large VARs. They're contacting them, calling the help desk, asking them to help fix their MFA. And with that, they're able to get into these systems. And I mean, the help desk people, there are a lot of organizations that don't really...

speaker-0 (33:03.15)
train their help desk in that aspect, right? Because they're not used to being targets, and that's really important.

Yeah. Yeah. And I would say that's new this year at Dune as well, where we're now educating those frontline IT help desk workers, even if they're

Help is in the name. That's who they are, right? They want to help. When you hear some of the breach recordings, I think it was the Salesforce Zendesk one, or the MGM one, where you could actually hear the help desk call: it was one of a thousand tickets they were working on, and they just wanted to help the person.

Yeah, and it's part of their performance indicators too. I mean, you know, they want to keep their jobs.

Five stars, give me a green smiley face back, will you please? That's what their goal is.

speaker-1 (33:54.402)
Yeah. Yeah, exactly. And at that point you've got someone who's trained to help people, like you said, to make their life easier and help them out of a tricky situation. A lot of times, what makes it easy to social engineer people in this context is that it's hard to distinguish somebody who legitimately lost their password, is locked out, needs access to the system, and is having a hard time

on a rough day, versus someone who's trying to social engineer them, because they'll pull on all the heartstrings as well. They'll play the sound of a baby crying on their phone and go, hey, this is just adding more things onto my already tough day.

Yeah, it's one of my favorite videos. I've always shown the video from DEF CON of the girl calling, I think it was Verizon or T-Mobile or whatever, with the crying baby in the background. Like, I can't do that. Within 30 seconds she was in, changed the password, locked the guy out of the account. It's like, wow. So convincing.

Yeah. And then what about insider threat? I see on Dune you guys talk about really training people for insider threat. How do you approach that? Because that's a different element, right? I guess it's what we talked about earlier: the corporate espionage, the bribing, things like that. How do you raise awareness in an organization about that?

Yeah. And since it's a new threat, the entry point is just going to be direct experience. So what that looks like today at Dune, technology-wise: you may be familiar with the big bust on the SIM farm in New York City that happened a couple of weeks ago. There were a bunch of these phones all on server racks with SIM cards plugged in, and, you know, the big bust found all these

speaker-1 (35:53.664)
devices that were set up, ready to send out tens of thousands of text messages. Right. In the Dune office we have something similar, but it's purely for white hat testing and simulating. Yeah. So we'll have a variety of accounts on WhatsApp, Signal, Telegram, Viber, and we'll even send out regular SMS. And what we have wired into those is a conversational agent where

Right.

speaker-1 (36:23.756)
The way we've constructed the whole architecture is that each conversation with a new user has memory and knowledge of who it is we're testing: what's their role, what organization are they part of, what's the crown jewel we want to go after. And then we have our own jailbroken models running on Dune Security GPUs, so we have that fully jailbroken capability, whereas if I go to chatgpt.com...

Brilliant. That's brilliant. Good job. That's really brilliant. So you're going on encrypted messaging channels, which everybody assumes are safe, you're leveraging AI to extrapolate the data, and then you have the jailbroken version to go after the flag that you're after.
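The setup described above, per-conversation memory, a persona, and a "crown jewel" goal with pivoting tactics, can be caricatured in a few lines. This is a hypothetical sketch of the general pattern, not Dune's actual architecture, and every name in it is invented:

```python
from dataclasses import dataclass, field

@dataclass
class SimulatedTarget:
    """State an agent keeps per conversation, per the pattern described."""
    name: str
    role: str
    org: str
    crown_jewel: str                           # what the simulated attacker is after
    memory: list = field(default_factory=list) # per-conversation history

def next_tactic(target: SimulatedTarget, resisted: bool) -> str:
    """Pivot the approach when the target pushes back, like a human social engineer."""
    target.memory.append("resisted" if resisted else "engaged")
    if resisted:
        # Escalate to an incentive: offer payment for a small "good faith" leak.
        return f"offer payment for a small piece of {target.crown_jewel}"
    return f"ask directly for {target.crown_jewel}"

t = SimulatedTarget("Alex", "support engineer", "ExampleCo", "customer records")
assert next_tactic(t, resisted=False) == "ask directly for customer records"
assert next_tactic(t, resisted=True).startswith("offer payment")
```

In a real deployment the tactic strings would be prompts fed to a language model, and the memory would be the full chat history; the sketch only shows the state-plus-pivot shape of the design.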

Yeah. And at each step it'll assess how willing this person is, and just like a normal social engineer or hacker, it'll pivot its approach. If the person's not convinced, it'll say, hey, how about I just send over a payment, whether that's to your PayPal or a crypto wallet address, just a small bit of payment. Give me a little nugget of information.

And if there's alignment there, then, hey, let's move forward. So we'll have a variety of agents that all have their own conversational style. They have knowledge of exactly what they want from the user in terms of the crown jewel, what the user likely knows, what function they're in. And then the AI fully steers the conversation across these tens of thousands of users that we're able to reach, to see if any of them are willing to accept a bribe or exfiltrate data as

that initial show of good faith, with a promise to send payment later. And in some of our clients it's, unban this account, or, do you have the capability to do that? Are you willing? You know, I'll pay you for it. And then we can see who in the organization is willing to accept bribes in exchange for exfiltrating data, unbanning accounts, or whitelisting an IP address inbound into a system.

speaker-1 (38:41.068)
It could be a variety of different attacks, and we'll work with each of our clients to determine what the end goal is. Usually we just base it off the threat intelligence we receive from their security team, and then we'll work with them to develop the persona.

That's phenomenal. And one thing that's cool about what Dune is doing: I was reading your threat intel stats here, and you guys did a massive survey of chief information security officers, CISOs, and like 0% of them are simulating attacks on these encrypted messaging channels. And you guys provide that, which is brilliant. And then the stats are just crazy, right? So

A major concern for a lot of people in charge of security at organizations is leadership impersonation, right? And you guys address that in here. You show them all the different ways a supervisor can be impersonated. Now, would that be considered an insider threat, or is it really external? The deepfake aspect is really an external threat, isn't it? Yeah. Insider threat is more

espionage or just bribery, criminality.

Yeah. Yeah. And then one thing that can fall into maybe an adjacent bucket is business email compromise, or a simulation of it. We don't actually take the account over, but we are impersonating someone. And if it's just over messaging, it'll be like, hey, I'm a colleague, or I'm a former colleague. And we want to see if they're willing to bite on that, because attackers are just trying to find a foothold to build trust with the individual,

speaker-1 (40:25.806)
as we've seen in the threat intelligence that we analyze from CISOs. We're just kind of building back up from what they're seeing. And who knows, even with the messages in the threat intelligence, whether it's actually someone who worked there previously or not. It could just be a hacker saying that to build trust. Maybe it was a disgruntled former employee who wants access back in, or to extort the company.

Unbelievable. And yes, some of the findings from your report... we'll link the report, and obviously Dune Security, in the show notes. I encourage everybody to go check it out. It's really phenomenal; your approach is excellent, so I definitely want to promote it. 91% of enterprises do not simulate attacks in collaboration platforms like Slack, Teams, Zoom chat.

And so it's happening all the time, and yet the traditional security awareness and education for employees isn't even addressing it. They're still kind of focused on email.

Yeah, definitely. And it just boils down to the fact that the industry needs to evolve. We've seen breach after breach as a result of new uses of AI, of hackers going places they haven't before to reach out to employees, and employees think they're safe. Someone messages me on Slack? Yeah, I might not even think twice. And it's easy to invite someone and get them into

a Slack Connect channel or a Teams shared channel, where it's you and an outside organization and you're able to exchange messages. They're pretending they're a legitimate vendor, just making a new channel because they're onboarding a few new employees, or something along those lines.

speaker-0 (42:19.71)
Something very socially acceptable, something that is valid, that doesn't raise a flag in and of itself.

Yeah, exactly. It just doesn't even raise an alarm bell, because you're not used to it. And that's why at Dune we always want to be developing red teaming and adversary emulation techniques that live as close as possible to what hackers are actually doing in the wild.

Let me ask you this, just for my own edification. How would an external attacker gain access to an organization's internal Teams chat or Slack chat? How do they do that? Is it through a phishing email, maybe, where somebody clicks on something, they get access, and then they build it out? Is that typically how it's done?

Yeah, it can go two different ways. One could be, I have a target in mind, I want to compromise their Slack or their Teams account, so they'll try to phish them for their login. But one thing we've seen more recently is that the first initial outreach could be an invite to connect your Slack channel with an external organization's. The hacker will set up a sock puppet org.

Maybe it looks like a vendor, or someone who could be a client if they're meeting with someone who handles customer data. And that first initial email invite to connect your Slack groups seems legitimate; you know, it could be a real client, and it is going to you. It's going to take you to a Connect channel between your organization and theirs. At that point, you see what

speaker-1 (44:06.284)
you think may be a phishing email, but you realize it's actually from Slack, and it creates this external connection group, and your guard immediately goes down. And now that they have you in this channel, you've built a little bit of trust. You already clicked on the first email and it was safe. Now you can chat with this hacker, or maybe it's even an LLM they have automated now, you know.

We've seen that that's entirely possible. And then they can socially engineer you and be like, hey, can you just send a little bit of this data? You know, my boss is all over me right now.
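One defensive pattern implied by this attack, though not spelled out on the show, is treating external Slack Connect invites like inbound email from a stranger: hold anything from an unvetted organization for security review. A hypothetical sketch, assuming the security team maintains an allowlist of approved external domains (all domain names here are invented):

```python
# Hypothetical allowlist of external orgs already vetted by the security team.
APPROVED_EXTERNAL_ORGS = {"trusted-vendor.example"}

def review_connect_invite(inviter_domain: str,
                          approved: set = APPROVED_EXTERNAL_ORGS) -> str:
    """Allow shared-channel invites only from pre-vetted orgs; hold the rest."""
    return "allow" if inviter_domain in approved else "hold-for-review"

assert review_connect_invite("trusted-vendor.example") == "allow"
# A lookalike domain ("rn" masquerading as "m") fails the exact-match check.
assert review_connect_invite("trusted-vendor.exarnple") == "hold-for-review"
```

Slack's admin settings can enforce approval workflows for Connect invites natively; the sketch just makes the decision rule explicit, including why exact-match vetting catches lookalike domains that fool a human skimming an invite.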

And there's that feeling that they're inside your organization, not just connected to it.

Is that legitimate, coming from Slack? Would that be a legitimate connection, and they're just in the background? Or is it just phishing, where it looks legitimate coming from Slack but it's really not?

Yeah, it could be coming legitimately from Slack, inviting them to this shared channel. I mean, we saw earlier in the year where hackers were able to use Google and Gmail to send real emails where everything looks and feels like it comes from Google, but in reality it redirects the victim.

speaker-0 (45:21.454)
Oh yeah, I remember that. Oh, that's a good point. Wow. Yeah. Your report says 64% of respondents confirmed social engineering or solicitation attacks against their users via encrypted or informal apps like WhatsApp, Signal, Telegram, Slack, Teams, or SMS. Wow. That's a large percentage. Holy cow.

Yeah. Yeah. And just think about, on a week-to-week basis, how many spam texts, calls, Telegram messages, WhatsApp messages you get on any of these channels you're actively residing in, and maybe have been for some time. You'll get these pings, messages, links, attempts to socially engineer you several times a week. And it's only scaled up. It's good that, I think in the latest iOS update for iPhone,

They're blocking more calls and texts coming inbound. But for a long time, the doors were just wide open to any of these types of attacks.

Unbelievable. Unbelievable. Well, hey, I want to thank you for your time. What's on the horizon for you and for Dune? What's coming up in the next year? You've got public speaking events coming up, you guys have new offerings. What's on the horizon?

Yeah, I would say some of our most recent technology is our AI spearphishing platform. And we also have a vishing playground that one of our engineers created, where end users can create a voice clone of themselves, listen to it, play it back, and understand what their own voice sounds like. You can hear what it's like when a hacker mimics it, and you can have that experience fully on platform.

speaker-1 (47:14.838)
So I'd say from Dune, just expect more things along those lines. For all those novel threats, we'll have these fun educational experiences where you can play with the technology the hackers are using and really understand it. And if you're a security leader at your company, you'll be able to issue any of the attacks we talked about today yourself, on the platform.

That's phenomenal. That's a great educational tool. Well, I wish you guys all the best, and we will definitely stay in touch. We'll have you on again. I'd love to have you come on and demonstrate this, walk us through it on a live stream, certain simulations. That would be cool. All right, man. Thank you so much for your time. I really appreciate it.

I'd love to.

speaker-0 (48:09.492)
Any parting thoughts? I always like to get them, based on everything you're seeing. What should employees be doing? Is it really still the fundamentals? Because I've been doing this for decades, and it still boils down to the same things we've been telling people for a long time that nobody does. So is it still just doing the fundamentals, and actually applying them, actually doing them?

Yeah. I mean, even if you're tired, you have a long day at work or even a long week, or you're an on-call engineer up late into the night, whatever it may be: if you get that strange request, phone call, or Teams call with a deepfake, always have your guard up if you're not expecting it. And maybe even if you are expecting it, just always keep in mind that this could be a threat actor.

Do your normal verification and assume the account this user is reaching out from could be compromised. How would you actually identify that the person on the other end of the line is the person you intend to talk to? Because that line between reality and deception is getting blurred more and more as we keep seeing these innovations in AI. So keep your guard up,

even when you're tired from work and it's been a long week. Always be ready, because that phishing email, that vishing call, that deepfake: you'll get battered with these week over week, and one of them is eventually going to catch you when your guard's down and you're feeling tired.

That's great advice. Yeah, that's great advice. I mean, at the end of the day, in light of AI, they don't need to hack in with advanced technical skills. They just need to log in as you, right? They just need us to let them in. And they're looking for the individual people, for employees who are distracted, busy, tired, right? It's a great point. Brilliant.

speaker-0 (50:27.36)
So keep up the great work, man. Like I can't say enough about Dune. Great, great company, great organization. Encourage everybody to check you guys out. So we will talk again, my friend.

Yeah, thanks so much for having me. It's been a pleasure.

Thanks buddy.

