Cables2Clouds

An Honest Conversation About AI Security

Cables2Clouds Episode 72

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 52:18

Send a text

Ready for a reality check on AI security? We invited Cisco cybersecurity expert Katherine McNamara to dig into where large language models actually break: from prompt injection and over-permissioned plugins to reckless “vibe-coded” apps that leak IDs, photos, and entire backends. The stories are real, the stakes are high, and the fixes are concrete. We trace how AI sprawl mirrors the worst of early IoT—weak defaults, poor isolation, and a stampede to integrate models into billing, HR, and support without guardrails—only this time the blast radius includes your customer data and your legal exposure.

We talk through the human factor first. Written policies won’t stop someone from pasting a pen test report into a public chatbot. DLP helps, but hybrid work and BYOD stretch defenses thin. Then we move to the core threat model: public and private models are targets; datasets can be poisoned; plugins often ship with admin-level scopes; and a clever prompt can trick an LLM into disclosing chat histories, creating new accounts, or modifying orders. Courts have already treated chatbots as company representatives, binding businesses to their outputs—another reason to treat every integration like an untrusted user with strict least privilege.

It’s not all doom. Used well, AI gives security operations superpowers: correlating signals across dozens of tools, reducing alert fatigue, and surfacing lateral movement. The path forward is discipline, not denial. Fence models on the network. Prefer read-only to write. Gate plugins behind narrowly scoped APIs. Vet datasets for backdoors. Red-team prompts as seriously as you pen test code. And educate stakeholders with live demos so they see why these controls matter. We also unpack the shaky economics—GPU costs, rising consumer fatigue, hype-fueled projects with little ROI—and why that pressure can erode privacy if teams aren’t vigilant.

If you’re building with LLMs or trying to rein them in, this conversation gives you a practical map: what to allow, what to block, and how to make AI useful without turning your stack into an attack surface. Subscribe, share with a teammate who ships integrations, and drop a review with the one guardrail you’ll implement this quarter.


Connect with our Guest:
https://x.com/kmcnam1
https://www.linkedin.com/in/katherinermcnamara/

Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

Check out the Monthly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Meet Catherine And The Premise

Tim

Hey, and welcome back to another episode of the Cables to Clouds podcast. I am Tim, your host this week, and with me, as always, is my co-host, Chris Miles, at BGB Main at Twitter, x Groc, at Grok. What do you think, Grok?

Chris

At Grok, yeah. That's where you can find me.

Tim

And um, we actually have a new guest to the podcast, but not at all uh someone who is new to us. So a good friend of ours, uh Chris and mine from well, we go way back, don't we? Is uh Catherine McNamara. Uh go ahead and introduce yourself, Kat, and say hi.

Katherine

Hey, I'm Catherine McNamara. Um you can find me uh KMC N A M1 at uh on Twitter. And uh I've known Tim and BGB Main here for uh 10 plus years. We used to work together and then we didn't work together, and now we work together again. Um I've got a couple CCIEs. I work in uh cybersecurity at Cisco and just jumping in here to uh chime in on my thoughts on AI. Now I will preface this. Uh if you're looking for a podcast uh episode that's going to be nothing but praising AI and saying it's perfect, this episode's probably not going to be that one. So if you want to tune out on that one. But if you want to, you know, have more of a realistic, realist uh, you know, uh thoughts on AI, this is probably the episode for you.

Real-World AI Security Risks

Tim

Yeah, yeah. And specifically, because Catherine is in uh cybersecurity, we're actually this is something we've talked about lots on the podcast, but never really been able to bring in the expertise or haven't been able to up until this point really uh bring in the expertise to talk about this. Just about the kind of like the insecurity of AI or or the the push and pull of you know the the agility of developing all these AI, you know, agents and MCPs and stuff, and then like basically where where does security fit into that or or has security fit into that? So um, you know, Kat's got some really good opinions about that. And so maybe just like let's just go from there. Like what what do you think about security and AI?

Katherine

Well, uh to kind of steal page from your book, uh, the S in AI means uh security or stands for security. So, you know. Um, uh there's a lot of angles of security when it comes to AI models. Like we've got people who uh, you know, who you know upload things to public AI models, think you know, not understanding that those models are gonna be trained on that data. So they'll, you know, workers people in you know companies are uploading their proprietary data, their HR information, social security number, credit cards, without even really thinking about that that this is not a private session. It's not gonna necessarily keep all of your data private. Um, we also have people vibe coding coding apps that don't actually understand programming or or uh you know security or safeguards. And we're putting out you know apps now that are you know full of CVEs and holes that you know far beyond than you know what a manual programmer would have you know created. Um so there's just new angles to like the you know to uh to security that we're you know, or to new vectors that we're having to handle. And then there's the you know, these public models and private models people are are are using that are uh not necessarily secure just because they're hosted by a big company. Like uh, for example, like there are you know prompt injection attacks where you can get manipulate or essentially social engineer an uh uh AI tool to give you information that it's not supposed to tell you, or you know, act outside of its safeguards and do something bad. Um, then there's you know just AI behaving as normal and not people not trying to manipulate it, and it might do something bad anyways. Um right now I think open AI is being sued for uh for feeding into somebody who was schizophrenics. Uh I think he was schizophrenic or had some sort of mental health issue where he was delusional and and he it basically encouraged him to kill his family, and that's what happened. Um on top of that, there you know, there are people who are experimenting with private model uh models and downloading um data sets from things like Hugging Face and other places. Yeah. But if they don't understand that, you know, uh if if they don't, you know, change certain uh defaults in like Python Torch, for example, when they're in jet when it's uh processing those uh data sets, someone could actually implant a backdoor in the uh in that data set that basically gives a reverse shell to your back-end network. Um these are just some of the threat vectors. Uh and then people also forget that just like any other piece of infrastructure, uh, you know, AI is, you know, if it's using a SQL database, if you're actually integrating it with other tools or other like plugins or other parts of your network or or uh your systems, it you know, it has to be secured too. And if somebody's able to manipulate the LM input and you know, they might be able to actually access other systems. And so I I you know, as as we're watching like in real time the industry change and people shoving AI into everything from you know toilet cameras, uh hilarious story. I don't know if anyone saw this, but like there was a uh there's a I know it sounds ridiculous, toilet cameras, but there's a a tool that was being sold, like a camera that's supposed to like use it, that's supposed to be a fee that checks your your bowel movements, and it sends it back the data back uh to like uh an AI app that basically tells you like you know potential gastro problems you have. Well, those things uh that was found to have a vulnerability. Um and so we're we're shoving AI and uh into every single uh you know thing, it seems like. I I mean at this point, I'm sure there's an AI uh AI enhanced toothbrush, but we're not really securing it as we go, or at least a lot of the industry isn't. And we're seeing we're just kind of picking up as as the mess happens or the slop happens.

Chris

It's almost like it's almost like the the whole kind of Wi-Fi craze on every consumer product. But completely exacerbated and made 10 times worse, I imagine.

IoT Parallels And Integration Dangers

Tim

Yeah, I mean IoT is like actually has been struggling with this for for years, right? IoT specifically, because the people creating the IoT um, you know, the uh all of the tools, like the cameras, the sensors, whatever, those people aren't interested in security. So you have like this barely, you know, very, very low functionality uh device, some kind of IoT device that has like no security baked into it. It'll just talk to anything on the network. Yeah. Good, good, good uh analogy.

Katherine

Or hard-coded credentials.

Tim

Right. Or it's gotta be in a flat network or something, right?

Katherine

Yeah, like um I I think that uh and a lot of these things are unregulated for the most part. Like uh I f as far as like IoT, for example, when it first when it that first took off, like a lot of like there was a lot of uh lack of regulation. So, you know, you you're still picking up uh we're still picking up like issues with like old IoT devices that are just unpatched, hard-coded credentials, default credentials, default SNMP strings that you just can't change. Um, but AI is similar to that, except it's people are plugging it into everything. Like, oh, I'm I want to replace my call center, so I'm gonna go ahead and you or at least cut down like 75% headcount. So I'm gonna put a chat bot on my website and I'm gonna integrate it with the ordering system. I'm gonna integrate it with the billing system, I'm gonna uh with the uh shipping system, I'm gonna integrate with these different things like HR. And if somebody's able to manipulate that chat bot, they may be able to, you know, get other people, you know, uh uh enumerate other people's data, make changes to orders, give themselves a discount of 100%, things like that. And we have seen that actually start to pop up and happen.

Tim

Yeah, wasn't there uh one of the ver, and this has been over a year now, I bet I wasn't there a uh I remember reading a story about a car dealership in like Canada or something where the the chatbot was told essentially like you're gonna give me a free car, and then some court ruled that because the chatbot was acting as a representative of the company that it was liable, like for the there might have been one for a c for a car dealership, but I do remember a story specifically about that that went to court uh over a uh discount discounted uh airplane ticket.

Katherine

And that one they won. They were basically like, if you're gonna have it represent your company and put this in writing, then therefore you're bound by it. And so the case law uh in litigation actually has basically supported that um that uh if the chatbot says it and is acting as a representative of your company, you're you basically have to to honor those print uh honor it so far, at least in the US. I mean, it it might be different worldwide uh over time, but in this rush to kind of save a buck and also, you know, you know, uh appeal to uh to investors and the stock market of like we've got AI and everything, uh we're opening ourselves up to other other issues as and we're seeing that kind of play out in live in real time. I mean, if every customer I go into these days, like I ask them what their AI strategy is and uh or what they're doing to like prevent uh somebody from uploading something. And it's yeah, like I said in the beginning, it's not necessarily malicious, but like, you know, people just you know have started to kind of rely on these tools because it's an easy button for them. So what are you doing to stop them from uploading something really proprietary or extremely like sensitive to Chat GPT or Grok or whatever other tool they use? And they're like, well, we have a written policy.

Tim

Oh, okay.

Katherine

That's nice, but how many times have people been social engineered, phished, whatever, and you know, ignored the written policy because it's just easy and people don't tend to read the terms and conditions very well?

Tim

Not only that, but once you've uploaded it, who cares about your written policy, right? Your written policy doesn't do anything actually. It's not DLP, right? My written policy is not data loss prevention. So actually, this is a good question. Does DLP work? Like does the would a DLP strategy like save you in this situation? Because it's in this situation with chatbots.

Katherine

In certain ways, but you have to also remember like people take stuff home, take USB drives home, take their computers home, take uh, you know, do things on home computers, BYOD. So we've got like, even if you're trying, if you've got like everything locked down on the the the work computer to some degree, like um, you know, there's still potential ways, unless you're locking everything down, you're not allowing BYOD, you're not letting people VPN in, you're not letting, you know, like doing USB uh storage devices on any company machine. And you you know, and I don't see that level of control in most environments. And I don't think, yeah, you guys work with customers all the time. I doubt you guys see it as well as well, even with your employers.

Chris

Yeah, for sure. I mean, yeah, I think we're both probably in the in the camp where you know they're we have we have tools that are kind of evolving to to stop you from putting corporate data into some you know off-the-shelf model and things like that. But like you said, the thing is we've we've, you know, with COVID and everything, this this whole idea of the the kind of hybrid remote workforce, like the thing is everyone's using Office 365, everyone can access everything on any device they want to. Um, and it's just a matter of time before they put it into something. Because that's the thing. If you have a written policy, that's one thing, but if you leave it purely up to people to follow the policy, that's not gonna fucking happen. Like the people are gonna make errors, and uh, you know, not everyone is as you know security savvy as uh you know us in in the field, right?

Policies, DLP, And Human Weakness

Katherine

Well, even the people in the field I've seen make mistakes. So I'll I'll give you one anecdotal story. I have a friend who's a pen tester, really smart guy. Um he uh and he writes, you know, when if you understand how pen testing is done, like they do, they conduct the pen tests, they find the vulnerabilities, and they make a very detailed report at the end with our with uh an appendix with the artifacts that basically show all the different ways somebody could potentially compromise your network. Well, he had a customer who was like a direct like a CISO, uh very smart gentleman, um so you know, supposed to be like, you know, very security savvy, right? Well, he looked at this uh this gentleman, my my friend's report, and he thought that ChatGPT uh must have written it. So he uploaded the report to ChatGPT and asked ChatGP, did you did you write this? This looks, you know, seems too professional. And uh and then he went back to my friend and kind of you know casually mentioned what he did. And my friend was like, wait, you you you uploaded a pen testing report that details all the ways that your your network and your servers and your your assets could be compromised to a model that by default learns off of whatever you upload to it. And he's like, Yeah, because so in my point isn't to like call this gentleman out who's a CISO and say, like, say he's a bad person, because he's not. We're all human. People make mistakes and with new tools and not understanding like the defaults and you know how these tools work, it's very easy for even like somebody whose job is security to mess up. So if it's somebody like that can mess up, like just your average user, like they're not reading the fine print of what their new HR policy is. They're not they're not going to necessarily uh know like um, you know, like this is what I'm, you know, I I I I read it, I saw that update about Chat GPT and you know, public LL models. I know better now. It's just humans are weak are the weakest link. And as these even though these models are really popular and well known to be used, like they have been, you know, found to have security holes. Like I think a couple months ago uh people found out that uh that Google was crawling uh public uh Chat GPT uh chats. And so people were like Googling like a certain uh prefix, like a Google dork for a certain chat GPT uh prefix. And they were able to find like board meeting minute notes for private board meetings and all sorts of stuff for like a good couple weeks. They were just able to like grab all this proprietary, like see the chats that other people uh that other people shared within their own company, and it was just available on Google. Like, even though the, you know, even if you turn off all the settings to learn off of whatever you upload and you you lock it down, you know, you're still trusting this other company that's not very not necessarily very security focused to not have exploits. And so people are like using these tools as as therapists, they're using them as like to talk about their marriage issues, they're using them to do the help them do their jobs, they're completely becoming self-reliant for both personal and uh business related things without a thought of like what's gonna happen to that data or what would happen to me if you know that becomes public one day.

Chris

Yeah, for sure. That's the um yeah, I mean we've we've spent how many years now talking about how sensitive and how pro how much we need to protect all this data and and now it's just like people now have this open vessel to just like plug it into somewhere that goes into Netherland, like like with with in a fraction of a second, right? So it's like um I'm I'm curious, like from your perspective. Obviously, you've been in security for a long time. Um like what was the threat landscape like before all this popped off? And like I because if it felt like the the network threat landscape was not stagnant, but it was like like probably slower evolving than than other places, and now like the you know, like the the floodgates are now open with this with AI being integrated into everything.

Pre‑AI Landscape And New Vectors

Productive Uses Of AI In Security

Katherine

That's a good question. So like before all of this, I I wouldn't say it was stagnant because there's always like new CVEs and risks coming up, but like sure, you know, like um beforehand, like but let's say before we start before the cloud, but there was just the internal network and we protected you know things coming into the internal network, and there was this nice crunchy hard shell, and and then as you know, remote work took off and cloud took off, now we have to protect against things that are off-prem and people that are roaming. So we have to start enforcing controls for people remotely and BYOD. So things do change over time. Then there's like you brought up in the beginning of the call, IoT. Now we have to uh you know have everything connected and somehow protect with these antiquated, like, you know, tools that don't get updates very often or just were you know engineered without really security in mind. And now we have to protect that not only from outside threats, but inside threats as well, because you know they're very great, uh they're a very great uh uh target for lateral movement. Um so it it it it it's just kind of a new attack uh uh vector that you know is extremely large because again, it's it's really hard. It's uh easy button and human humans love easy buttons. Humans love something that just makes it easy for them uh to go and and and you know rely on. Um and at the same time, there's so many of those targets. I think the like last time I pulled up like a list of like like um uh just gen AI tools, it was like I I think my list was like a thousand, three hundred something, and it just grows all the time. So how do you secure every single thing except locking it down and pretend not letting anybody get on it if they're and not letting anyone take you know travel with data off-site? And even then, like if you lock down, lock everything down, lock your laptops down, lock BYOD down like Fort Knox, there's still always gonna be somewhat of a way uh for people to get this stuff out. So what you kind of have to do is like, you know, it's gonna be, you know, it is gonna be a bit of education, education for the end users, but also like understanding that like they're you know, um, either you're gonna have to build your own like convenient model for them to use internally. Um and I you we see customers doing that a lot too, like, you know, they get copilot licenses or they build their own tool inside. But then now you have this new, you know, this new situation where copilot or the LLM tool you built or Gen AI tool you built for your internal use, now you have to secure it as well and make sure that it's completely secure and keep you know up to date on that. Yeah, I wouldn't say like I I guess my long way-winded way of saying security wasn't stagnant stagnant before. There's just always new threat vectors that are kind of being added in. This is just the newest one that you're hearing about because it it's touching on everything. And before, like, you know, IT people heard about the cloud, but you didn't really like and like regular users, like people who don't work in tech, didn't really hear about the cloud as much or knew know what it was or understood what like IoT was as much for uh but like like AI, everybody like I can ask my my grandma what AI is, and and even though she has dementia, she'd still probably know like have hurt had heard of it. Like regular people who are not techie are using it every day for like personal and and and business related stuff. So, you know, it's it's it's something that touches every single end user. I I think even like the the the the oldest like anti-tech person that like you work with probably still like has touched it or or played with it in some way, shape, or form, whether they like it or not, because everything's being shoved AI is being shoved into everything. Now I I just want to actually kind of circle back for a second. You know, when I say like I'm talking about AI and I'm like kind of dissing on some of like the uses and how insecure like uh the field is in regard. We're just not mature yet as far as like uh security with AI yet. I I don't want to sound like I don't think that there are some good uses for AI. Like I think that there are some amazing uses for like like for that help our jobs. What people think of when they think of AI is a lot of those chatbots and LLMs and stuff like that. And that's not the best use of AI from in my opinion. Like if I'm trying to correlate like like the average enterprise, I think the last time I checked, like a large enterprise, like the amount of different like security vendors and tools they use is somewhere like on an average of 80. No, there's no human being that's able to look at 80 different products, logs and everything, and correlate like a needle in a haystack of an attack. It most of those tools will will alert you for like low-hanging fruit, or if you tune the the alerts right, you know, up uh and you make it up very sensitive, a human being is just just like alert fatigue happens. So I think tools that like help correlate massive amounts of data that we get are uh is amazing. Like I think that those kind of like AI-driven tools can help, you know, move past what a human is capable of doing in a short amount of time. I just don't think that we need it in every single thing, our toilet bowls, our bidets, our toothbrushes. Uh and I do think that the over-reliance on it and the lack of education from uh from a public standpoint of like how these tools are not necessarily private, that your data like it, you know, for example, it's unlike a therapist, you know, they they uh they can't people can subpoena those chat logs. Um, you know, people can eavesdrop on them. There are ways that they have leaked on the internet before. And you have to be just kind of aware and and have that awareness that it's it, yes, it's an easy button in certain things, but you should not over uh there, you know, you should not over-rely on it for all your personal stuff. And you should also understand that it's not always accurate. A lot of my AI security conversation with customers lately hasn't been about a certain product or like here's what you need to like um well, here's what you need to do to protect AI. Because a lot of the times when I go into a customer, a customer isn't aware of like the threats that kind of hide behind AI. So a lot of times I start out with a 16 minute presentation where I show them videos of like me owning different like LLM models, me show showing how you know downloading the wrong data set to a private model could you know give you a backdoor into like some. network. I do things like that to kind of show them like to make them aware of like the things that can happen so they understand why like why you need to have like kind of like IoT. You might need to fence off certain you know products or block certain products. And the ones that you do use, you still need to kind of treat it like uh like to some degree IoT, where you only allow it access to you know what it's supposed to have access to only. And you need to be very careful about what permissions, what plugins you give it access to, like read-only versus like you know full admin access to that backend ordering system. So uh you know I I think that a lot of times you you'll see a lot of vendors go out and say you need to have AI security um so here's our product but they don't really do a good job at level setting of like why you need AI security. Like these LLMs are necessary you know by by getting like you know a license with open AI and plugging them into all your back end systems for a custom customer service uh chat bot you know might be a bad idea if you're too generous with your permissions. There's ID uh IDOR uh uh vulnerabilities that happen all the time where where somebody's just like here give me a hundred percent discount okay sure no problem like things happen where they're just you know where those those plugins or those integrations or have too much too many permissions and all you have to do is you know social engineer that chatbot into giving it uh giving it or accidental disclosures happen all the time as well.

Tim

Um now I wanted to ask you uh this is all this is all excellent stuff I wanted to ask you what you think about um about the fact that like honestly the first time I saw like chat GPT and all this other thing the very first thing I was thinking of was oh good it's a new way for uh script kiddies to uh to to to become you know hackers because that's all I could think of is like the old scripts right that people would share and then script kitties would just take them and use them in IRC and all this other stuff to just you know launch attacks having no idea basically what the hell they were doing. So what's your what's your thought there?

Vibe Coding And High-Profile Fails

Katherine

I've got an anecdotal story where somebody who was very act actually very accredited and had a a strong security background was led to uh led into temptation by Chat GPT um so this is a kind of a famous story if if any anyone has ever heard of uh uh it was there's even a song about it it's called Slop That Work just go on uh on uh uh Twitter and do a search for U underground UW U underscore underground and then uh the word slop that work um it's a great song by the way but it's based on a on a real story so there's a gentleman that uh that turned in papers for a DEF CON talk got a talk on the main stage and he was going to talk about uh eBPF vulnerabilities and a tool that he created that was supposed to exploit like 70% of ePPF uh implementations out there real a lot of hype it was really excited people like went to the talk I I watched a recording of it it's actually still on YouTube on DEF CON it was at Justice left's death DEF CON uh in August and he was talking about like he was giving numbers like you know I've I found in the wild that 80% or 70 to 80% of uh of uh VMs were still vulnerable to this blah blah blah blah blah and then after DEF CON he released the repo with the tool and it turns out that uh as people like red teamers and programmers were like looking through this it was very obvious chat GPT generated code like references to like if this vulnerable like if if this was a variable this would meet be this like very like it wasn't an actual working application. It never it was just chat GPT slot code that was like referencing like what a real working program would be like you know and and it was called out it just didn't work. And this person who's um ironically his uh his handle he used at DEF CON was vanished vanished from the industry. His like websites went down he just ghosted uh and he was like somebody who had a uh uh the you know GSE like this that's sans highest uh uh security uh certification he had dozens and dozens of like really high uh certification like credentials a really strong job history like he he ran the B sides Romania he had uh a mountain of like like you know industry experience behind him but somehow he decided to use chat GPT or some LLM to help him generate a idea for a talk and then a app for that talk and it just ended up being slop. So you know it it it became a like a news story after DEF CON and people are just like how did this happen? Like how how do you go and like quote all these statistics about a vulnerability that just isn't there and a tool that does doesn't work and uh and uh so confidently do it. Um but yeah even you're you're seeing people kind of use it it fall back as like kind of a a lazy tool thinking that it can't steer them wrong. And you know oftentimes you you know we see AI hallucinations or or untested you know uh vibe coded apps get into production and you know people are like I'm just gonna fix or patch later um or you know if it look works halfway or whatever else maybe you know I'll just be able to repair it on the on the on the fly and it's it's becoming a real problem. Like uh I understand and that that's the that's an anecdote about DEF CON but like I've heard other stories about other uh conferences where the same thing has happened somebody vibe vibed a uh you know through an LLM a a talk and a concept and then it just crashed hard when in uh in you know in reality after yeah it and um so even even people in our industry are are falling kind of victim to like using the easy button instead of like actually doing like thorough testing and QA and making sure what what they actually I mean it's there's nothing wrong with getting an assist to help script something or create an application but know what you're doing and and test it before you go on a conference stage in front of like uh you know a conference of 35,000 people and present this and put your own reputation at risk.

Tim

Yeah I mean you I hear vibe coding all the time and it seems like the definition is over I I don't know it seems like it's all over the place but generally what you what I hear is this idea that like vibe coding specifically is just I talk to the LLM and I you know tell it what I want and then I kind of use the code that it gives me and I don't necessarily need to know how it all works. It just it just works which is so dangerous and so full of holes like the giant freaking security holes that must be in anything that gets shipped out of that.

Katherine

And I there vibe coding can mean yeah you're right it it's a it's a range it's a spectrum um like I know some some of the best programmers I know will utilize like an LM because they have gotten better over time too you know on vibe coding like Claude and other tools like Gemini they will utilize it to like for low-hanging fruit like little aspects of like you know code and and they they also like double check and stuff but it it saves them some time. But then you've got people who are like bragging on LinkedIn about how they know nothing about coding they don't need any programmers and they just vibe coded this app and put it out there and they're selling it already on like you know Google Marketplace place or Apple store and like that's that's where I think it's a little bit dangerous. If you're not doing any QA, if you don't understand what you're doing if you're not having somebody who is who understands you know programming enough to like to to actually spot check this thing uh you know you get things like the stories like the T app. I don't know if you remember uh the T app that just happened a few months ago. So T app is this uh this app that's supposed to make things safer for women so if a woman experiences like domestic violence or you know uh essay or something bad like or just somebody who made them they went on a date and they made that person was made them feel unsafe and they felt that this person's a danger the T app is supposed to be an app where you can upload an anonymous like a review of this person and say hey um this person beat me up or they did this. Obviously you know like there's there's situations where you shouldn't defame somebody and you know they they're still open to like like lawsuit if if they defame someone but yeah uh it's supposed to be a place that if it if you can still post anonymously uh short of a lawsuit they wouldn't be able to reveal the information but in order to register for that app women needed to share share their uh like driver's license and take a a picture of themselves well uh it turns out like a few month like I think like less than six months ago uh you know it that app was vibe coded or discovered to be vibe coded and all of the women's photos or driver's license everything was in a sitting in like a uh a uh a AWS share somewhere that was like full of T bucket or something yeah like basically available to the entire public and uh and it was downloaded and reshared by like uh all over the place and so terrible an app that was supposed to be essentially like to help promote women's safety and and I mean people watching this guys are watching this I understand how this could be abused and you know if you're not if you're just like if you weren't an abuser that's why like they identify the women privately and you know because if it does rise to the level of a lawsuit then you know the men there is some accountability but for other women a lot of these women they were victims of SA domestic violence other other bad situations and they were just simply trying to keep other women safe from falling into that and unfortunately they were all put at risk because of a vibe coded app.

Chris

Yeah so I mean the app that was meant literally for protection did the exact opposite because of vulnerabilities right so it's yeah it's uh it's a shitty sh shitty way to reach that outcome. I guess like I I don't know like uh like obviously we're in the tech space so we're very close to this and we hear about all the time I'm sure like like you said at the beginning of the episode I feel like we're all a bit um experiencing AI fatigue just because it's in everything and it's not just because it's in everything that we deal with this is like expanded into the consumer market like to like way beyond what what we do right we're talking about you know the Apple just announced they're gonna be using Gemini and all the iPhones and things like that. So like the the number one consumer product on the planet is going to be having some kind of integration with an AI model and we're talking about making art and movies and shit like that. It's like no one fucking wants this slop. Like I like so I think we're all like you said there's there's use cases. Yes it's it's like there's there's use cases for it but like it like it is not this magic button that I think it's been kind of portrayed as at this point, right? It's there's gotta be um some some very strict guardrails put on this stuff or else it's just gonna get out of hand.

Katherine

Well it did you hear about like how like uh copilot and PayPal are now like partnering to like like why why do you need copilot with PayPal or or um another story I just heard was how um uh Pete Hegsworth uh Hegseth is going to allow grok into the Pentagon network because that was like actually I was just like why why do we need like I mean the extra has nothing to do with money I'm sure but but like my my point is that like it it's being introduced like I if you open a Windows 11 like uh like laptop or a computer now and open notepad like notepad just basic notepad right it's it's integrated with Copilot now. Why? I saw that I saw that when I got the latest update yeah and I completely agree with you not only our industry but like everyone I think to some degree is feeling some degree of like AI fatigue. I think that like uh the the CEO of uh of Microsoft uh like recently was just confused like why he was uh there's actual headlines he was just really struggling with why people are not excited about copilot and AI being put into every aspect of Windows 11 and then he politely asked everybody in a separate headline uh to stop calling AI generated slop slop. Yeah and of course of course everybody the internet responds because the internet is hilarious sometimes and they just basically changed the the Microsoft logo to micro slop and it's just been kind of going viral for like the last month. It's it's wonderful. I was like that sounds like a Susan's joke on its own like the stop calling it slop I yeah it's so like I understand like your viewers are probably going to watch this watch this in the beginning and be like oh god not another AI thing but I mean I I see like I'm gonna be honest with you I see AI there there are some applications of AI that are really powerful but I don't see it as something it needs to be in everything. And and I if I mean as I working in tech I I ex and having you know to use it every single day I'm you know I still see it I'm still like real a realist about it. I don't get excited about it being on my bidet or my toothbrush or being in notepad. I rather would like to I I like I like to see a AI or LM tools as something that I want to use when I want to use it's when I find it's going to be something that's convenient for me. I don't want to have it shoved into everything and forced on me because at that point like it's you're sacrificing customer user privacy and introducing other you know holes um instead of actually like having a natural and you know and holistic uh uh user base.

Chris

You almost want explicit consent every time you want to use it right and that's why I switched to Linux on all of my computers at home.

Consumer Fatigue And AI Everywhere

Tim

Talking to you on Linux impressive that's impressive I have to say that's uh yeah that's that's impressive.

Katherine

Um although I'm sure I yeah I was gonna say to be fair like I think uh um I think Mac uh like Apple has done a little bit better about like a uh asking you if you want to turn on Apple intelligence or giving you the option. Microsoft it's like a registry change a couple of like five different uh uh you know options to turn off and then the update might turn it back on and all over the place. Sorry to all my Microsoft friends who work there. I I I adore you guys still it's not your fault.

Tim

I saw something on uh Twitter the other day about uh saying line it linus Torvalds is now vibe coding I don't know if I believe that or not but uh yeah so maybe it's it maybe it's coming.

Katherine

No he's vibe coding like unimportant stuff. He's he's to be fair he he's been very explicit that like you know it it vibe coding it can be useful for things that are not like critical kernel components. Yeah but you know so it's a good time saver. See the cat the cat knows that she's on video. She's like she positioned herself specifically in the one space. Yeah like so he's very like upfront about it. Yeah but like you know like other companies are like we're vibe coding 30% of everything and then they're having patch after patch breaking uh the actual like underlying operating system so it's it's not going well for them but if there's one place we need if there's one place we need AI it's in the kernel.

Chris

I think that's that's the place for it.

Katherine

It's uh so somebody just made a a OS actually it was it was vibe OS go look it up I'm I'm not joking. Somebody was like somebody was like eff it I'm going to make a completely vibe coded OS and put it on GitHub and it's out there. It looks like it actually kind of reminds me of like an Apple personal 2 computer like that old OS like with the it looks a lot like that. He's like some things may work and some other things definitely don't work but it's 100% vibe coded give it a try.

Tim

Give it a try everybody 100% vibe coded give it a try um he definitely admits that most things are not working. Brought to you about true by our sponsor VibeOS but if you look at the code it looks really pretty yeah four thousand forty four million lines of raw Fortran Comments galore very explicit comments. No this is good. I just I I mean obviously this is not something we'll be able to solve well not I say solve but certainly not able to fully explore in the time that we've allotted but the one thing I've I keep coming back to is is how the arc of AI closely follows the same arc and I think it's probably if we go back far enough it'll be the same arc for pretty much any new technology in the tech like this follows cloud almost perfectly right cloud came out was extremely agile there were no security not really much in the way of security guardrails in place. Developers could just spin up EC2 instances or whatever S3 buckets and just and just go and just do whatever they want and there you go. And then after the fact you know it was on the network engineers and the security people to try to make it you know secure and and resilient um I think we're still very far from that piece of that happening on the AI side it's just changing daily it's changing too much and even the way that AI is um being uh integrated right like you know we you've got MCP you've got now you've got fully autonomous agents and like just every time we abstract another layer on top of it uh it just you could just see like the holes the security holes just just stacking up um but I I I think we're still I we can't be even so far from making that happen right like where somebody's gotta I don't know who it is but at some point you've got to create some kind of security framework around these agentic AI or MCPs or whatever.

Katherine

There's certainly uh there's certainly a few uh uh frameworks that are out there but it's you know it's a moving target right now like you know like MCP for example is only been out for like about a year um so it's you know the there's new tools and new uh new ai uh you know new AI frameworks or like new AI hot things coming out like AG uh AI MCPs things like that so it's it's kind of a bit of a moving target. Um I I think what you know I think we're kind of sitting on I think we all kind of sense this from a tech industry we're sitting on a bubble and uh unlike unlike cloud I don't feel like cloud was quite like this kind of hype bubble. I mean yes there was there was like a lot of like high hype around uh cloud where people were like oh are we never not gonna need network engineers anymore. Well how are you gonna collect connect to the cloud if there's no network? Yeah but um but at the same time like uh you know it it they didn't shove cloud into every single thing and every single aspect of our life because there just wasn't the ability to these days though like people are finding any way to shove AI in there just to get more VC money funding on people or a uh investor boost and um and it's it's going so fast but like you know some some companies are kind of starting to clue in that like the the consumer appetite really isn't there at the same time like uh I mean I think Dell just recently uh kind of admitted that they're not gonna prioritize putting AI first into their uh laptops and their their uh computers because CEO kind of admitted that like the consumer the consumer base there wasn't really a big appetite as much as they thought there would be um but like when that bubble pops it's gonna be interesting to see where we go from there. Like um I s I think like AI is here to stay in certain things but like I think that like kind of like the dot-com boom like where there was like a retraction of like you know of of like this this fast growth there I think we're gonna see a little bit of that like um there are certain things that like have been somewhat disastrous like uh you know and that has destroyed industries and stuff like like customer service industry for example like trying to get rid of customer service or tech support in favor of a chat bot has just increased customer dissatisfaction and it ha I I doubt it's actually helped uh you know help businesses in the long run accept well they've it's probably saved them opex for you know in the short term but you know it's probably also pissed off their consumer base and now if people start just going towards companies that actually have real customer service or feel like they can get through to a person and their money starts to shift around or you know people lose like the actual cost of these AI tools go up because RAM GPUs other things like our are I mean I think 18 months out their their uh all like future ram RAM production is like paid is uh you know paid for and allocated yeah it's so it's gonna be interesting when like these companies that were like fired all these people to save a buck can't afford uh you know afford the licenses because these AI tool the companies need to start making money like open AI I think they're how much money billions of how many billions of dollars are they losing yearly um at this point?

The Bubble, Costs, And Viability

Tim

I think it's like well there's they're they're playing a debt game but yeah like they're but that yeah like every single subscription loses them like several thousand dollars like every $20 subscription costs them like two thousand dollars or something like that.

Chris

But like you said that the at some point there's gonna be ads injected into this thing. It just depends on who's who's gonna be the first Domino to fall and how bad are they gonna fuck it up. Like it's like that's gonna happen and then the other mod the other companies are going to learn from that and probably do ads a different way and you know eventually it's gonna that's the only way it's gonna make any fucking money and it's gonna like like like you said with customer dissatisfaction I think that's gonna sour even more and I'm hoping that that kind of levels it out and and it brings it to kind of a plateau. But like like you said like it's it sucks because as much as I hate seeing it in things like art and things like that, there are use cases like the like you said the customer service chatbot literally I used the chatbot on my uh home ISP website the other day and I was I wanted to get a static IP address for my house and I was like oh this is gonna be Fucking nightmare. Like, I'm gonna have to like ask it for an agent, I'm gonna have to call somebody, blah blah blah. It literally allocated me a public IP, assigned it to me. I rebooted my modem, and I had it within like an hour. Like, and I and I didn't have to do anything. I was I was amazed, I was genuinely amazed. Um, so it's getting better, but like I don't know, the sprawl is just so exhausting.

Katherine

And when yeah, it's I will say that every single time I've tried to do it, use a chat bot almost like it's like I don't understand what you're asking. I don't understand what you're asking. It's usually something pretty easy, like hey, I need to cancel my service, or hey, I need to change my plan, or something like that. And it's just like um, so you got really lucky. It was just like a technical question that they could do. Cool. Like I I've seen some pretty bad failures of like if it's especially if it's like any any question that's just not like a like a basic like technical pre-canned, short, succinct uh technical like ask that I can get, it it it gets hard. I was also uh uh there's something else that was like uh tip of my tongue. Um oh yeah, I also wanted to mention like there's a really good MIT study. Uh I know Tim has seen it, but like it came out like last year where uh where MIT went to like uh a number of different companies to ask them if they've made money on their like they basically asked a series of questions and found that like something like 90 something percent of companies that are like implementing AI tools or AI uh projects or you know, some kind of AI service were losing money on it. Um so so far this industry, like we're jumping on it and we're making money in like from like investor hype at this point, but from the actual AI itself, most companies AI uh AI um AI projects have actually failed to yield any like in uh any actual direct um uh any uh direct uh profit. And so I think that's gonna also eventually catch up.

Tim

I mean it has to, right? This whole thing, like I said, this whole thing right now is a debt game. I was reading something today that like the AI bubble is not on um future profits, but on the debt that's being incurred to like create all these data centers and like all the the materials and everything. And that's the the real the real bubble is actually like kind of inverted where it's all based on debt and not on on profit, which is makes sense because that's all they've done is run a huge deficit. The entire AI industry is just running on a huge deficit that they're just kind of passing the buck around to each other, and uh that that'll be the bubble that pops is the debt bubble.

Katherine

Yeah, and it kind of scares me at the same time because I don't want to see like you know, I don't I hope, you know, I I genuinely hope when that bubble pops, it's not as impactful to you know the uh economy as I think it will be. Yeah, it's one of those things that I hope I'm wrong about because you know it does worry me that like you know, we we already are seeing in like in the tech industry like a lot of instability and uh and it being really hard out there. And when that bubble pops, I don't think it's going to blow back in a way that's gonna make it you know better for us, uh you know, people who are working out there. Uh I think it's gonna hurt the economy as a whole.

Tim

Yeah, I mean yeah, there's there's a lot of math shown that uh the the tech stocks in general, but like this rally is actually propping up the stock market right now when the rest of the stock market is going to the red. So yeah, there's a lot of potentially bad things that will happen when the uh inevitable comes, but I'm not here to spread doom and gloom.

Katherine

Yeah, I hope it's not as bad. Like I I'm I will be delighted if I am wrong that it will be as bad as I think it is. But you know, I'm then again, I'm not an economist. I'm not, you know, a financial person. I I I read a lot of stuff by people who you're sort of smarter than me that are worried about this bubble as well. But you know, hopefully they're wrong too.

Tim

Yeah. Well, I mean, something that'll make it crash a hell of a lot faster would be a lack of would be the first, you know, cyber state sponsored or or otherwise cyber attacks involving AI, which I think we've actually seen a couple already utilizing AI or directed at AI, and uh, you know, whatever that looks like. So stuff like that will shake confidence, you know, faster than anything else. So I think again, I think ultimately what we were here to talk about is that you know AI needs some kind of security framework for the amount of data that they have and the type of data that keeps ending up in these models and whatnot, whether private or public. Um, we've got to have some kind of security framework if this thing is gonna stand any kind of test of time. I think.

Frameworks, Threat Models, And Demos

Katherine

I mean, uh the security frameworks are being built, or like there are some out there currently. Um what I worry more about is that I don't think the the the actual companies that should be adopting these frameworks truly understand what the threats are. Like that's why that's why when I go and talk to customers, I I you know, I instead of talking about like here's a product that you can help help you, here's a product, here's a product. And I wouldn't want to come on here and do it either. Um what that's why I would usually start with like different talking about here's prompt injections, here's you know how I can ex, you know, how somebody can exploit uh the back end, here's you know, SQL injection using your like chatbot or LLM. You know, like here's here let me show you some videos of like cross-site scripting attacks where I can attack another user like that that's on that LLM to give you an idea of why like you should be prioritizing like you know you know AI security in general. If you're using it, you know, in your internal applications, if you're allowing users to access public models, you know, you should be thinking about security more than just like what they're uploading to it, but how it's being applied, what permissions it's being granted, uh, and what you're downloading and adding to it as far as training data. And so I I try to, you know, kind of, you know, explain to them what the problem, like the problems I've seen in the industry are so they at least understand why, you know, they might want to adopt some of these frameworks or adopt, you know, products or tools that will help them protect it. Because at the end of the day, I don't think that's been communicated enough. I don't think people understand that like you could potentially do a SQL injection attack, you know, against your C, you know, the SQL database you gave that you know chatbot access to. Yeah. And created like a I one of the videos I show is like me using a chat bot to like create an admin account at like admin a username and password for like the back-end server using a basic like prompt injection attack or getting access to other people's chat mess history by you know basically social engineering the LLM to thinking I'm an admin.

Tim

Nice. Oh man, yeah, there's so much more to to to uh yeah, there's so much more to dig in on this, but I think we need to probably wrap it. Um as much as I would love to keep going, is this this is very this is honestly, I think the AI's the security piece actually is what I'm most interested in about AI and less much less about vibe coding and all the rest of it. Um but anyway, all right. Well, thanks for joining us. We'll definitely have to have uh Catherine back on. Um and I I I was about to say where can we find you, but you we covered that I have all the way at the beginning, I think. So uh if you guys don't know. You can remind them.

Katherine

Yeah, if you want to remind them, that's fine. Yeah, I'll just say it again. If you uh want to see all my AI meming glory, uh it's I'm on Twitter at KMCNAM1. That's uh just my Twitter handle. Feel free to follow. I will probably post uh memes that laugh at the insecurity of AI at times. But again, I I mean I I'm not anti-AI at the same same time. I use it every day in my day job. I use it, you know, for tools uh for for studying at night, but I'm also very realistic about that there's huge gaps in security uh with with AI, both public models and private, and the uses of where they're jamming into every tool right now.

Tim

Yeah. All right, excellent. Well, we'll go ahead and uh cut it there. If you uh enjoyed this episode, please uh like, subscribe. I'm just kidding, you probably already subscribed at this point. Uh share with a friend, uh tell everybody how awesome Catherine is that she'll she she does not hear that enough, so make sure you let her know. And uh we'll see you guys next time on Cable 12 podcast.