Cables2Clouds

When AI Deletes Production: Guardrails, MCP Risks, And The Surveillance Creep

Cables2Clouds Episode 48


What happens when an AI agent decides the “best” fix is to delete production? We unpack the AWS outage tied to an over‑permitted agent and zoom out to a bigger pattern: systems built for maximum utility and minimum restraint. From MCP’s connective promise to its post‑auth sprawl, we break down how agent toolchains turn small mistakes into big blast radii—and how to fix that with real guardrails, least privilege, and human‑in‑the‑loop at destructive boundaries.

The conversation widens to public deployments where abstractions fail loudly. A military nutrition assistant built on Grok reportedly ran with minimal safety constraints and instantly entertained unsafe prompts. That’s not a funny glitch; it’s a policy failure. We talk about what genuine safety layers look like in high‑stakes settings: capability firewalls, explicit refusal policies, robust logging, and escalation paths for sensitive actions. Ethics, compliance, and operational discipline are not speed bumps; they are the steering wheel.

Privacy takes center stage with a Ring twist: footage stored in the cloud despite no subscription. Helpful for a kidnapping investigation, yes—but also a wake‑up call for anyone who assumed “local” meant private. We offer practical steps for home security that actually secures the home: VLAN segmentation, strict egress controls, and device choices that still function offline. Then we turn to Discord’s plan to gate “mature” spaces behind global face and ID checks via Persona, the security research that raised red flags, and how user pressure pushed a rollback. If regulation demands verification, the right answer is minimal disclosure, not maximal identity.

We close with a rare combo: a zero‑day disclosure delivered as a catchy music video calling out Malwarebytes for hard‑coded creds and privilege issues—followed by a commendable vendor response. It’s a model for the culture we want: researchers spotlighting flaws, companies fixing fast, and users gaining safer software. Throughout, we keep returning to one principle that ties AI, identity, and devices together: trust is a permission. Design for refusal, constrain by default, and say clearly what your systems must never do.

If this resonates, follow the show, share it with a friend, and leave a quick review—what guardrail would you never ship without?

Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

Check out the Monthly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Tim:

Hello and welcome to another monthly episode of the Cables to Clouds newscast. And right on time, my cat is here. I told you this was gonna happen. She's very good at this. Anyway, I'm your host Tim for this week. Chris is nowhere to be found, he's busy doing Chris things, so we have our good friend, both of ours actually, Katherine McNamara, who's joined us to do the news with us, and maybe more. So, Katherine, hello. How you doing? Hey, nice to see you guys. Excellent. And your cat. Yeah. Don't worry.

Katherine McNamara:

By the end of this episode, mine will come dancing around as well.

Tim:

Well, yeah, I see Luna in the background there. Zelda literally waited until I hit the record button and then jumped up here. Anyway, okay, let's jump right into the news. We have some really interesting and some very fun articles for us tonight, and Katherine's gonna help us go through them. So I'm gonna start off with one that everybody's probably heard about by now, but just in case: AWS has had a second outage now caused by Kiro, their coding agent. And what's interesting is this is now the second time that Kiro has essentially brought everything down by deleting an environment and rebuilding it, deciding that that was the best way to solve the problem it was tasked with. But what I find particularly interesting this time is, remember that Kiro is a product, right? AWS is selling Kiro to its customers, like, hey, you can use this for your production systems and everything. So in order to solve the problem of how do I give an after-action report about this outage being caused by Kiro, but make sure that Kiro is not to blame, they've decided that what actually happened was that Kiro was given too much leeway, too many permissions, so it was able to do this deletion and recreation when it shouldn't have been able to. And that's not a problem with the AI, that's a problem with permissions. So nothing to see here, folks. I don't know. You took a look at this as well, Katherine, I think. What do you think of this ridiculous story?

Katherine McNamara:

Yeah, so my understanding is basically the AI agent was like, I don't like this code, it's bad. So it thought the solution was, let's just nuke it all and recreate it later. And obviously it didn't bring the services right back up. I know that a lot of companies right now, Microsoft, AWS, any major tech company, are trying to find a massive use case, something they can consumerize and sell for AI agents, AI in general. But this is one of those cautionary tales: if you give it too many permissions and it's not perfected or ready for prime time, these kinds of things can happen. And this isn't the only time it happened with this Kiro agent, it's the second time. AWS is that place where everyone's paying a large amount of money, they're paying for the five nines, because being off-prem is supposed to be reliable, and then you get a 13-hour outage that's pretty damaging for the brand. And not just one outage, but two outages that were completely preventable. We always make a joke that these big outages for a cloud provider are always DNS or BGP. This time around, I guess the three we're gonna have to start looking for is: is it AI, DNS, or BGP?

Tim:

AI has entered the chat. Yeah.

Katherine McNamara:

AI has entered the outages, that's for sure. And that's not me disparaging all AI by any means, but it's costing a lot of money and development to create these tools. And I think right now we're putting the cart before the horse by giving it so many responsibilities, because all these large companies are trying to get some money back from these projects. I think these tools are amazing, Claude, Kiro, all that stuff. But it's one of those things where you can't give it all the permissions and say take the wheel and drive yet. We're still far off from that, I think. But we're getting there. These kinds of stories just remind us that we need to ramp up into it. We can't give it full control yet.

Tim:

Yeah. Reminds me of the ClaudeBot thing from our last episode, this idea of taking an agent, giving it access to all of your life, your calendar and all this other stuff, and then how it went and did all sorts of crazy stuff. But yeah, I couldn't agree more. You give AI too many permissions and then, shocker, bad things happen. I think this one's particularly funny because, again, this is a product. Now, I guess kudos to Amazon for eating their own dog food on this one. But yeah, I couldn't agree more that it's not quite ready for prime time for whatever they gave it to do.

Katherine McNamara:

So this actually reminds me of another story, and I'm sorry if I'm taking this in a little bit of a different direction, but it's one we just saw yesterday, and it's very similar. Even smart people can be susceptible to this. The person in charge of Meta's AI safety recently gave her ClaudeBot full access to her mailbox. And it came out just yesterday that it wiped her whole mailbox, completely, while she was going, ClaudeBot, stop. The whole chat was published. And this is somebody whose entire career is AI and AI safety and putting up guardrails, and even she was susceptible to giving too many permissions to an AI agent, and fell victim to it very quickly.

Tim:

Yeah, I mean, even the people that are supposedly good at this, or supposed to be responsible for this stuff, can very quickly fall into the trap, right? I've noticed, I've been using Claude Code for something that I'm building for Cisco Live, and it's very easy after the first few successes, like, oh, it's doing what I want, things are going well, to let it kind of snowball. But yeah, these things can very, very quickly get away from you.

Katherine McNamara:

I mean, these are cautionary tales not to say don't use AI, but to understand that you should have guardrails and checks in place, and permissions restricted to essentially least privilege, and always be prepared for an AI failure, or AI doing something you don't expect, and have a contingency plan in place.
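The pattern Katherine is describing, human-in-the-loop at destructive boundaries, can be sketched in a few lines. This is purely illustrative: the verb list, `run_tool`, and `confirm` are made-up names, not from any real agent SDK, and a real system would enforce this at the permission layer, not just in application code.

```python
# Hypothetical sketch of a human-in-the-loop gate at destructive boundaries.
# Nothing here comes from a real agent framework; names are illustrative.

DESTRUCTIVE_VERBS = {"delete", "drop", "terminate", "wipe", "nuke"}

def run_tool(action, target, confirm=None):
    """Run an agent-requested action, pausing when it looks destructive.

    `confirm` is a callable that asks a human for approval. It defaults to
    None, which refuses, so the safe path is also the default path.
    """
    verb = action.split("_")[0].lower()
    if verb in DESTRUCTIVE_VERBS:
        approved = confirm(action, target) if confirm else False
        if not approved:
            return f"REFUSED: {action} on {target} needs human approval"
    return f"OK: {action} on {target}"
```

The point of the sketch is the default: a read like `describe_stacks` runs automatically, while a `delete_stack` is held until a human explicitly says yes.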

Tim:

One last thing before we move on. What was particularly interesting about that one with the Meta safety person is, you look at the chat, and apparently her instructions were: look at my email and then let me know what you think should be deleted. Don't delete the email, just let me know. And it still just went, yeah, okay, delete it all. So even the guardrail of telling it exactly what you want may not be enough. You should definitely be treating data as a very ephemeral thing when you're dealing with AI.

Katherine McNamara:

And that's not to say we won't get there at some point in the future, to the point where we'll be able to trust these tools a little bit more. But I think most security researchers out there, and people in InfoSec in general, are quick to say: ClaudeBot or any of these agentic AI tools are great, but don't give them too many permissions. Don't give them the full run of the house.

Tim:

Yep. Or Moltbot, I think it is now. Is it Moltbot this year? It's OpenClaw now.

Katherine McNamara:

It's OpenClaw.

Tim:

It's OpenFlaw. OpenClaw, OpenFlaw, something like that. Right. Yeah, yeah.

Katherine McNamara:

I'll just call it OpenFlaw for now.

Tim:

Excellent. All right. Speaking of cybersec and other AI tools, I promise the whole show is not about this, but this just happens to go with it. So Cisco has recently weighed in on what it's calling AI's "connective tissue," but we're really talking about MCP, the Model Context Protocol. And we've talked at length on the show about MCP, you know, the S in MCP stands for security. It's interesting to see Cisco itself weigh in, because Cisco is a pretty big proponent of the use and advancement of AI tooling, around agentic ops and other stuff. So it's interesting that Cisco is saying, basically, hey, MCP is vulnerable for many of the reasons we've spoken about on this show before. It specifically points out that MCP has become kind of a de facto standard, and remember, MCP is now being, not supplanted, that's the wrong word, extended, made to be more of a tooling layer for agents, now that agentic is the new darling of how we do AI. MCP is still there as a tool extension for agents. But it's still MCP and it's still insecure, which is the point Cisco is making: organizations should treat MCP servers, agent tools, and context brokers just like you would an API gateway or a database or any other tool you'd be accessing. Figure out how to harden it to the best of your knowledge. I think this is interesting because MCP is a quote-unquote standard, I mean, it is a standard, I guess, but the security piece of it really hasn't been standardized or even really pushed.
They have basic security, OAuth 2.0, I think, so just really, really basic, almost.

Katherine McNamara:

Yeah, just getting the authentication at the front door, essentially.

Tim:

Exactly.

Katherine McNamara:

I think the other issue, which an awesome security researcher brought up on Twitter, is that once you're past the front door, and there are ways to social engineer OAuth credentials out of people or servers, but let's say it's even a legitimate connection, by default that connection has too many permissions, and you've pretty much opened the whole house to it. There's not really a concept yet of least privilege access when you're sharing via MCP. Eventually they will hopefully get there and be able to secure it a little bit more. But as you said, the industry is moving in other directions as well, like agentic AI. So we will see. Right now it's still a cool concept, and I think there are uses for it, but there's a lot that still needs to be worked out and secured. We're at this weird pivotal point in tech where the protocols and tools are advancing at a rate where security hasn't caught up yet. Yep. So we're deploying this stuff anyway. I think when MCP first came out, there was really no authentication at all; it was really easy to just bypass, but people were pushing it out so fast because it was the new hot thing. Same with OpenClaw. OpenClaw, Moltbot, Clawdbot, whatever you call it, is the new hot thing, and people are installing it as root on their actual work computers and stuff.
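The missing piece Katherine describes, least privilege after the front door, would look something like per-client tool scoping. To be clear, MCP doesn't define this today, which is exactly the complaint; the scope table and function names below are invented for the sketch.

```python
# Illustrative only: MCP has no per-tool scoping mechanism like this yet.
# Client IDs, tool names, and the scope table are made up for the example.

TOOL_SCOPES = {
    "reporting-agent": {"read_metrics", "list_dashboards"},  # read-only client
    "ops-agent": {"read_metrics", "restart_service"},        # no deletes at all
}

def call_tool(client_id, tool):
    """Auth got the client through the front door; this decides which rooms
    it may enter. Unknown clients and unscoped tools are denied by default."""
    if tool not in TOOL_SCOPES.get(client_id, set()):
        return f"DENY {client_id} -> {tool}"
    return f"ALLOW {client_id} -> {tool}"
```

The design choice that matters is the fallback: a client that isn't in the table gets an empty scope, so authenticating alone grants nothing.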

Tim:

Um it's nuts.

Katherine McNamara:

But the security still hasn't caught up. There are malicious skills people can download off of these sites, you know, skills.sh, and even OpenClaw itself, if you want to lock it down, you have to read a really thick online security guide to be able to do it. It's not secure by default, when most commercial or enterprise products are the exact opposite: their security defaults lock everything down first and then allow access.

Tim:

And then allow a little bit at a time, as you configure it.

Katherine McNamara:

But a lot of these products, as they're being vibe-coded or put out, they're not being put out by people who have security in mind, or who are really security-centered. So these tools take off, and then we're running to catch up, to secure them.

Tim:

Yeah, I mean, not to get too far off the MCP thing this article's about, but it is the same thing ultimately, which is the hard outside, chewy center, and honestly barely hardened outside. Everything MCP provides access to, or in this case agentic tools like Moltbot, OpenClaw, whatever the hell they're calling it this week. The whole thing's predicated on making it maximally useful, and in order to do that, you have to give it pretty much all of your logins and all of your permissions and everything. API keys to everything. API keys, right, exactly. Exactly.

Katherine McNamara:

Yeah, so there's a couple of problems. Obviously, one, it doesn't encrypt credentials locally. A lot of these tools can also use an exorbitant amount of tokens, just running all tasks through them. There's a lot of little issues there. Yeah.

Tim:

Absolutely. And so these tools, like you said, however they're created at the end of the day, whether they're vibe-coded or otherwise, they're being created in a way that maximizes utility but minimizes any kind of guardrail for the users of that product. And then, oh well, it's a hobby product, or it's a basic sandbox-type product, but then you've got people out there not treating it like a sandbox. What should have happened is that all this stuff should have been sandboxed: hey, this works, it's great, it's a good proof of concept. But instead people are just rushing out and buying MacBooks and plugging all their accounts and API keys into it, you know what I mean?

Katherine McNamara:

Yeah, and even with MCP, people took off running with it and integrated it with everything they could, before OAuth was even introduced. People were testing out malicious MCP servers and were able to connect them and do crazy stuff, because there really wasn't security in mind at its creation, no authentication of any kind.

Tim:

Yeah.

Katherine McNamara:

Yeah, and once you have access, you have full access. It didn't have any concept of least privilege access. Once you're connected, you can start getting access to everything. And I mean, that's the point of it, but at the same time, generally speaking, when you have data sharing or tools, you try to lock it down as much as you can to provide least privilege access. Again, it goes back to: our tools are being thrown out and pushed so fast by the industry that security is running to catch up. Yep. And I'm glad that Cisco has said something about MCP servers, and I think a lot of other security researchers have also spoken up about it, but people don't tend to listen, sadly, until the big companies shout it out.

Tim:

Absolutely. One last thing on this specific article, and this is a great analogy, I think: they liken MCP to a supply chain, like the supply chain attack that got SolarWinds, right? Because MCP is kind of a supply chain type of thing, where it's not necessarily the thing you're doing that ends up being exploited, it's the underlying packages and tooling and whatnot. So I think it's a good analogy to say this lack of security with MCP is similar to the way SolarWinds was exploited with their supply chain attack. Pretty cool. So I've got one more and Katherine has a few. I think I only have two. Well, actually, no, I've got three.

Katherine McNamara:

I've got an ad hoc one that's not really a news story, but it's kind of fun internet news.

Tim:

Yeah. Yes, okay. So this one is the government. The government is now using Grok, specifically the military, and they deployed it as a nutrition advisor bot, I guess. The idea is that if you're in the military, you can ask Grok questions about nutrition and it will give you answers. And it took exactly zero seconds for someone to ruin the day for everybody and ask Grok, well, I'm an "acetarian," I eat by shoving vegetables up there, which are good for this? And Grok just completely went with the bit. No questions asked. It started giving advice on the best vegetables to insert into your rectum. So I don't know if you would call this specifically a failure or an attack on the part of the AI, but come on, we're giving this to the official US military. You would expect there to be a little bit of a wrapper around this thing, but there's absolutely not. You can just ask Grok whatever, I guess.

Katherine McNamara:

Well, it's actually funny. I think I understand why that happened. There was a news story that came out today, specifically today, about, what's his name? Hegseth.

Tim:

Pete Hegseth, yes.

Katherine McNamara:

Yeah, Hegseth. So apparently what he's doing is wanting to have all of these AI tools without guardrails. He's threatening to cancel the contract for Claude unless the military gets it fully without any guardrails. So it's very possible, and this is me just speculating, that he basically said, install Grok without any sort of guardrails at all. And that could be why it's taken this wild turn. Because that's very similar to what they're trying to do with Claude right now, for the US military's instance of it. So if I had to guess, and again, this is speculation, the reason Grok went in this direction is that they didn't give it guardrails, like what they're trying to do with Claude, so it can do whatever they ask it to do. Hopefully not with too many permissions. Well, I don't know if I necessarily trust the government to deploy internal AI correctly. So who knows? We may have all of our Social Security data deleted tomorrow.

Tim:

Well, I'm just kidding. But DOGE already has all that anyway. So, backup.

Katherine McNamara:

So I think it's a decentralized backup at this point.

Tim:

Yeah, probably, we'll call it that. So yeah, I did see that story. Pete Hegseth wanted Anthropic to supply Claude to the US military specifically with no guardrails around using it for combat or surveillance, if I remember right, those are the two big ones.

Katherine McNamara:

Yeah, because one of the things that recently got reported is that apparently when we went to get Maduro, they used Claude to help. So it's very possible that with everything they're deploying in the military, they're just like, we're gonna be badasses, we're gonna have nothing with guardrails. You're all adults.

Tim:

It's terrifying. Truly, truly terrifying, if I'm being honest. And the other one was about wanting no guardrails against autonomous action. So if they want a drone to kill somebody, they didn't want the AI to have guardrails around that, or something. It was honestly pretty terrifying, this article I was reading about what Hegseth wants from Anthropic. And like you said, they're using the power of the government purse to try to force Anthropic to remove these guardrails, and somebody will do it. I'm sure Elon Musk will do it all day. That's probably exactly what happened with Grok. So yeah, terrifying.

Katherine McNamara:

Yeah, but I think Claude and Gemini tend to be the favorites for code-based stuff. I can't speak to Grok because I haven't used it as much for code, but it doesn't seem like it's the favorite there, which is why they probably want Claude to do whatever they want.

Tim:

Yeah, for sure. For sure. Okay. So I'll hand that over to you, Katherine. I think it's a great time to hand it over and have you take us home.

Katherine McNamara:

So I have a couple of news stories. Again, a couple of them are along the government surveillance and security line. I'm sure if anyone has been on the internet or read the news, they've heard about the Nancy Guthrie case. For those who don't know about it, it's basically an elderly woman who was kidnapped in Arizona, I think it was Tucson. And it's been this ransom kidnapping going on for over 20 days now. It's a really sad story, she's an 84-year-old woman, and I don't want to disparage or minimize the terrible thing that is going on. But one of the things that got public during this case is that Nancy Guthrie had a Ring camera outside her door, but she didn't have a subscription. For anyone who uses Ring, they have a subscription model where footage is saved in the cloud and you can access it anytime, and a local mode, where you're not paying for the subscription and you're only able to view it locally. So the thought was that since she wasn't paying for a subscription, she probably believed nothing was being recorded. But apparently there's something in the terms and conditions that basically says that if you don't have a subscription, it still may connect to the cloud servers and may store footage. It doesn't actually confirm whether it does or not, it just says it may. Well, it turns out it did, and they were able to get a good picture of the actual kidnapper, which, good.
I hope they're caught. But it has raised concerns among privacy advocates, because a lot of people have probably heard about Ring's back-end deal with the government and Palantir. And people have been canceling their Ring subscriptions, or throwing out their Rings entirely, because they don't want to be sharing information with the US government or ICE or other places. So it's really brought a spotlight to the privacy concerns around Ring. People thought that if they didn't have a subscription, they'd be fine, it'd just be on local mode, and they didn't have to worry about their information being shared. They assumed it wouldn't be recording. And obviously there was something vague in the terms and conditions, so it's not like Ring was doing anything illegal, but now people know for sure that their stuff is being recorded in the cloud, and they have to protect their own privacy, because some people use these not just for the front door but for actual security, like cameras inside. So God knows what's in the cloud, and there's no way for them to access it and delete it if they don't have a subscription.

Tim:

Yeah.

Katherine McNamara:

So that's one of the puzzling news stories that came up recently. And again, I don't want to minimize this whole kidnapping of Nancy Guthrie. Obviously something good came out of it here, and this is one of the very rare cases where something good did, but we basically found out that Ring has been collecting all this data, all these streams connected to the internet, potentially without the consumer knowing.

Tim:

Yeah. At some point I was gonna get a Ring camera, and then I had that exact fear, basically: okay, I can see who's at my door, but so can everyone else, the government or whoever this gets shared with. The other thing was, if you install a Ring or other Amazon devices, there was an opt-out feature a few years ago that was basically, oh, we can use your Wi-Fi to build a throttled shared Wi-Fi service, so people walking by can connect. It was the weirdest thing, but I quickly opted out of that. I have Echos, I don't have the Ring, so I have the voice version of a Ring, I guess. I'm sure it's listening to me and recording everything I say, so, not much better. But yeah, this is it. If I go up and down my street, I would say seven of ten neighbors have a Ring camera as their doorbell. And when you think about police, that's good and bad, like you just pointed out, right? It could help police identify vandals or crime being committed, but just as easily it could be used by ICE to track people's movements, or whether they're at a certain location. And not just ICE, but the government in general. So yeah, I don't know.

Katherine McNamara:

I think that's true.

Tim:

Yeah, yeah.

Katherine McNamara:

I think it's something like we're recorded without our knowledge somewhere between 70 and 300 times a week, just walking down the street like you said. Now, I've never had a Ring camera. I almost got one years ago, but I never did, thank God. But I wonder whether it will function without any connection to the internet, because you and I have both been network engineers in the past. I wonder if you can completely segment it, keep it all local, so it can only access a sandboxed VLAN that only something like your phone can connect to. That would be the only way I could see it potentially being safer. But the average consumer isn't going to know how to set up a segmented VLAN and block it, and that's assuming the Ring cam will even allow it. For all we know, it may require an internet connection even without a subscription, so someone with a Ring cam will have to test that out. And since that came out, Ring has been trying to do PR to fix their perceived issues, like their back-end deals with the government. So they're like, hey, now there's a new feature: we can find your missing dog using your neighbor's cam. But that made everyone even more paranoid. And at this point, Amazon recently announced that they're ending that partnership, so that pressure did work.
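The local-only setup described here, a sandboxed camera VLAN with default-deny egress, can be sketched as a simple policy check. All addresses and subnets below are hypothetical, invented for illustration, not taken from the episode:

```python
from ipaddress import ip_address, ip_network

# Hypothetical addressing: adjust to your own network.
CAMERA_VLAN = ip_network("192.168.50.0/24")   # sandboxed IoT/camera segment
TRUSTED_LAN = ip_network("192.168.10.0/24")   # phones/laptops allowed to view streams

def egress_allowed(src: str, dst: str) -> bool:
    """Default-deny policy for the camera VLAN: its traffic may only stay
    inside the segment or cross to the trusted LAN; anything else,
    including the internet, is dropped."""
    s, d = ip_address(src), ip_address(dst)
    if s in CAMERA_VLAN:
        return d in TRUSTED_LAN or d in CAMERA_VLAN
    return True  # other segments are out of scope for this sketch

# A camera phoning home to a cloud endpoint is blocked:
print(egress_allowed("192.168.50.20", "52.94.0.1"))      # False
# The same camera streaming to a phone on the trusted LAN is allowed:
print(egress_allowed("192.168.50.20", "192.168.10.5"))   # True
```

In a real deployment this policy would live in the router or firewall as VLAN plus egress rules; the sketch only shows the decision logic, and, as noted in the conversation, it all depends on whether the camera keeps working at all without cloud reachability.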

Tim:

Okay, cool. That's good to know.

Katherine McNamara:

And that's a good segue into my next story, actually, about how pressure works.

Tim:

Yeah, I love it.

Katherine McNamara:

So, I'm trying to think; today's the 24th, so about 15 days ago Discord almost nuked itself by announcing that it would launch a teen-by-default setting for its global audience. Basically, anyone going to any mature content, or assumed mature content (it was very vague), would have to identify themselves: show your face and potentially an ID to verify your age. Now, there are certain states and countries that require ID verification, and we understand those places may require it; that's part of the law there. But doing it globally, forcing everyone to identify themselves, show their face, and have that information taken, was disturbing to a lot of people. A lot of people got angry and were like, well, I guess we've got to find a Discord alternative, back to IRC. There were a lot of jokes about IRC. It really pissed a lot of people off. And it made a lot of people even angrier to learn that the company that was going to be backending the facial recognition and age verification was a company called Persona. Persona is, from my understanding, a Peter Thiel-funded startup out of San Francisco. So a lot of people were like, wait, what? And security researchers decided to look into this company, because, what the heck? One security researcher, named Celeste, whose username online is vmfunc (V-M-F-U-N-C), started digging into this and found a bunch of open APIs at Persona.
They saw what looked like a parallel instance with the US government, potentially giving access to the government, and people were just like, wait, what the heck? It immediately spurred fears that this information was going to be shared with ICE, with the government. It also didn't make sense why you would want to verify everyone, regardless of the law, when you knew people were already getting uncomfortable with online ID laws and with IDs being forced in most locations. So there was a lot of rage about that. The security researcher published her data without hacking into anything. I think they found 2,456 publicly accessible files, plus an extensive surveillance platform paralleling a government platform, which seemed to imply there might be some information sharing with the US government and potentially ICE. She took that to Twitter and published a blog with other security researchers. And the CEO of the company, Rick Song, actually went to Twitter and engaged with Celeste via email. There was a little bit of tension at first, but, kind of the end of the story, Rick is now willing to meet with the EFF, thanks to some awesome anonymous security researchers who hooked them up online, and we'll see where that goes. But since the public outrage over Persona, their potential government contacts, and how insecure their back end was, Discord has kind of backed off of their plans for global identification, and it sounds like they're also not planning on working with Persona any further. And this is where it comes in that public pressure does work, sometimes.
If it hits their bottom line and people are willing to leave a platform en masse, then at that point either the company crumbles or you do the best thing for your business. So it was a good-ish end to the story. It sounds like the Persona CEO might work with the EFF on some transparency and help, and it doesn't sound like Discord is necessarily going to move forward with this global ID thing, knock on wood. It definitely does sound like they're backing off of using Persona for it if they do move forward.
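For context on the finding described above: "publicly accessible" means an endpoint that serves content to a request carrying no credentials at all, which is exactly what responsible researchers probe for without hacking anything. A minimal sketch of that check; the URLs and the status source are made up, and a real probe would send unauthenticated HTTP requests instead of reading a dictionary:

```python
from typing import Callable, Iterable, List

def publicly_accessible(urls: Iterable[str], fetch: Callable[[str], int]) -> List[str]:
    """Return the URLs that answer HTTP 200 to a request sent with no
    credentials. `fetch` maps a URL to a status code; injecting it keeps
    the sketch runnable without touching any real service."""
    return [u for u in urls if fetch(u) == 200]

# Hypothetical endpoints standing in for an unauthenticated probe:
fake_statuses = {
    "https://files.example.test/report-0001.pdf": 200,  # served to anyone
    "https://files.example.test/admin/": 401,           # auth required
}
exposed = publicly_accessible(fake_statuses, fake_statuses.get)
print(exposed)  # ['https://files.example.test/report-0001.pdf']
```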

Tim:

Yeah. I mean, it's always the case; it's enshittification, but now with the extra added flavor of the surveillance state that's being built right in front of our faces, really.

Katherine McNamara:

It's amazing how quickly this has started to snowball into more and more private companies sharing consumer data with governments, or doing things that seem a little too invasive and changing their policies unilaterally with no real rhyme or reason.

Tim:

Well, I mean, there's always a reason. I'm sure there was a boatload of money in the pot somewhere that's not being disclosed, right? Usually that's where it starts. But go ahead.

Katherine McNamara:

I was going to say, the other thing is the way Discord was originally going to roll this out: they were really vague on what they defined as adult. Is it adult if it's rated-X, pornographic material? Or is a hacking or tech Discord that might talk about security vulnerabilities also adult, as in not for children? No one really knew what was going to be considered adult and require identification. It was written so vaguely that they could unilaterally enforce it at any time.

Tim:

I'm sure that was on purpose as well, keeping it vague so they could do that. So, we are coming up on time. Was there anything else you wanted to talk about? I think so, right?

Katherine McNamara:

Okay, so this one's more of a fun story. Probably the hardest zero-day disclosure I've ever seen.

Tim:

That's great.

Katherine McNamara:

Am I allowed to cuss on this? Like, a little, like saying "ass"?

Tim:

Fuck yeah, fuck yeah, you can.

Katherine McNamara:

Okay, cool. So we've heard about people irresponsibly disclosing before, and I've heard about people disclosing zero-day vulnerabilities via meme. I don't think I've ever heard of anyone disclosing by throwing ass in a song. But that happened this last Friday. A Twitter group of anonymous researchers named uwu underground, if you go search for uwu_underground, published a song they wrote, with an AI-generated video that was actually really good, where they basically annihilated Malwarebytes. I lost count of how many zero-day vulnerabilities they found in its software. They talked about hard-coded credentials, and they showed them. They talked about memory leaks, misconfigured permissions, driver issues. It just went on, and it was a really catchy song. It got so much traction that the CEO of Malwarebytes actually reached out to them, and they prepared a formal report. It is really funny to think that probably the hardest zero-day disclosure I've ever seen was a catchy, amazing song, and their employees probably had to watch this video of anime waifus throwing ass and slow it down to catch it all. It was really hilarious. But they wrote an actual formal report: they put enough in the song to show there was real stuff, but the report was very specific, so Malwarebytes could improve their product. And I will give props to Malwarebytes for actually taking the feedback, asking for the report, and working with them to get this corrected.
Because at the end of the day, funniness aside, I think everybody responsible wants to see these products get better. So the fact that they were able to make a catchy song, make the product better, and do the responsible thing was pretty awesome. If anyone wants to actually go look up the song, I think it's pinned on their Twitter right now. The song title is "Malwarebytes front machine."
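One of the vulnerability classes called out in the song, hard-coded credentials, is typically found by scanning a binary for credential-shaped printable strings, roughly `strings` piped into grep. A toy sketch; the regex patterns and the sample bytes are invented for illustration and are not taken from the actual report:

```python
import re

# Illustrative patterns only; real scanners use far larger rule sets.
CRED_PATTERNS = [
    re.compile(rb"(?:password|passwd|pwd)\s*[:=]\s*[!-~]+", re.I),
    re.compile(rb"api[_-]?key\s*[:=]\s*[A-Za-z0-9]{16,}", re.I),
]

def find_hardcoded_creds(blob: bytes) -> list:
    """Flag credential-looking byte runs embedded in a binary blob."""
    hits = []
    for pat in CRED_PATTERNS:
        hits += [m.group(0).decode(errors="replace") for m in pat.finditer(blob)]
    return hits

# Fake binary contents with two planted secrets:
sample = b"\x00\x01password=hunter2\x00api_key=ABCD1234EFGH5678\x00"
print(find_hardcoded_creds(sample))
# ['password=hunter2', 'api_key=ABCD1234EFGH5678']
```

As discussed above, the danger is that anyone who extracts such a credential from the shipped product can reuse it against every install.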

Tim:

Yeah. No, I think it's great. And what's good about this is, everybody should probably know about Malwarebytes, but just in case you never did: it was this kind of malware-removal tool, a free one from years ago. It's been around forever as an AV tool.

Katherine McNamara:

I mean, it's still around. Yeah, okay, it's still on Ninite.com, I think. Back when I was doing tech repair, over 15 years ago, one of the easiest ways to download a bunch of tools at once was to go to Ninite.com, and that's where it got a lot of its popularity, because it was just the first anti-malware you could grab. And it went from an antivirus to trying to be an anti-malware, and I think it's kind of trying to dig into the EDR space, but it's still more small business and consumer.

Tim:

Yeah, more like a person running it on their computer than any kind of enterprise product, or a small business, from my understanding. Okay, that's fair. But good for them, because think about who the user of this product is. Nobody using Malwarebytes has a team of security researchers at their own organization, right? The people using this product are home users and small businesses, like you said. These people are not going to know any better. So if any of these zero days were used, or a supply-chain attack, and it was compromised, now you'd have all these people essentially running malware on their machines. Or bypassing it, yeah. Right.

Katherine McNamara:

Yeah, it was basically like you could take over the software with all those hard-coded credentials. You're now the god of Malwarebytes. So it's a good thing. Whenever things like this are discovered, there are two routes I tend to see companies go: one is outrage and threatening people, like, how dare you; the other is being responsible and trying to fix it. So I will give Malwarebytes credit for taking the gracious approach. And hopefully they pay out the bug bounty and it gets donated to something good. I think last I checked, the hope was that if they do choose to pay out a bug bounty, it just gets donated to a charity or something.

Tim:

Yeah, okay. Excellent. The bug bounty, I mean, I don't know what the numbers on that are, but something like 30,000 for the amount of zero days that were in there? Wow, that's criminally low, but all right; it's not like they have lots of money. So, all right, well, I guess we'll cut it there. Thanks for joining me on this wonderful journey, Katherine. It's been a pleasure, as always. We'll stop right there, and we'll have all the links in the show notes, including the stories we didn't get to cover, so check that out. And we'll see you next time on the podcast.

Katherine McNamara:

Yep. See ya!