Cables2Clouds
Join Chris and Tim as they delve into the Cloud Networking world! The goal of this podcast is to help Network Engineers with their Cloud journey. Follow us on Twitter @Cables2Clouds | Co-Hosts Twitter Handles: Chris - @bgp_mane | Tim - @juangolbez
Can You Fly With Glass Wings? - Monthly News Update (with a Surprise)
“Too dangerous to release” is a bold claim in cybersecurity, so we treat it like any other security headline: we interrogate it. We kick off our monthly news round-up by welcoming Katherine McNamara as a permanent co-host, then dig into Anthropic’s Mythos preview model and Project Glasswing, positioned as an AI security and threat intelligence leap that can allegedly find zero-day vulnerabilities at a level the public shouldn’t have yet. We ask the uncomfortable questions: where’s the independent evidence, what does high-fidelity vulnerability discovery actually look like, and how do we avoid drowning in AI-generated noise?
From there, the discussion gets messier in the way real security always is. We talk about tokens, paid code security reviews, and how incentives change when AI companies chase growth, IPO pressure, and government contracts. We also unpack why “ethical” restrictions are hard to enforce in practice and how rumors of source code leaks and fast rewrites complicate any promise of controlled access. If powerful agencies can use AI to speed up exploit discovery, even lower-severity bugs can become dangerous when chained into real attacks.
Then we pivot to a concrete lesson every org can use: the Vercel breach. A supply chain compromise plus a single OAuth “Allow All” moment shows how identity and SaaS permissions failures can open the door to data exfiltration. We break down least privilege, blocking risky OAuth grants, shadow SaaS, and why a CASB can be the difference between a contained incident and a headline.
We close by connecting AI layoffs to social and economic pressure, including CEO security fears, surprising UBI rhetoric, and Oracle laying off 30,000 people by email. If you care about AI, cloud security, appsec, and what these incentives are doing to the world, this one’s for you. Subscribe, share the episode with a friend, and leave a review with your take: is the AI security boom helping defenders more than attackers?
Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Welcome And New Co-Host
Chris: Hello everyone, welcome back to another episode of the Cables2Clouds podcast. Today we'll be diving into another monthly installment of our news podcast, where we go through the news articles that have come out in the last four weeks, dive into them, and give you all the juicy details, some hot takes, et cetera. But before we get into that, we do have a bit of a surprise. Maybe you remember back, golly, three years ago now when we started this thing, we were running a three-person operation. We had another co-host on here, Alex Perkins, who eventually made the decision to step away, and Tim and I have been running it as a two-man operation for a little while now, which has been great. But we felt like we wanted to get some new blood back in here, someone with a lot of industry experience in a specific sector that Tim and I don't actually get to interact with on a daily basis. You've probably noticed our good friend Katherine McNamara has been joining us on a few of the news episodes, and we also did a full episode with her recently. For some reason we've twisted her arm hard enough that she has agreed to come on as a permanent co-host. So, yeah, welcome. Welcome, Katherine.
Katherine McNamara: Thank you. For those who don't know me, I'm Katherine McNamara. I have a couple of blogs, mostly because I'm transitioning to a new one. I helped write the most recent ISE book, I have a couple of CCIEs, if that's important, and almost 20 years of experience in tech. I'm super nerdy and love all things security.
Chris: I can confirm you are a nerd. You're a big fucking nerd, that's for sure.
Katherine McNamara: I am a big nerd, that is true.
Chris: Yeah, yeah. You'll fit right in, so it's fine.
Katherine McNamara: Right before we started recording, I was showing them my Hak5 Pineapple. I was like, this is what we use for hacking into the Wi-Fi.
Anthropic Mythos And Glasswing
Marketing Claims Versus Proof
Chris: Yeah, so we're looking for that level of detail from Katherine moving forward. I'm sure she'll have plenty to add, and she'll talk quite a bit about the hacking experience that Tim and I probably don't touch very much. I've never seen Tim hack much, aside from the Gibson. But yeah, welcome, Katherine. If you want to reach out to us, the contacts and everything will remain exactly the same. She's very prominent on social media, so make sure to follow her on Twitter, or X, whatever the hell we're calling it. I'll deadname it; it's still Twitter to me, damn it. So with that out of the way, let's go ahead and hop into the news. Unless you've been living under probably the biggest rock you could possibly find, you'd have an idea that Anthropic has had a pretty busy month over the last four weeks, with some of these things coming in at the 11th hour. By the time you hear this podcast (we're recording it on Tuesday, April 21st for me, Monday for you guys), some more shit might hit the fan, so bear with us, please. Probably the biggest announcement we've seen is the detail from Anthropic about their new Mythos preview model, which they've released specifically for cybersecurity. With it, they've paired a project they're calling Project Glasswing. The 50,000-foot view is that they developed this Mythos model specifically for assisting with things like cybersecurity, threat intelligence, et cetera. We'll get into this a bit.
I don't know if this is pure marketing or if there's a real danger element to it, but they've essentially come out and said this thing is too good. It finds zero-days too well, so much so that they cannot release it to the general public yet. So what they've done is launch what they're calling Project Glasswing, a very elite group of large organizations that have been given preview access to the model, with the direction: hey, use this to help patch all your zero-days before the model goes out to the general public, hopefully strengthening their perimeter, software, et cetera, before the quote-unquote bad guys have access to this level of capability. There are plenty of details in the release they put out about the performance of Mythos; there's talk in there about it finding 27-year-old exploits in FreeBSD patches and things like that. But before we go too much further, I'll throw it over to my co-hosts for some feedback. Do you guys think this is really as powerful as claimed, or is this just the best marketing scheme ever?
Katherine McNamara: I suspect it's a bit of marketing, to be honest. I'm sure it finds some stuff, but some of the things they have revealed weren't really that high-priority as CVEs; the severity scores weren't that high. The other thing I was going to say is that we've had people write their own models, or use existing models, to try to find exploits and vulnerabilities in software before. And it's actually turned into a real problem, where AI is submitting a mass number of PRs against GitHub repos, and the majority of them end up being kind of trash. So until I actually see hard evidence that it's finding high-severity CVEs and is incredibly accurate, it's hard to buy. The gap between where we are now publicly, with both Anthropic and other models, and what they're claiming is so vast; you'd suddenly have to be finding high-fidelity, high-severity CVEs en masse. There's just such a gap between the reality of every model we have and what they're claiming right now that it's hard to believe we made that big a jump in essentially one release.
Tim: Yeah.
Katherine McNamara: But I'm open to changing my mind once I see some evidence of it. It's just that, right now, what they've revealed has been kind of lackluster. I would love to see something that helps us close those hidden CVEs out there. I just haven't seen the evidence yet.
Tim: I think the real reason they're not releasing it to the public yet is that with this one you'd probably get a third of the way through writing a prompt and be out of tokens.
Katherine McNamara: You know, I think it was a month or two ago that they announced... obviously they have their app that's writing code and apps for people, but they also released a new product, which burns more tokens, to do a security review on the code the model actually wrote. Which begs the question: why can't you write secure code initially? Why do I have to buy another product and spend more tokens, more money? So, unless it's a gimmick and they're doing it on purpose: if they can't write secure Claude code in the first place, and you have to buy another product, and the reviews I've heard of it are okay but not great, but somehow they have this other product that finds vulnerabilities in code it didn't even generate and it's incredibly accurate? I'm skeptical until I see evidence. Again, I'm more than happy to admit it's as great as they say it is once I see it at work.
Chris: Yeah. I mean, it's in the nature of security to be speculative and to critique things until they prove they have merit, so I don't think that's outside the nature of the business we're in, right? Tim, I know you're making a joke about the tokens, but we're stampeding towards this economy-of-tokens model where there's probably going to be some allocation coming up soon, where every employee in a company gets a certain number of tokens per day, per month, whatever, and that's going to be a new employee benefit at some point.
Katherine McNamara: Tokens, software as a service, subscription everything. That's where the stock market is going. And unfortunately, I feel like we're eventually going to have blowback from that, because we're going to get to the point where people just want to host everything on their own, because it becomes too costly. But I was also going to say, the other thing that keeps me skeptical is not just that I'm in security, but that there have been so many big promises made by all these big AI companies that they've fallen short of. I need to see evidence before I'm like, oh my god, this is the holy grail, it's going to kill the bug-hunting industry and make software so much better. I would hope they'd have beta testers and other well-known, independent people in the industry who can verify it, and I haven't seen that yet.
Ethics, IPO Pressure, Government Deals
Chris: Yeah. As I mentioned at the start, Anthropic has had a very busy month, so here's one more thing that came out of this: we've now seen details that the CEO of Anthropic has been invited back into the West Wing, the White House, to discuss potential deals with the U.S. government, Department of Defense, things like that. Whereas previously, old Donnie 45, Trump, was saying they would never be doing business again with Anthropic. Not to dwell on the fragility of promises made by the president, but that's not really a surprise. What's interesting is if Mythos is the thing that opens the door for it. We were talking about this before we hit record: they've kind of readjusted their stance on their ethical place in the market and on where their AI can be used. It doesn't necessarily pose a direct threat, but it's like, dude, is this really where we're heading? Are you folding on the strong morals you said you had, just because of the opportunity for more money? I don't know, are you guys feeling that as well?
Katherine McNamara: Yeah. I mean, every company that makes it big eventually... look, remember when Google's primary mission was to do no evil, and obviously things changed. My understanding of the rumors about Anthropic is that they want to go IPO this year, and this is a good way to eventually go public: hype up products as so good at what they do that they can't be released to the public yet. It's good press for them. It's good press to look moral, but also go in the back door, potentially land those big government deals as well, and claim it's on their own terms. But we're never going to know a hundred percent of their top-secret contract with the government.
Tim: Yeah, I can't wait until AI companies start going public and then have a fiduciary duty to maximize value for shareholders over every other possible option.
Katherine McNamara: I mean, they still have that duty pre-IPO, but the thing with an IPO is that now they'll have SEC scrutiny; they can't lie or puff things up to shareholders or to the SEC. They have to actually be pretty transparent about how much they're earning if they're public. Prior to that, they can't completely lie about stuff either, but what really is the standard for "Mythos is so good we can't release it to the public"? That's a very subjective statement. It could be they found a couple of CVSS 5.0 CVEs that wouldn't have been found otherwise, because no one found them before. You could still claim that's too dangerous, because somebody could use them in an exploit chain. But it's a very subjective statement. Once you're a publicly traded company, you have to be a bit more specific about earnings, profit, other things like that. But as long as you're running a private company, you still have a fiduciary responsibility to your investors, whether you're publicly traded or not.
Chris: Yeah. I guess.
Katherine McNamara: You just have more oversight as a public company, that's it.
Source Code Leak And NSA Rumors
Chris: Yeah. I've also been thinking about their stance when they arced up and basically stood against the US government, saying, look, if you want to use our products, you can't use them for building war machines, killing people, surveillance, things like that. I've heard a lot of people come in and ask, well, what does that mean from a free-market perspective: how much should a company be able to control how a consumer uses their product? I know that's probably a much bigger can of worms than what we're talking about today, but I don't know how much it differs from a standard terms-of-service or right-to-use arrangement, right?
Katherine McNamara: Well, the other question is, with their source code accidentally leaked, how much can you really keep the government from duplicating it? That's another story I guess we didn't cover: their source code was accidentally leaked earlier last month. They used a lot of DMCA requests to get it taken down, but basically somebody used AI to rewrite it in Rust, Python, other languages. And since that became a new creative work, it could no longer be taken down via DMCA. Now, obviously the source code is not the same as their data sets, but with the source code and the right data sets, you could possibly rebuild a version of Claude. So that also could lead to some sort of compromise with the US government, I would think.
Chris: Also, apparently the code wasn't that great. It looked like it was written by AI.
Katherine McNamara: I'm sure a lot of its updates are by AI after the initial version. A lot of these bigger companies like to eat their own dog food, so to speak.
Chris: Also, probably worth mentioning: we just saw an article come in saying that apparently the NSA is already using Anthropic, specifically the Mythos preview model, as well. The waters are getting ever more murky with this. I imagine next month we'll have even more details on it, maybe not-so-great ones, but we'll see.
Katherine McNamara: That's not surprising. If you remember back in the Snowden days, there were a lot of leaks about the NSA having pretty much an arsenal of zero-days, and I'm sure that's continued. The National Security Agency is technically supposed to be focused on national security internally, but it's also aimed at foreign security and aligns with the CIA as well. So if Mythos were even half as good as Anthropic says it is, they'll just use it to create a catalog of new zero-days that they can chain. Even if a vulnerability is only an undetected CVSS 5... most of the really good hacks are not just "you execute one thing and you're in." A lot of the really complex hacks are a chain of exploits. So if you get a chain of them that you can use together, ones that might individually be CVSS 2, 3, 5, you could potentially compromise something. Like, if you've ever heard about the NSO Group using a zero-click iOS hack to launch Pegasus onto journalists' phones: they basically used an exploit chain between WhatsApp and iOS to launch malware onto a device without somebody even having to pick up their phone, open it, or click on anything. Those chained exploits can be really dangerous when combined. So if Mythos is even halfway good, even if it's just churning out CVSS-5.0-or-under CVEs, those can still be used by the NSA or other security agencies in the right combination.
Chris: Especially if they come en masse as well.
Katherine McNamara: And if it saves them a little time and money versus finding them themselves, they're happy to do so. And, like you were talking about before, you might have a rule that says you can't use this to kill people, but you could also say, I'm using this to fight or detect terrorism, to save lives. That's just another way of rebranding and pitching the same thing. How is Anthropic going to know whether the NSA used it to bomb somebody or to stop a terrorist attack? You wouldn't really know.
Chris: Right, right.
Vercel OAuth Breach And CASB Lessons
Katherine McNamara: So I guess I'll pivot into my news stories. Just this last weekend, Vercel ended up admitting they had a supply chain attack, a hack that revealed a limited set of customer information and other things. We don't exactly know what data was released. If you don't know who Vercel is, it's a cloud provider that does hosting, dev environments, code hosting, things like that. It's still a private startup, not publicly traded yet, but it's valued at nine billion dollars. It recently got a lot of good hype because it was hosting a Claude Skills, or OpenClaw, catalog (sorry, "Claude Mod"; I was remembering the first name I saw it under), and it got a lot of good press from that. And it's been an interesting few days, because the way they were hacked stresses the importance of having something like a CASB and least-privilege access. Essentially, what was really hacked was a third-party app or service that Vercel wasn't even a customer of, but a random user decided to log into that app using OAuth authentication. For those of you who... you probably do it without thinking, but if you've ever opened an app, or signed into AliExpress or something else, and it popped up asking for permissions (permission to see your email, permission to see your name), essentially that's OAuth. You're using another login, like your Google or Gmail login, to get access to another service or app.
A lot of people just click Allow to get in and don't think about the permissions they're granting. In this case, a Vercel employee logged into an unapproved app and was able to grant certain domain permissions. Again, this stresses the need for something like a CASB, or at least limiting permissions. It's actually pretty easy, inside a Google Workspace, which is what the permissions were granted against, to block non-admins from granting access. But apparently whoever was running it didn't do that. So when this third-party app or service was hacked, the attackers got a backdoor into Vercel and exfiltrated a ton of stuff. Right now, on dark web breach forums, it's being sold: customer API keys, customer data, other things, for $2 million. I don't know if anyone's bought that data, but one thing I was reading is that apparently the IDF, Israel's military, uses Vercel for something. Usually $2 million would be a lot of money for a random person on the dark web to come up with, but given today's geopolitical climate in the Middle East, somebody might actually pay for it, especially if it has any IDF information: API keys, usernames and passwords, or certain data. So I jumped online; let me pull up the actual write-up I read, because it was really interesting. If you don't know, a CASB (cloud access security broker) basically monitors your cloud access to SaaS applications and the like, because things can get very complex if you have multiple platforms, like Google Workspace, Office 365, other things.
It can be pretty complex if you're trying to manually adjust or approve applications. So what a CASB does is look for those shadow applications, sharing, and other things like that, and it allows you to block them. But apparently Vercel didn't have something locked down, or a CASB. So it was Context, a company called Context, that was hacked. We learned that the unauthorized actor appears to have used a compromised OAuth token to access Vercel's Google Workspace. Vercel is not a Context customer, but it appears at least one Vercel employee signed up for the AI office suite using their Vercel enterprise account and granted allow-all permissions. Basically, with that allow-all, you're just trying to get into the app; they didn't review it, because not every developer or employee is very security-centric. That's human nature, I think; outside of security, I see a lot of people making that mistake. Vercel's internal auth configurations appear to have allowed this action to grant broad permissions inside Vercel's enterprise Google Workspace. And from there, the attackers were able to get access to other systems and exfiltrate data. Again, Vercel's not a tiny company, and not a big company either: $9 billion, somewhere between 550 and 800 employees. I understand that, being in kind of startup mode, still within your first ten years and not publicly traded, you may not want to feel like you're restricting things. But these situations are why it's important to restrict access to least privilege.
And if you need test applications or whatever else, set up a dev environment that's separate from your main Google Workspace to have those apps tested, or run an allow-list-only policy with approvals. Because at the end of the day, one single employee with too-broad permissions clicking allow-all could give everything away. If this were a non-SaaS application, say you were going to create a VPN into your enterprise private network, an extranet, you would have to get approval first and make sure whatever company is connecting is appropriately audited and secure. But when it comes to these SaaS applications where our data or information may live, there's sometimes not as much scrutiny with these newer companies, and people are not as security-minded, because we're so used to clicking allow-all and just getting access to the app we needed. So it's a good lesson in CASB, and in limiting cloud permissions if you're not using a CASB. Any comments on this so far?
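The "don't let one allow-all click grant broad access" policy discussed here can be sketched in a few lines. This is a minimal, illustrative model of the kind of rule a CASB or Workspace admin policy enforces automatically; the app names and scope strings are made-up stand-ins (real Google scopes look like `https://www.googleapis.com/auth/gmail.readonly`), not any vendor's actual API.

```python
# Minimal sketch of a least-privilege OAuth grant review, as discussed above.
# Scope names and app names are illustrative, not real API identifiers.

# Scopes considered safe for any employee to self-grant (basic identity only).
ALLOWED_SCOPES = {"openid", "email", "profile"}

# Broad, high-risk scopes that should never be self-granted by an employee.
HIGH_RISK_SCOPES = {
    "mail.read_all",
    "drive.read_all",
    "directory.read_all",
    "admin.full_control",
}

def review_oauth_grant(app_name, requested_scopes, approved_apps):
    """Return ("allow" | "deny" | "needs_review", reasons)."""
    reasons = []
    if app_name not in approved_apps:
        reasons.append(f"{app_name} is not on the approved-app allowlist")
    risky = set(requested_scopes) & HIGH_RISK_SCOPES
    if risky:
        reasons.append(f"requests high-risk scopes: {sorted(risky)}")
    unknown = set(requested_scopes) - ALLOWED_SCOPES - HIGH_RISK_SCOPES
    if unknown:
        reasons.append(f"requests unrecognized scopes: {sorted(unknown)}")
    if not reasons:
        return "allow", reasons
    # Anything over-scoped is denied outright; anything merely off-list is
    # routed to an admin instead of letting the employee click "Allow All".
    return ("deny" if risky else "needs_review"), reasons

# An unapproved "AI office suite" asking for everything, as in the breach:
decision, why = review_oauth_grant(
    "context-ai-suite",
    ["openid", "email", "mail.read_all", "drive.read_all"],
    approved_apps={"corporate-sso", "expense-tool"},
)
print(decision, why)
```

The point of the sketch is the asymmetry: mail- and drive-wide scopes are denied by policy no matter who clicks, while unknown apps with narrow scopes go to review, which is roughly what "block non-admins from granting access" in Google Workspace gets you.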
Chris: No, I think you've pretty much covered it, but it's just funny that even companies that are this much in their infancy, born ultimately in the cloud, still might not have these kinds of normal cloud security pieces in place, right? Just to restrict things like simple SaaS access. It reinforces the point: anyone can make a mistake, and everyone's going to make a mistake, right? So it's worth reviewing, I guess.
Katherine McNamara: Yeah, it's interesting. If you look at their website, they boast the ability to provide multi-tenancy for their customers, security for their customers, things like that. I don't know their inner workings, but it seems like they focused all of their security on the customer-facing stuff and didn't really secure the back door, the actual employee access, at least in this situation. It's definitely something they can fix and prevent in the future, but we have to remember that these companies that are less than ten years old, but getting runaway popularity and taking off, sometimes still have new-company problems, still have no-brainer security problems, because they're so focused on multi-tenancy and front-end security for the customers and not thinking about how many privileges they're allowing their employees.
Chris: Yeah.
Katherine McNamara: They might have a mature front end, but the actual access they grant their employees might not be as mature.
Chris: Yeah. Employees still do bad things, and more often than not, it's by accident.
Katherine McNamaraSo the other thing I was gonna bring up, since we're kind of on an AI um an AI uh uh spiel, uh I don't know if you guys heard about this, but uh Sam Altman, who lives in the Bay Area, has had a series of of uh very uh explosive slash spicy things happening. So uh a lot of people have apparently attacked his house and are very angry at uh at him for all of the uh job. Like obviously he lives in the Bay Area where there's a lot, you know, historically a lot of developers, a lot of tech people, and and with all the layoffs and craziness happening, there's also a lot of angry people because even though not everyone uses open AI, open AI kind of spun started the trend of a uh open AI uh open, you know, LLM models and stuff like that, and kind of start kicked off what we've seen in terms of layoffs over the last uh five years. So uh in the last month and a half, there's been uh from what I was seeing in the news stories, two drive-by shootings at his house, uh people just firing at his house, and somebody threw a Moltav cocktail at his house. Um obviously, uh, you know, between that and people setting fires to warehouses very angry that they're no longer getting a living wage, um, you know, a lot of CEOs uh, you know, are starting to get higher security details, uh, getting a little bit more paranoid. Uh and I can understand that. I I I mean, obviously I don't want to see harm or violence befall anyone, but I, you know, you lay off a good part of the economy and high earners, you get a lot of angry people. And really inter uh interesting is that in this last week, uh Elon Musk has come out in a very surprising position, which I'm sure is uh, you know, I I have a feeling is somewhat related. He uh, you know, as as people typically know, Elon Musk is very much against government spending, very much against uh uh taxing and all that stuff. 
But out of the blue, he tweeted that... yeah, let me pull up the tweet, because it was, again, very interesting. Suddenly he is in favor of UBI; not only universal basic income, he's in favor of what he calls universal high income, to cover people for the AI job losses in the market. It's very interesting, because when AI first started taking hold and people were getting a little nervous, people like the CEO of Anthropic were saying things like... oh, yeah, we lost Chris. Yeah, so when we first started this whole AI explosion, a lot of the CEOs were saying, oh, we're not going to lose jobs, we're just going to replace them with AI jobs; people are going to become AI experts. And now I think we're getting to a point where the CEOs are starting to acknowledge that some of these jobs are not coming back, or might not for a while. So Elon's surprising tweet after all this crap happened to Sam Altman was: "Universal high income via checks issued by the federal government is the best way to deal with unemployment caused by AI. AI slash robotics will produce goods and services far, far in excess of the increase in money supply, so there will not be any inflation." So suddenly Elon Musk went from "government shouldn't spend any excess money, we need to cut, cut, cut" to "I want universal high income for people who've lost their jobs due to AI." And that's a turnaround from the philosophy he's been preaching for a while now, and also a not-so-hidden acknowledgement that some of these jobs are never coming back. I still think there will be an AI bubble pop, especially since there are certain things it's just not good at.
Or not seemingly getting great at. But as far as the CEOs go, they're starting to acknowledge that there is going to be some shrinkage just to save money, and that they don't want violence to happen to them over it. So they're trying to say: look, I'm in support of helping you. Don't shoot me, don't throw a Molotov cocktail at my house. Which is, again, interesting.
TimThere was a story I was listening to earlier this week. Apparently a bunch of billionaires were on a Zoom call or something, and it was recorded, and one of the billionaires said, oh, well, things are getting to the point now where we might actually have to start taxing the wealthy and paying more into the system to avoid civil unrest. It's like, well, congratulations, I guess you're starting to figure it out.
Katherine McNamaraYeah, at the end of the day, we have a market that is consumer-based. We sell a lot of stuff that's in excess: trying to entice people to buy the next new iPhone, the new laptop, new TVs. We don't just sell people what they need and that's it. Our entire economy pretty much relies on excess, but you can't have consumers or excess if a good chunk of high earners are losing their jobs.
TimYeah, consuming.
ChrisYeah, I've heard some commentary that latches onto what you're saying about what this does to the regular consumer market, right? If there's a significantly smaller number of higher-income people, then, frankly, to put it lightly, who buys Louis Vuitton and shit like that? It's all high-earning white-collar workers, and if there are fewer of them in the world... well, Louis Vuitton is a bad example, but there are going to be other things that just slowly start to eat shit because there's no consumer base that can afford their products anymore.
Katherine McNamaraYeah, you can turn everything into a subscription model, you can come up with the best apps or laptops, but if there isn't a large number of people with money who can consistently pay for it, then it's only companies that can pay for it, and the companies rely on consumers being able to buy their products. At the end of the day, at some point, it breaks under the strain. And I mentioned this in the episode I did on AI: there are two ways this AI bubble pops. One, the quality just doesn't increase enough to actually replace people, not consistently enough. Or two, you tank the economy, because there aren't enough consumers able to pay for the product being output by AI, and we break it that way. But one of those two ways, there is going to be a bubble pop, because as far as the economy is concerned, not just AI, you need to have consumers who can pay for stuff. If you have AI doing all the coding and managing your infrastructure and stuff like that, you only need a certain number of people to manage it. And if you're not replacing those jobs with something equally high paying, then it doesn't matter how seamless AI is or how many apps it's pumping out. If there's no one to pay for it... you have to have customers.
TimYeah. There's a prisoner's dilemma with this whole thing, where every company is so busy maximizing its profits by trying to get rid of people and maximize AI, like Oracle laying off 30,000 people and all this other stuff. Companies are doing this over and over and over. And every individual company says, well, I have to do what I have to do to maximize my profits, make my shareholders happy, and make my business good. But every other company is doing it too. So there's this whole prisoner's dilemma thing where if everybody does it, then everybody loses, but everybody's going to do what's right for them individually, right?
Katherine McNamaraI'm sure that some CEOs are probably looking at it as the next CEO's problem. I'm going to get my stock buybacks and everything. Oh, 100%. I'm going to cut while it's high and let the next guy deal with it.
TimYeah, and to be fair, that's been going on for a long time; AI is just accelerating it now. But it is a big prisoner's dilemma, and I don't think any of us have the answer there. But like you said, it's going to pop one of two ways. My money is honestly on the first one. I think there's a ceiling that's going to show up, but if not that one, then definitely the second one. Which one comes first is really the question. Yeah, yeah.
Katherine McNamaraOne of them we're going to speed-run towards, or we might just hit both walls at the same time.
ChrisYeah, there you go. Um pound it.
Katherine McNamaraLike, if the economy falls apart and we don't have enough consumers before AI is able to actually achieve the quality to replace people... yeah, there's only so much, and we're already seeing people pulling back on data centers and stuff. But that's a good segue to your next story, Tim. Take it away.
TimYeah, yeah. All right, so we'll finish off with this exciting, wonderful, happy story to really keep the happy-news theme going. Again, this is in retrospect, so everyone has already heard this story. But Oracle laid off 30,000 more people recently, by email, at 6 a.m. So we don't even have the decency to have managers inform people anymore, or talk person to person, or even get on one of those wonderful Zoom calls where the CEO talks about how hard this is for him or her to fire all these people he's firing anyway. We're just doing it by email now. We're really, really leaning into the efficiency of this whole process. And yeah, 30,000 more people. I forget what the grand total is this year alone; it's in the hundreds of thousands at this point, and it's still going.
Katherine McNamaraThe best part about all of these layoffs is finding out afterwards that, one, AI selected the people, and two, a lot of these companies are realizing some of these people shouldn't have been laid off, because they have domain knowledge or do something very specialized, and then they have to hire them back. And I think that happened with some of the Oracle layoffs.
TimYeah, I had read something about that. I'm sure some of the people were just hired back because they shouldn't have been let go. But a lot more of the ones that got offers got something like: here's 60% of your salary, and now you're a contractor, so no benefits. Take it or leave it.
Katherine McNamaraIf I was going to be laid off in a mass layoff, I don't know what I would prefer. We've heard about the emails at six in the morning with access immediately cut off. We've heard about a company-wide call where they basically let people know that if you're on this call, you're laid off, which is, again, like, okay. Or the one-on-one HR script call, where everyone gets a 30-minute calendar appointment with some outsourced HR drone who basically reads a script. I guess the one-on-one call would probably be better, or at least less humiliating, and not just for the person, for the company as well. But it's getting to crazy levels of impersonal. And when I was reading into the Oracle one, a lot of it was to pay for their AI investments that haven't paid off yet. That was the crazier part: they're replacing people with AI, but their AI investments haven't paid off. So they had to cut their opex really quickly just to try to claw back that couple billion dollars a year they were in the water for operationally. And it's happened a lot: many of these companies are laying people off not because the jobs are necessarily redundant with AI doing them, but because they're trying to cut opex. The opex of AI is so expensive that they need to cut somewhere, and they're desperately trying to find anyone to cut to justify that investment, or to keep going so they can hopefully turn a profit out of it one day. A lot of these AI investments are turning into a hope and a dream that they become profitable. Yeah. And I think the MIT study that was done last year showed that something like 90-plus percent of companies' AI projects have not turned out to be successful or profitable yet.
ChrisSo what you're saying is, assume a large number of these big organizations are going to be layoff-maxing. So you need to be focusing on looksmaxxing your resume as much as you can. Jesus, bonesmashing or whatever the fucking other term is.
Katherine McNamaraI'm gonna waterboard you. You deserve it.
TimWhy? Just kidding. Everybody focus on your aura farming from now on. Yeah, yeah, dude.
Katherine McNamaraOkay, what was his name? Cap Clavicle? What it tells me.
ChrisYeah, he just OD'd. I hope he's doing okay. But anyways, hopefully that changes his ways, you know. I don't wish death upon him, but maybe somebody would.
Katherine McNamaraGiven how he treats people, I mean, a little bit of small things is fine.
ChrisYeah, yeah. Uh there you go. I'll take that.
Katherine McNamaraWell, a little bit of humility.
ChrisYes, yeah, maybe a bit of... yeah, I won't say it. I was going to say a bit of diarrhea, but that probably sounds a little too vulgar for the podcast.
TimBut you know, too vulgar for this podcast.
ChrisYeah, it's it's on record now. I'm gonna say it.
Katherine McNamaraMaybe wish a migraine on him, or... yeah, there you go.
ChrisThere you go. I guarantee he has one. He's been hitting his face with a hammer for a while, so he's probably got something going on with it.
TimOh, that's a good one.
ChrisYeah.
Katherine McNamaraYeah, well, yeah. He thinks that if you do micro bone fractures, your bones grow back stronger.
TimSo he gets cheekbones or something, like big cheekbones, by smacking the shit out of his face.
Katherine McNamaraHe doesn't actually, like, fully break them.
Wrap-Up And Listener Shout-Out
ChrisYou've got so much to learn, Tim. So much to really do. We'll get you there. I know it's late for you, so we'll cut you a break, but we'll give you plenty of homework to take away from this. With that, we will wrap up. Thank you again for joining us for another edition of the monthly news. Give us a shout-out on social media: say a big welcome to Katherine, say how happy you are that she's here, and how sick you've been of me and Tim. It's good to have a new voice here. So we will see you later, and we'll talk to you in a couple weeks. Bye.