Cables2Clouds

Ep 21 - Speed of Innovation: Networking vs Compute

November 29, 2023 The Art of Network Engineering Episode 21

Get ready for a tech adventure like no other in our latest episode! Join industry veteran Pete Lumbis and buckle up as we zoom through the history of compute and networking—from the days of mainframes and 6509s to the thrilling era of overlays and network virtualization.

But here's where the fun kicks in: we're on a high-speed chase through the compute industry, racing alongside breakthroughs like Kubernetes in the Cloud. It's a game-changer, and the pace of innovation is so fast, you might need a pit stop to catch your breath!

Now, let's shift gears to the networking world—it's a bit like navigating a winding road. We're exploring the challenges where the pace of innovation might be more of a leisurely drive than a speed race. But fear not, because we're revving up the conversation and finding the fun in the clash between these different speeds of innovation.

Join us for an episode that's not just tech talk; it's a thrill ride through the twists and turns of the ever-evolving world of networking and computing. It's fast, it's fun, and it's packed with insights that'll keep you at the edge of your seat!

Check out the Fortnightly Cloud Networking News

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com

Chris Miles:

Welcome to the Cables to Clouds podcast. Cloud adoption is on the rise, and many network infrastructure professionals are being asked to adopt a hybrid approach. As individuals who have already started this journey, we would like to empower those professionals with the tools and the knowledge to bridge the gap.

Tim McConnaughy:

Okay, and welcome back to Cables to Clouds. This is Tim, and today we had a great discussion with Pete Lumbis about innovation in the networking space versus some of the other technology spaces. But before we get into that, let's go ahead and do the news this week. Honestly, the last couple of weeks have been a little bit slow, minus the Days of Our Lives for OpenAI, which we'll cover here in a minute. But by the time you're hearing this, AWS re:Invent is happening right now, so I'm sure there's a lot that's already come out that you guys are already hearing. And, don't worry, our next episode will be a recap of the whole AWS re:Invent and what went on, and we'll give our takes on it. So stay tuned for that. Now let's get into the days of our AI lives here.

Tim McConnaughy:

I'm sure most of the listeners have probably seen some version of this or some amount of this by this point, but I got to say, this one came out of nowhere. So, for those who are unfamiliar, Sam Altman is the CEO of OpenAI, that's the company that developed ChatGPT, an AI company. The OpenAI board just randomly and pretty much out of nowhere dismissed him within 24 hours. They just said, hey, you're out, and that kicked off just such a shit show. Before I run through it, I think we just have to stop and check in a little bit. Chris, Alex, I know we were talking about this when this first happened. It just came out of nowhere. What was your first thought when you first heard this?

Chris Miles:

Yeah, I figured there was going to be, I figured at least by this point, so it's about a week old at this point, I figured by now we would have a little bit more information about why the board wanted to dismiss him. Still very vague, still very unclear as to why he was dismissed in the first place. I'm sure there's something on the back end, but I think the most interesting thing about this entire ordeal is how much happened in such a short span of time. Like, I feel like if we walk through the timeline, this sounds like something that could be in a documentary that took place over three months, but this was like four days, which is just crazy.

Alex Perkins:

There's going to be a documentary, I'm sure. I'm sure there will be a Netflix special in the next few years.

Chris Miles:

Netflix has already cut deals, dude.

Alex Perkins:

You know it, right, exactly. Yeah, it was crazy, man. The whole weekend there was just a constant flow of news and people chiming in with their opinions and trying to get a beat on everything.

Tim McConnaughy:

Twitter is nuts man.

Alex Perkins:

Oh yeah. Well, like Chris said, though, I'm very surprised it's a week later and we still don't know what actually happened. The board never actually gave a reason other than just some vague wording on the announcement and then some cryptic tweets here and there.

Tim McConnaughy:

I saw a lot of rumors flying. There was a lot that came out about, first of all, just why. That's still the big question mark. Why did it happen? And then the suddenness of it. And I don't know if this is the problem with the story, that there's so much out there that you don't know what's true and what's not, because so much of it is being kept close to the chest. But there's a lot of it out there saying OpenAI was made as a nonprofit AI development firm. Since ChatGPT just absolutely exploded into the stratosphere and kicked off the Gen AI, I don't know what you'd call it at this point, tech boom, whatever the newest flag to plant is, there's a lot of money. We did the story just a few weeks ago about how many billions of dollars they were estimating to be in Gen AI. So there's some rumors flying around that Sam Altman, Greg Brockman, he was the CEO. Oh, I think I have to go look at that.

Alex Perkins:

Greg was. Yeah, he was president of the board.

Tim McConnaughy:

Yeah, that's what it was, right. Yeah, so Greg left right after Sam was. He tendered his resignation as soon as the board fired Sam, and we'll get into that in a second. But, like I said, a lot of the rumors have to do with Sam and Greg and a lot of the employees being kind of for-profit focused, but the board being the other way around, since OpenAI was founded on the idea of open development and ethics, privacy. It's almost like a classic good-versus-evil type of corporate scenario, if any of this is to be believed. But all right, so that happened, what, Friday afternoon? Wasn't it Friday afternoon that the firing happened?

Chris Miles:

Yeah, do you want to just run through the timeline, because it's pretty hectic?

Tim McConnaughy:

We'll roll through it and then we'll roll back to discuss it, because it's nuts. So yeah, so he was dismissed, what, Friday afternoon? Nobody said why. The board didn't explain why. Sam Altman posted to Twitter saying, like, I'm out, basically. Greg Brockman, pretty soon after, said the same. I think the very next day, or I don't even know if it waited for the next day, I think it did, they were already in talks to bring him back. Like, the board was already in talks to bring him back, if memory serves here, which is nuts, right. I don't even know what to say about that. Like, what do you say?

Alex Perkins:

The board thing, though, that happened because of all the employees, because they wrote that letter. They only have 770 employees, and over 700 of them signed the letter threatening to resign.

Chris Miles:

I thought it was over 500. I thought I saw numbers that it was over 500.

Tim McConnaughy:

Maybe it was like 690 or something like that, somewhere around there. It was a large amount, it was more than 90% of the total organization that had signed this letter. That one already has me wondering. It makes you wonder about the whole good-versus-evil thing there. If every employee supports this guy, what does that say about the board? Anyway, so there's that. I was reading this article about it. Anyway, let's finish the timeline here, because it's still pretty nuts. So yeah, that was Saturday. Still on Saturday, or was it the next day, that he basically got the offer from, was it Microsoft, among others?

Alex Perkins:

I think that was announced Monday. That was announced Monday morning or something like early morning.

Tim McConnaughy:

Yeah, I'm trying to actually refer to the timeline now, because it's so chaotic that it's really hard to remember every single piece of data, the fact that it happened in this serial fashion and so quickly.

Chris Miles:

Monday was the day that he went back to OpenAI with the guest badge and posted to Twitter, the first and last time I'm ever going to wear one of these. But then they didn't close on the deal to bring him back, so then they brought in, as the interim

Chris Miles:

CEO, the previous CEO of Twitch, Emmett Shear. That's right. Then on Monday, Microsoft was announcing that Sam was joining as head of AI research or something like that, and Greg Brockman as well.

Tim McConnaughy:

Yep, and Brockman too. All the employees were saying, basically, that they were going to drop OpenAI and move over to Microsoft as well. Yeah, it's absolutely nuts. I think within the weekend and Monday, OpenAI had three different CEOs.

Alex Perkins:

Yeah, because they promoted Mira Murati. She was the interim CEO, but then she was the first signature on that letter to have the board resign and bring Sam and Greg back. And then Emmett came in, and then Emmett got replaced with Sam again. Anyway, yeah.

Tim McConnaughy:

So all of that, basically. He basically said, hey, or Microsoft said, that Sam was joining Microsoft as their new head of AI development along with Greg Brockman, but apparently the ink wasn't dry on that or something. There were some takebacks, because before that even actually happened, Sam was back at OpenAI. So here's the thing: Microsoft pledged to invest like $13 billion or something. It wouldn't surprise me, honestly, if, essentially, Sam is bought and paid for by Microsoft anyway, no matter which company he works at.

Alex Perkins:

You know what I mean. They also own 49% of the company, or something.

Tim McConnaughy:

Yeah, that's what I mean. So my guess is that, hey, they're going to bring Sam over, or maybe they never were. Maybe it was always a pressure tactic to get the board to resign. This is what we were talking about, there's so much bullshit out there, there's too many rumors. Nobody knows why. A week later we have no explanations, and it wouldn't surprise me if it was all a power play by Microsoft to get the board to either fall in line or resign and then put Sam back where he was anyway, because he's probably doing more good over there for Microsoft than he would necessarily under the Microsoft umbrella. Maybe there's a lot out there that we don't know yet.

Chris Miles:

I think the point that you made, Tim, earlier is probably the most concerning thing: while we don't know why he was dismissed initially, the reports say that there was a misalignment between their nonprofit and for-profit offerings, and Sam obviously leads the for-profit. And while I would think, in the public eye, Sam Altman has straddled the ethics line relatively well, as far as, you know, advocating for ethical use of AI and protection of data and privacy and things like that, the fact that the board dismissed him over misalignment with that part is a little concerning, right? Yeah, it's really hard to tell what's going on behind closed doors with this one, but I don't know if there's any real winners here. It sounds like things may be worse off if you're looking at it from that lens of ethics and privacy, et cetera.

Tim McConnaughy:

Right. Well, the board's charter was at odds with the idea of privatization and monetization at the end of the day. So could this have all been a power play to essentially change the charter of the board, in kind of a hostile takeover type of way? I don't know, right. It's terrifying to think that that could be the case. What do you think, Alex?

Alex Perkins:

Yeah, like you said, all this is speculation all around, but the whole situation is so crazy, and it did end up that the entire board got replaced. None of the original members are there anymore. They brought in new people to head the new board.

Chris Miles:

Right, and do they have the same charter? They brought in Larry "Women Aren't Good at Science" Summers as well, so that's who's really in the chair at the top. Yeah, I'm sure that'll work out well.

Tim McConnaughy:

So yeah, I mean, if you wanted to change the board's charter from nonprofit to for-profit in a way that doesn't require the kind of, I don't know what the word I'm looking for is, politics, and almost like a homeowner's association type of thing, where you need a certain number of people and all sorts of stuff on paper and the legality and all of that, right, what better way to get around all that, if you were going to do something like that?

Chris Miles:

You can leverage the publicity that you have to do it.

Alex Perkins:

I think what's weird is they should have a board that oversees the safety and the ethics and the privacy and stuff. Somebody should, but those are not the same. You can't have that board also be the board that's trying to run the company, and because those were combined, to me that's why there were so many clashes, because there's two different missions there. They're completely different, right? And Sam's job as CEO, you could look at it like he was doing his job, right. So there's so many sides to the argument here.

Tim McConnaughy:

Well, Microsoft wasn't investing $13 billion to protect the safety and ethics of everybody.

Chris Miles:

Oh, come on, dude, you don't think Microsoft has your back?

Tim McConnaughy:

I'm sure they do, in a database somewhere. No, I mean, look, honestly, the quicker this freight train begins to roll, and it is like a freight train, like all technologies, right? You get the first couple wheels rolling and then, before anybody knows it, the train's left the station and who the hell is driving the train anymore, right? So it's a little scary, not because I think, hey, Terminators are going to show up and destroy us and shit, but there are a lot of real-world concerns that I think are valid, and a for-profit method, if you will, of AI development is a little concerning for me personally. But anyway, I don't know if you guys have any other closing thoughts on that. Otherwise, let's talk about the episode for a little bit.

Alex Perkins:

One last quick thing, because people are going to mention the argument that they're a capped for-profit company. But I saw a breakdown of this, I think, earlier today. While they're capped for-profit, the cap is so high that it's like more money than any company has ever even made. So the cap doesn't even matter. You can argue it does, but it just doesn't really make a difference to say that.

Tim McConnaughy:

Yeah, the truth is, we're not going to know until the Netflix documentary comes out.

Chris Miles:

I mean to be honest. Probably by the time this episode is out, who knows what's happened? Big Bird could be CEO at OpenAI.

Tim McConnaughy:

This could be old news by the time anyone's hearing it. That's a very good point. Okay, so let's talk about the episode with Pete. Pete Lumbis is a really, really great guy to talk to. I think, personally, this is probably going to be one of our most entertaining, interesting, hopefully thought-provoking episodes that we've done. What do you guys think? Alex, what do you think?

Alex Perkins:

Yeah, I mean, I've been thinking about this a lot. I've already re-listened to it a couple times. Pete made a bunch of really good points. You know, one of the main things, and it's a good chunk of the episode, is we talk about DPUs solving networking the way that it probably should be solved, as in bringing the edge of the network not to the top of rack but actually to the compute host. I've just been thinking about this a lot lately. There's so much here to unpack, and to kind of compare how and why CSPs can do what they do and why enterprises aren't quite there yet. So I think, like we mentioned in the episode, we definitely will have a part two and might even need a part three, depending on how much more Pete has to say. But it was a really great episode, yeah.

Chris Miles:

I think it was really fun to have someone as well seasoned and, you know, with accolades such as Pete's come in and talk about this. You know, someone that has a pure networking background who's kind of gone into the compute side of things, you know, working for NVIDIA and now at Upbound, right. So he had some really good points and some fairly hot takes, which is, I think, pretty normal for Pete. So, yeah, it was definitely entertaining. So, yeah, let's get into it.

Alex Perkins:

Hello and welcome back to the Cables to Clouds podcast. My name is Alex Perkins and I am at Bumps in the Wire on socials. I will be your host for tonight's episode. As always, I'm joined by my two lovely co-hosts, Chris Miles, at BGP Mayne, and Tim McConnaughy, at Juan Golbez. Today we're joined by a special guest that seems to have traveled to every nook and cranny of our industry, and maybe even somewhat recently into a newer one, but we'll get to that. So I'd like to welcome Pete Lumbis, at Pete CCDE. How's it going, Pete?

Pete Lumbis:

Hey, thanks for having me, y'all. Thanks, Alex, doing well, doing well. Good.

Alex Perkins:

All right, glad to hear it. Why don't we do a quick introduction and kind of a rundown of your career, Pete? Basically, you know, who are you, what have you done, what are some of the places you've worked, and where do you find yourself these days?

Pete Lumbis:

Yeah, so I've been in the industry for a couple of years. I started as an intern, actually, if we want to dial our way back, at Cisco. But before that I actually had the worst interview of my life, where I failed my interview at Cisco for an internship, sweated through my suit, and then ended up doing help desk as an internship instead of working at Cisco. So for all of you who have ever bombed an interview, there is hope. I did a couple internships at Cisco. I went and worked for a managed service provider in New York for a hot minute before going back to Cisco and TAC. Did a little work in TAC on the firewall side, realized my passion is actually, like, data networking. Jumped over to the routing team, worked there for a bunch of years, became the escalation engineer, so one of the kind of four people globally responsible for the worst, most awful, garbage, most broken, stressful things, which, I like to tell people that I used to have hair and then I started doing that. Got a CCDE along the way, kind of accidentally got a CCDE, which is a weird thing to say. I never really intended to do the CCDE, but I accidentally passed it, like I took the written as research, and then they were offering the test like half a mile from the office, and so why not take it?

Pete Lumbis:

And then I spent a huge chunk of my most recent career at Cumulus Networks. I started as an SE, kind of like a post-sales consultant for named accounts, moved into technical marketing, ended up running technical marketing and documentation, really helping a lot with, like, product development and where we're going. What are we building? Who are our customers? How do we talk to them? How do we reach out to them? How do we architect networks? How do we build those networks? How do we manage those networks? All of that stuff. We got acquired by NVIDIA, and that is a much more complicated, nuanced story that you can buy me beers and I will gladly tell you. And then about a year and a half ago I left to join a company called Upbound, and Upbound does Kubernetes for the cloud. It's a much longer story, but if you're familiar with Terraform, it's a lot like that, except we do it all on Kubernetes instead of using some other third-party DSL or tool.

Alex Perkins:

All right, so just a couple of things then.

Pete Lumbis:

All over the place.

Alex Perkins:

Okay, so that is quite the journey, and I'm sure we're going to get into more of this as we go along, but we figured, with that journey, right, you'd be a really good person for this topic. So the theme of today's episode is basically we're going to kind of compare and contrast the pace of innovation between, like, the compute industry versus kind of the computer networking industry, and specifically as it relates to cloud networking. At least from the outside, I think, to most people it kind of seems like the compute industry moves very fast and the networking industry might not move as fast. So we're going to dive into that, and we'll try not to turn this into too much of a rant fest as we go.

Pete Lumbis:

I will do my best, Alex. The big challenge here is I not only have a soapbox, I have an entire bar soap factory to stand on top of for this conversation.

Alex Perkins:

Well, feel free, let loose if you need to. All right, so let's start it off with, I guess, a little bit of historical perspective. So I thought it'd be kind of cool to talk about, basically, how the four of us have seen the industries evolve in each area. So, you know, we'll start with compute. We got everything from, like, you know, starting with mainframes all the way up to, like, microservices, I guess. Starting with you, Pete: where did you kind of come into the industry? Like, what kind of phase were we in? Were we already in, like, virtual machines? Like, where did you enter into this area?

Pete Lumbis:

I think I got really lucky, because I came in when the 6509s ruled the earth, like the Tyrannosaurus rex of old, and I would say, like, half of customers were virtualized and the other half weren't. The idea of, like, a routed access layer was, like, super revolutionary and sounded great on paper, but nobody was going to do it. There was no such thing as an overlay. There was no network virtualization. Nicira, which became NSX, hadn't been invented yet. There were still a lot of, like, software-driven pieces of garbage network devices, like your old classic ISR 2600s or 2800s, I mean, workhorses, but not fancy.

Pete Lumbis:

And I was at Cisco when they came out with their, like, CRS line, like when the CRS-1 first came out, and that was the big mama jama. And so I really feel lucky to have watched a lot of that evolution, both from a networking technology and hardware side, but also from a customer implementation side and things like that. I mean, I can tell you, when they came out with the Nexus 3000, which was the first non-Cisco chip in a Cisco switch, number one, that was huge. Like, I can't even begin to tell you the hand-wringing that I had witnessed in TAC, and TAC isn't even a very important place to witness product development, right? So for me to see it means that it was way worse on the inside. But there was this talk about, like, automation, and, like, you can run Puppet with it, and I was like, I don't know what Puppet is, that sounds made up. So I feel very lucky to have kind of watched a lot of that rapid evolution.

Alex Perkins:

Okay, yeah, that makes sense. What about Tim? Where did you? You know you might be a little older than some of us, so where did you start off?

Tim McConnaughy:

Really, really, dude? Wow. Yeah, no, I was there when we started by painting on the fucking cave walls. No, I mean, dude, okay, so my first real tech job was at an ISP, a dial-up ISP. But this was the year 2000, right, so it wasn't that long ago. I mean, we were using 56K modems by that point. And if you want to know what I actually cut my teeth on, that's a whole other discussion, right? But by the time I was working at the ISP, we were using 56K modems and, you know, we had DSL and we did hosting. So on the compute side it was all still physical. They were blades, like the pizza box blades, right, but it was all, you build the blade and install Red Hat on it, put it in the rack, connect it, and, bam, now your customer can, you know, log into it and deploy their website or e-commerce or whatever it is. But it was still very much physical. At that time we also had, like I said, a DSLAM and we had some kind of ATM. I don't even remember what it was now, it's been so long, but I mean, I wasn't even in the NOC.

Tim McConnaughy:

At that time I was actually working phone support, and then on the weekends I was building servers and stuff. So I guess that gives you some idea. Oh, this is a good one: we got the nastygram from ARIN, the registry for internet numbers, and they were, like, going to take all the public IPs back if we didn't justify them. So I had to actually go through all of our folders for all of our customers and, like, fill out these forms for ARIN to justify our public IP address allocation. That was lots of fun. And then, you know, we had fractional T1s and frame relay and stuff like that. So if that dates me, then I guess I'm dated.

Alex Perkins:

Awesome. Well, what about you, Chris?

Chris Miles:

Sorry, just can you reframe the question? What exactly are we? What are we answering here?

Alex Perkins:

Yeah, it's really just, like, where you came into the industry. Like, what kind of phase was compute in? You know, was it still all physical? Was it VMs already? And networking, was it, like, three-tier topology, you know, like the traditional 6509s, like Pete was saying? Just kind of, where did you come into everything? Yeah, I got you.

Chris Miles:

I can say when I came in, which was honestly not too long ago, I definitely think the realm of network virtualization was almost non-existent. That was pretty much not a thing. I was working for an MPLS and unified communications provider. So I remember, like, we've talked about it on this pod before, but, like, MPLS and L3VPN just, like, blew my mind once I finally got it. But I had little to no exposure to the compute side. I will say it was definitely early VMware days, so there was a heck of a lot of virtualization already going on in those elements, you know, UCS, things like that. So yeah, that's about where I came in.

Alex Perkins:

Okay, yeah, I think me and you must have probably started right around the same time in the industry, because I think that's.

Alex Perkins:

I'm basically right around the same area. I know Nicira had already been bought by VMware and become NSX, like Pete was mentioning. I still worked with a lot of, like, three-tier networking topologies, right? So it wasn't, like, all the Clos fabrics and everything that we see these days, and these large hyperscale data center networks that were just insane to think about, like, 10 years ago. So, okay, cool, that was just kind of setting the stage for everything. All right, so let's dive into some of the factors specifically driving compute innovation. And, you know, one of the first things that you hear about a lot with compute is Moore's law. Do you want to give us a quick summary of what that means?

Chris Miles:

Sure, as the resident expert and recent Googler of Moore's law, I can say that it basically boils down to the processing speed of computer processors doubling roughly every two years. So obviously there's a rapid increase there, almost every interval, right?
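For anyone who wants the doubling spelled out, here's a minimal sketch of the arithmetic behind the statement Chris gives; the two-year doubling period and the starting transistor count are illustrative assumptions, not figures from the episode.

```python
# Illustrative only: project a count under the "doubles every two years"
# statement of Moore's law that Chris summarizes above.
def moores_law(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Return the projected count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Example: a hypothetical 2-billion-transistor chip projected 10 years out.
print(f"{moores_law(2e9, 10):.2e}")  # ~6.4e10, i.e. 32x growth in a decade
```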

Alex Perkins:

Yeah, do you guys think this seems true in reality? I mean, I don't know if it's exactly every two years, right, but it does seem like you see a lot of innovation in the chip space. And, Pete, you're probably the closest to this space and can speak on it. Do you have any thoughts there?

Pete Lumbis:

I think if you draw the linear line, there were years below and years above, but on net it's been absolutely true. It's getting harder and harder, though. The main thing is it's becoming a power and space problem, because Moore's law is really about the number of transistors, and you can think of a transistor as just a heat generator, so you're getting more and more heat, to the point where you can't dissipate the heat with air, and we're not at the point where we're really willing to adopt liquid cooling.

Pete Lumbis:

And I think this is actually, like, NVIDIA's whole play, like, beyond GPUs, saying, look, general-purpose compute has these limitations and we're about to run into them. Let's start doing specialization, like specific compute. Like, if anybody's old enough to have owned a dedicated sound card, you know, what's old is new again, like, you know, there's the RFC 1925 rule, whatever. We used to have dedicated sound cards, and then processors got fast enough that we could just do all the sound on the processor. That's a stupid idea, why have a sound card? And that was the same thing with graphics. And now we're back to, you know, I need a big graphics card, and I'm sure in the future there will be other offload engines that get added, because we can build a chip or a set of chips that do that function more efficiently than a general-purpose CPU.

Tim McConnaughy:

You guys probably don't remember it, but at the dawn of PC gaming, we had a pass-through card, not a truly dedicated graphics card, but it was called, like, a 3D accelerator. I remember getting one and using it for Doom 2, Doom 2 was the game I bought it for, and I know I'm dating myself now, but yeah, it was exactly what you said, Pete, what's old is new again, right? It was just a card you'd put in the machine, and you literally just, you know, cabled your graphics card to it and then the other end would go into your monitor, right? And it did exactly what it said, it was an accelerator, it was an offloaded processing card.

Chris Miles:

I never got to experience sound cards, but by Pete's theory, all I gotta do is watch the clock. Eventually they're going to come back, so I'll be waiting for the next sound card interval to come through.

Pete Lumbis:

I'm long on Sound Blaster.

Tim McConnaughy:

That's, that's sound blaster yeah.

Pete Lumbis:

This is not an investment podcast and I'm probably not allowed to give investment advice, but Sound Blaster is the next GameStop.

Tim McConnaughy:

Wait, pre-COVID or post-COVID GameStop? Yes, good stuff.

Alex Perkins:

It is interesting, though we're talking about all these special purpose cards. You mentioned NVIDIA. You got all these new things too, like DPUs, right? Didn't DPUs just come out pretty recently? There's all the hype around SmartNICs and all these other special purpose kind of things. It is interesting that they're being broken out like that too, into all these special purpose cards.

Pete Lumbis:

I glossed over my time at NVIDIA. I started at NVIDIA doing Ethernet switching stuff, kind of the Cumulus work continued, but actually moved into being the director of technical marketing for DPUs, for the DPU space. To take a step back for some of your listeners: when you have a SmartNIC, really what you have is not just a fancy network card that hands off packet processing, but there's a lot of work that it can do around copying data directly into memory. Without this function, what happens is that you have to notify the CPU that there is a piece of data. The CPU wakes up, takes that piece of data, copies it to memory and then notifies an application: hey, application, you have data that you need to deal with. That extra step can be completely cut out, and you can have, they call it zero copy or something like that, that NIC copy and read directly from memory and bypass the CPU altogether.

Pete Lumbis:

There's a little tiny baby CPU on the SmartNIC. What they did is they took that and they elevated it. Let's take that baby CPU and leave it there. Let's have all that same NIC functionality. Let's slap on a full-blown ARM CPU and memory and storage. Let's basically, like, shove a server inside your server. Now you can deploy this in one of two ways. You can either deploy it in an isolated mode, where it is literally a server that you can SSH into and you have full administrative control over. It's almost like a transparent firewall kind of thing. The server person and the network person don't know that there's this third party in the middle. Or you can expose that functionality to the CPU and say, here is a card with extra capabilities. This is what VMware is doing right now: you can offload some of the VMware NSX functionality onto the CPU of that NIC, of that DPU, to get some of those CPU cores back. That's kind of the high-level marketing around DPUs and SmartNICs.
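To make the "extra copy" cost Pete describes concrete in ordinary application code, here is a loose user-space analogy only; it has nothing to do with DMA or SmartNIC hardware, it just shows the same idea of receiving data into a buffer you already own instead of allocating and copying a new object for every receive.

```python
import socket

# Loose analogy: "copy per receive" vs. reusing a preallocated buffer.
# Real zero-copy on a SmartNIC/DPU happens in hardware; this only
# illustrates the extra copy step Pete says gets cut out of the path.

def recv_with_copy(sock: socket.socket, size: int = 4096) -> bytes:
    # recv() allocates a fresh bytes object on every call: an extra
    # allocation/copy the application then has to deal with.
    return sock.recv(size)

def recv_into_buffer(sock: socket.socket, buf: bytearray) -> int:
    # recv_into() writes straight into a buffer the caller already owns,
    # skipping the intermediate object, analogous in spirit to the NIC
    # writing directly into application memory.
    return sock.recv_into(buf)

# Usage sketch (assumes `sock` is an already-connected socket):
# buf = bytearray(4096)
# n = recv_into_buffer(sock, buf)
# payload = memoryview(buf)[:n]   # a view over the buffer, not a copy
```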

Tim McConnaughy:

I believe that the CSPs leverage this too, for NSGs and security groups and all of that. It's all based on SmartNICs on the actual hosts.

Pete Lumbis:

It absolutely is. The clearest example is Amazon acquired a company called Annapurna a long time ago. Annapurna's whole thing was, like, they were the, not even the first DPU, they were DPU zero. I don't even think they had really fully come to market before Amazon went and scooped them up and just shut down their commercial business. They were like, nope, you're ours now. You will not have any customers. We are your customer. The Annapurna NIC is what drives AWS.

Alex Perkins:

This is really interesting. So this made me think of a question of do you think that is more an innovation for the compute space or for the networking space, or is it just kind of like a combination of both?

Pete Lumbis:

I hesitate here because there's a couple of answers. The simple answer is it's an innovation for both, because really what you're doing is you're providing a brand new generalized resource to whoever finds the most value in it. So one of the things that you could theoretically do, I don't know what the capabilities are today, so take what I say with a grain of salt, but I could provide the ability to do storage block checksumming on that DPU. And so I slap this in a storage appliance and all of a sudden it just got 30% more performance by just installing a card. That's huge, especially when I'm talking about a Nutanix-like device or a hyperconverged-like device, where I'm running VMs and I'm running storage, and so every CPU cycle I get back means more VMs, which means fewer servers, which means more efficiency. So in the abstract, everybody wins.

Pete Lumbis:

I have extremely strong opinions that the general value of DPUs is to solve networking the way that we've always been trying to solve networking, in which the fundamental theorem of network design is smart edge, dumb core, and we have always treated the edge as the top of rack switch. And that is not the edge of the network. And the reason why AWS can create VPCs is because the edge of their network is the compute layer and the core of their network is the top of rack switch, and I think that the DPU fundamentally changes how networks get built in clouds. But we lack the capability to do it today. Like we have the parts, we don't have the glue.

Alex Perkins:

You mean, like, are you saying everyone lacks the capability to do this? Or can the CSPs kind of do this, right? Because they have things like, you're saying, they can create VPCs. It's just that this hasn't trickled down, or just the skill set isn't there? Yeah, yeah.

Pete Lumbis:

Okay, I can't speak universally to the CSPs, but in general, the CSPs are doing this today, right? They are putting the networking at that bare metal host, and the thing that is even more powerful about this is that you now normalize networking across your infrastructure. And what I mean by that is, if the NIC is my network demarcation, it is the exact same demarcation for Kubernetes, for VMware, for bare metal, for mainframe, for Windows, for Linux, like, it doesn't matter anymore. Whatever the computer is, that's whatever the computer is.

Pete Lumbis:

That is the solution. And so, to answer your question, the CSPs, these cloud service providers, the Microsofts, the Azures, the AWSes, the GCPs of the world, they've absolutely done this, because it's the only way it works. My point is that nothing stops the rest of the world from doing this, except for the fact that it's hard and complicated. All right, like, why do we buy cars instead of building them from scratch? I don't understand. We just smelt some steel and, like, harvest some rubber and make tires and, like, boom, you've got a car. I don't understand why you don't do it. It's the same thing. Like, it is complicated, and there is no good solution to solve that complication today.

Tim McConnaughy:

Just real quick, this is topical, because we often get into the idea on the podcast of what it would take for an enterprise to bring that cloud experience on prem, and I think that's a very good observation: to get that cloud experience on prem you really need to be able to bring that edge down with you, and the skill to do that and the silicon to do that are actually quite difficult to come by.

Pete Lumbis:

Yeah, absolutely. And I think that NSX, NSX is not that interesting to me, but they solve a problem, and the problem is: don't make me call the network team. And that problem, this idea that, you know, when you have a new piece of technology, it introduces some heartburn. The problem you're solving has to be, like, 3x greater than the heartburn that you're introducing. Nobody's going to go and be like, Linux is the easiest thing I've ever used, it's amazing. All of the pain that Linux gives you just pales in comparison to the value it brings. I think that about something like NSX as well. And so, Tim, to take your point, there has not been a solution that is enterprise friendly, that is flexible, for that, right? And I think, you know, we come back to kind of the topic of the podcast today, which is, like, compute and automation and things like that, and I think one of the things that compute has done is they've separated themselves out from the physical topology, and I don't mean that to be, like, network virtualization.

Pete Lumbis:

What I mean is that as soon as I care where things physically exist, I've just exponentially made everything more complicated. But if I can just say, give me three VMs and put them together, I don't care where the VMs live, right? I just assume that there are three of them and I assume that they're connected together. EVPN started down that route, right, to give me that same kind of connect-anywhere experience. But at the end of the day, I'm configuring a switch port. I have now put a physical constraint on everything I do, and it's like running with a drag chute. I will never catch up if that's my environment.

Chris Miles:

Yeah, it's funny, I like how you said that NSX solves a problem, which is to have them say, don't make me call the network team, which works out, because we don't want people to call us anyway. But back to the previous thing that we were talking about with the innovation of DPUs, that got me thinking about this. So, you know, the topic today was innovation in the compute space versus the networking space and why networking seems to be a little bit slower. Just then we kind of talked about this idea of enhancing the innovation space in networking, but it's by using compute. So is the innovation of networking always going to be tied to the progression of the compute space, or do we think that there is an opportunity to, you know, advance outside of that? You'd have to be really thinking outside of the box.

Pete Lumbis:

Yeah, yes, you have to be thinking outside of the box, and I think that we as network engineers have done a massive disservice to our own industry. And so, I think, first, you know, that's my little teaser before story time. If we look back a handful of years ago, at the 6500, you know, we talked about it at the beginning of the podcast, my favorite example: this thing had 128, 256 ports of really critical one-gig core connectivity. Right, if I lose that 6500, I'm gonna have a bad day. That 6500 was driven by a 600, 650 megahertz PowerPC single-core, single-threaded processor. Classic IOS did not support multithreading. And so people are like, I don't understand why networking is so, like, why do we care about packet formats? And, like, why is everything so rigid, and why are there all these sharp corners? And the reality is, up until just a couple of years ago, we were building the backbones of entire enterprise networks on, like, a TI-83 calculator. So we come from this world of trash compute. And the reason why is because what we had to do was hard and domain specific, and so we said, we're going to sacrifice the CPU for the sake of the ASIC, which is going to generate a bunch of heat and cost a bunch of money. And we did that, and we built these great networks.

Pete Lumbis:

And then there was this shift, probably around, like, the early 2000s, 2005, 2007, 2010, I don't know exactly, I'm not a hardware guy, when we changed everything and we no longer needed really crusty old IOS software that didn't understand the concept of multithreading. We were also able to put real processors next to networking ASICs. And the problem that we had as network engineers is we did not evolve with that hardware, or at least we did not evolve at the speed of that hardware. While compute people got virtualization, they evolved at the speed of virtualization. They went from 20 computers to 4,000 computers, and they were forced to adopt automation and distributed monitoring and distributed administration. For us, although that speed increased, our overall footprint did not increase that much. Maybe we went from a Cat 4k as an aggregation switch to 10 top-of-rack switches. I can live with that. I can still do that by hand.

Pete Lumbis:

This comes back to, like, the amount of pain versus the value. Like, there's not a 2x value gain to learn automation for 10 switches. And so now, all of a sudden, where you had linear progression for both compute and network, compute breaks away, and compute now has this massive coefficient to grow twice, three times, five times as fast as networking. And now what's happened is networking is left holding the bag. And here we are, where half of our organizations have gone to cloud because they're like, infrastructure sucks, it's too slow and I hate it. And the other half are like, we cannot go to cloud because of regulatory, because of compliance, because of cost, because of whatever, and I hate you. And we struggle with that so much. And I think that history is really important, because it's not just the hardware and the software, but it's also us. Culturally, we as network engineers have had to be very conservative, because, again, I have that one massive chassis, and if I bring it down, I'm effed. And even as we've evolved into leaf-and-spine fabrics,

Pete Lumbis:

it's really hard to get into the mindset of like if I crash a spine, nobody cares, because the reality is nobody cares. That's the whole idea of the architecture. But we're still thinking in a chassis based mentality and we just haven't moved fast enough.

Tim McConnaughy:

I think a lot of network engineers, though, have been, you know, for better or worse, trained to be extremely conservative because of the place that the network holds within the business. There's a couple things, right? For one, it's the classic problem of network as infrastructure, which might actually have something to do with our lack of innovation there as well, because it's like, you turn on the water and you expect it to work, right? You don't care that the pipes you laid are 400 years old and, you know, terracotta, just, like, old shit that has never been able to be replaced, because, you know, it's there and it works, and we have other shit going on in the business that is more important. So I wonder if there's a little form following function there.

Pete Lumbis:

I think there is. But I think this comes back to that disservice, where I 100% agree with you, but it behooves us to look at the compute world, where you could take that exact same argument 15 years ago and be like, I expect the people to rack and stack it and give me an iLO, and then I have a computer, like, I don't understand why this is hard. And then you evolve that to the next age, where it's like, I don't understand why you can't just give me a VM, like, this isn't hard, right? Why is networking still hard?

Pete Lumbis:

And, you know, my whole kind of hypothesis around this is that it's a combination of things. We are too worried, and it's not just us, it's management. Like, it's a whole chain, it's no individual. It's like, if you have managers who allow you to blow stuff up, then ICs are willing to take risks and figure things out. So I'm not blaming any individual contributors in an organization. But the other part of this is, I would ask any listener here to think about how they plan their change windows, and I unfortunately have a talk from eight years ago describing the network change window, which is: open Notepad, type commands, email the txt file to the team, have no one read it before your weekend change, and then cross your fingers and hope for the best as you copy and paste it in.

Chris Miles:

I thought that was funny.

Pete Lumbis:

I thought that was funny seven years ago, and we're kind of still doing the same thing. Ops sucks, but it's not ops' fault. Why are we doing that? Because how do you test that change otherwise? Do you build a million-dollar hardware lab? Do you use containerlab and get, like, 70% functionality, and you're like, I really hope that 30% I'm missing isn't really important, right? Or, like, CML or whatever.

Tim McConnaughy:

Yeah, containerlab, whatever.

Pete Lumbis:

And again, compare and contrast with the compute folks. What do compute folks do? They click three buttons and they get a whole cluster of VMs that they can deploy Kubernetes on and then test their change, to the point where that whole process becomes part of continuous testing. Where, like, on Monday I propose my change in GitHub, like, I'm going to change my infrastructure as code, my YAML file for Ansible, whatever. It automatically stands up three VMs in AWS. It automatically deploys Kubernetes to them. It automatically tests to make sure that that's working. It automatically deploys my change. It tests to make sure my change didn't break anything, and then a senior engineer looks at it to make sure that I'm not, like, trying to change things the day before Christmas. And what do we do in networking? Thoughts and prayers. That's the answer. It's true.
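As a thought experiment, here's roughly what that compute-style pipeline could look like pointed at a network change, using containerlab (which comes up above). The topology file name, the playbook name, and the test targets are assumptions for illustration, not a recommendation of specific tooling.

```python
"""Hypothetical CI step for a network change: stand up a containerlab
topology, apply the candidate config, run reachability checks, tear down.
File names and addresses below are made up for the example."""
import subprocess
import sys

TOPOLOGY = "lab/topology.clab.yml"       # assumed containerlab topology file
TEST_TARGETS = ["10.0.0.1", "10.0.0.2"]  # assumed loopbacks to verify after the change

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> int:
    run(["containerlab", "deploy", "-t", TOPOLOGY])       # stand up the virtual lab
    try:
        run(["ansible-playbook", "deploy_change.yml"])    # push the candidate config
        for target in TEST_TARGETS:                       # crude post-change test
            run(["ping", "-c", "3", target])
    finally:
        run(["containerlab", "destroy", "-t", TOPOLOGY])  # always clean up the lab
    return 0

if __name__ == "__main__":
    sys.exit(main())
```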

Tim McConnaughy:

Thoughts and prayers. It's all tied to hardware, though, right? Networking, ultimately, in a way that compute has been able to break away from.

Pete Lumbis:

But it doesn't have to be, and this is, I'm going to end up being a little bit of a shill for my former employer here, but that is a false narrative that drives me insane, because at Cumulus we were like, it's just a Linux box, and Cumulus's magic was: take the Linux kernel, shove it into the hardware. It's like the SpongeBob meme, like, just take it from over here and shove it over there.

Pete Lumbis:

And as soon as your software is the source of truth for your hardware, your software must have a software model. Cumulus invented the VRF for Linux, it didn't exist. You gotta have VRFs on a network device, so we went to Linux and we invented it. As soon as your software is the source of truth, the hardware doesn't matter. The hardware just becomes a jet engine. And what that means, though, is that I can take a software-only version of the NOS, a container, who cares what, and I can run that and I have the exact same functionality. And tell me any vendors who are doing the same thing.

Pete Lumbis:

Zero, and that is the thing that drives me insane about our industry. There's nothing that prevents Cisco, Arista, Juniper, you name them. I don't work for Cumulus anymore, I don't care. I don't even know what they've done in the last couple of years. But there's nothing that stops any of these vendors, except for technical debt. I'll give them that. It's a hard problem to solve, but that's why we can't do it. Why can't I make a full replica of my data center on my laptop? Compute can do it. Compute has dev, prod, staging, engineering. Every engineer gets their whole production pipeline deployment of compute resources to just break and play with, and I can't get a switch to make sure that I do switchport vlan or switchport vlan add, whatever.

Tim McConnaughy:

You're right. I mean, you're absolutely right. That's how we end up getting most of the changes that fail, and, you know, the bad looks from everybody, about, oh well, your change failed and you broke something on the network. And yeah, where the fuck do we test this? Or do we test it at all?

Pete Lumbis:

Microsoft did a research paper a couple of years ago, you can look it up, it's called CrystalNet, and they found, I'm probably going to get the number wrong, please forgive me, but something like 70% of their data center failures and errors and outages were misconfigurations, somebody messed it up. And CrystalNet was their attempt to build, they were like, you know what, forget the vendors, we're going to build our own simulators. And they were like, we're just going to simulate all the vendors we have in our environment, whatever they are. They don't talk specifics, but, you know, one starts with a C and another one starts with an A and probably one starts with a J, and you can kind of figure it out from there. They were like, we're going to build our own thing so they can simulate and test, because they realized through empirical evidence, we're doing a bad job.

Alex Perkins:

So, not to pour salt in the wound, but when you were at Cumulus, right, what about the customer side of that adoption? I know you were talking about how the vendors aren't doing anything, but what about coming from the customer side? What were some of the sticking points to get them to realize that things need to change as well?

Pete Lumbis:

It's new and it's scary. Like, it can basically be summed up to that, right? And it's twofold. Like, I try to be more of a realist and not a total hater. Like, look, you have 40 switches in the data center running this new vendor that you have to learn, and you're not very familiar with Linux, and that all sounds really annoying. And you have 70 other network devices that are Cisco. Like, I just don't care, I can't be bothered. I mean, there's a whole, like, all roads lead to the next XKCD comic, but there's an XKCD comic about automation, right? Like, look, if you spend a little bit of time, the amount of time you spend automating and managing that Cumulus network will become basically zero compared to the existing infrastructure. I mean, who has the time? Who has the energy? Like, I just don't. I can't be bothered to learn the new thing when I'm barely treading water with my existing thing.

Pete Lumbis:

And that's organizational failure, that's management failure, like, it's on a lot of parts. I'm not just going to blame the people, but at the same time I encountered a large number of people who were just unwilling to learn.

Alex Perkins:

Well, and also, you know, to bring it to one of my favorite comparisons, what kind of role do the standards bodies play here? Right, like, in the networking industry you have things like the IETF, but in compute you have, like, the Linux Foundation, and they're so night-and-day different. And it's like, why? I don't understand. Like, what is the holdup there? Why can't we be as innovative as the compute side? I don't know if you have thoughts on that.

Pete Lumbis:

Oh my God, let me stretch out for this one. So I think that the IETF, the IEEE, their function is exceptionally important when two boxes talk to each other. If we cannot agree on a protocol, on a framing, on a packet format, we have closed fabrics, we have closed networks. We will never have a routing protocol. I can never connect a Cisco device to a Juniper device. Everything's terrible. However, when we're talking about management, I don't need my Cisco management to talk to my Juniper management, and the failure that we have had in the networking industry, in the IETF, is taking that same assumption, in which we need a universal manager. Right, we want the Esperanto of network configuration, and do you know how many people on earth speak Esperanto? It's about the same number of devices that can be universally configured via IETF standards. Roughly, we'll round up to zero.

Pete Lumbis:

If you look at the Linux Foundation and the CNCF, what they've done is they've been like, let's find a problem space, and encourage and cultivate solutions in that problem space. This is my point of view on this, but the CNCF, for example, is like, what sucks about managing Kubernetes infrastructure? Like, that's their hype, they just put it out there, and then somebody comes up to them and is like, monitoring sucks. And they're like, cool, it does. Who are you? And they're like, we're Prometheus. Cool, welcome aboard, CNCF project.

Alex Perkins:

Right. Add to our landscape that has a million other projects.

Pete Lumbis:

And the thing is, there's, like, a capitalism element, where they're like, we're not going to just have Prometheus, like, that's not our decided standard, we're going to have, like, two or three, and it gives you a short list of who to look at, and then we'll let the industry decide who wins. And the IETF works completely the opposite, right? The IETF is like Soviet Russia, where a bunch of bureaucrats sit together and decide on the best answer for you to run in your network, and, you know, this is the answer, please don't ask, like, no opinions, no questions. And so it's slow, it's clunky, not the right answer. You know, we've had these things like NETCONF, we've had YANG, we've had OpenConfig. There have been a couple of different things, and somebody's going to write us some hate mail and be like, YANG models are actually really great, sure, whatever. The reality is, right? Alex says no. The reality is, bring it.

Alex Perkins:

They can send it all they want.

Pete Lumbis:

Yeah, thank you. The reality is, like, I don't want to write XML, so what's my alternative? And the alternative in IETF land is: just write XML, like, just suck it up, buttercup, conform to the standard. Conform to the standard. And so, if you look again, what does compute do? Let's peek over the fence and see how the other half lives, and they just do whatever the hell they want. Do you want to do it in Terraform? Do you want to do it with Pulumi? Do you want to do it with Crossplane? There's a bunch of different ways, and the industry's figuring it out. At the end of the day, what has happened in compute land that has not happened in networking land is: I have an API-first mentality to configure my system. And it doesn't mean I have APIs, it means the API configures the system. The CLI is a client of the API, not two separate things that program the same state. Like, if you run router bgp 65535, that is an API call.

Pete Lumbis:

Where we have failed is that even the good APIs are bolted on, and we have failed at reevaluating that to be API-first, because as soon as I'm API-first, I don't expect Chris to build a whole API library for Cisco. I don't expect Tim to build a whole API library for Juniper. I need somebody else to build that provider so I can plug into it. Why? It doesn't matter if you know what Terraform is or not, you have heard of Terraform, and the reason you've heard of Terraform is exactly that: they have abstracted out the AWS API.

Pete Lumbis:

They have abstracted out the Amazon API, the Azure API and the Google API to make it simple, so that you can write a little bit of code and get a whole bunch of value. And we absolutely cannot do that in networking, and there is no line of sight to being able to do that in networking. Nobody cares.
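
As a rough illustration of what that provider abstraction buys you, here is a small Python sketch of the same idea: the user writes a little code against a common interface while each provider hides its cloud's API. The class and method names are invented for this example; real Terraform providers do this translation behind HCL resources, not Python.

```python
# A rough sketch of the provider-abstraction idea. The class and method
# names are made up for illustration only.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    # The common surface the user codes against; each cloud hides its own API.
    @abstractmethod
    def create_network(self, name: str, cidr: str) -> str: ...

class AwsProvider(CloudProvider):
    def create_network(self, name: str, cidr: str) -> str:
        # A real provider would call the EC2 CreateVpc API here.
        return f"vpc-{name}"

class AzureProvider(CloudProvider):
    def create_network(self, name: str, cidr: str) -> str:
        # A real provider would call the Azure Virtual Network API here.
        return f"vnet-{name}"

def deploy(provider: CloudProvider) -> str:
    # A little bit of user code, a whole bunch of value: the same call works
    # against whichever cloud the chosen provider targets.
    return provider.create_network("app", "10.0.0.0/16")

print(deploy(AwsProvider()))
print(deploy(AzureProvider()))
```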

Tim McConnaughy:

Even the Terraform providers that exist for the traditional network devices that run in the cloud are pretty bolted on. Yeah, not only bolted on, but just so lacking, right? Yeah.

Pete Lumbis:

Right, sorry, I just got interrupted by a child. Can you ask that question again, Tim?

Tim McConnaughy:

No, there was no question. I was just pointing out that the traditional network vendors' Terraform providers are, like Chris mentioned, bolted on and just not feature complete. You can tell they were not built API first, as you would expect, and that's exactly right.

Pete Lumbis:

They're not API first. And I think, again, there are two huge components, right? Component number one is exactly what you said: they're not API first. And component number two is that there is a physical constraint, a physical topology constraint, around them. If I want to spin up an EC2 instance, what I do is provide a bunch of abstracted identifiers. I don't really care where they live. Give me this EC2 instance, give me this VM, attach it to this ID of a network, which is attached to this ID of a firewall, which is attached to this ID of a NAT gateway. The thing is, when I'm using IDs, I don't care where they live. But if I want to go use the Cisco Terraform provider, or whoever's, I have to type e0/0. I suddenly have to have a level of knowledge of the physical deployment that I super don't care about. And now, all of a sudden, I'm back in a place where not only is this not fun, it's actually more tedious than if I just logged into the box.
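
A quick sketch of that gap, with the standard AWS SDK call shape on one side and an invented device-provider data structure on the other; the IDs and names are placeholders, and the device dict is not any vendor's actual schema.

```python
# Compute side: the real boto3 call shape, with placeholder IDs and no
# credentials wired up. Network side: an invented stand-in for a typical
# device provider, not any vendor's actual schema.
import boto3

def launch_instance() -> None:
    # Everything is an opaque identifier: no rack, hypervisor or physical
    # port ever appears in the request.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder image ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",        # "this ID of a network"
        SecurityGroupIds=["sg-0123456789abcdef0"],  # "this ID of a firewall"
    )

# On the network side, even through an automation tool, knowledge of the
# physical deployment leaks straight into the configuration.
interface_config = {
    "device": "dc1-leaf01",
    "interface": "Ethernet0/0",  # the e0/0 problem: I must know the topology
    "description": "uplink to spine01",
    "ip_address": "10.1.1.1/31",
}
```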

Pete Lumbis:

Yeah, your two solutions are: give me a whole virtual copy, right, give me the ability to simulate my entire data center; or make the host the network edge and make the whole data center go away. Because if I just give a trunk up to my hypervisor and I make my DPU an EVPN endpoint, then who cares? Everything goes away.

Chris Miles:

I no longer care about the topology. And instead we're stuck in networking where we get neither. On the issues we just talked about with the standards bodies, using the example of the IETF, or the IEEE, versus the CNCF: do we feel like that ties to culture? Obviously culture is a huge piece of it, right, culture breeds the behavior. But do we think it also relates to a maturity and debt problem? Because obviously the standards bodies on the network side have to coexist with a lot of this ancient technology that we know is still prominent in the world today, whereas the CNCF is kind of a newer playground, right? You don't have to worry about the dangers; there's not a lot of lead paint around everywhere. You kind of have this safer environment to operate in, right?

Pete Lumbis:

You know, I think it's a really good question, because if there's not a bunch of lead paint around, what are we as network engineers going to munch on during a maintenance window? But while it's a valid question, I disagree with that being the problem, because if you look at the CNCF again, and I'm not saying the CNCF is perfect, it has its challenges, they're taking a problem-statement-first approach and just letting different solutions solve that problem. The problem that the IETF and, I think, OpenConfig, as they would say in the South, bless their heart, tried to solve is: what if we created a single abstraction for every vendor's configuration? And the thing is, you have two different problems there.

Pete Lumbis:

Problem number one is that now OpenConfig runs behind the curve. Cumulus comes up with BGP unnumbered; there's no OpenConfig model for that because Cisco doesn't support it. So now, all of a sudden, I can't use this feature that is cool and shiny, because not everybody supports it, even though it's dope and I want it. That's problem number one. Problem number two: I'm going to go to you, the GM of whatever at Big Vendor X. Chris, you own the business, you need to drive sales and revenue. We should put engineers on this thing that makes our config the exact same as any other box, so it can be replaced at any time with any other vendor. Can I get four engineers to work on that? You were never going to agree to that.

Chris Miles:

Never, never in a million years. You'd have to be insane.

Pete Lumbis:

Amazon has to be like, we're not going to buy 10,000 switches unless you do this. Those are the only two options.

Pete Lumbis:

Right, I'm not going to do it because some insurance agency in rural Iowa might buy it because they thought OpenConfig was cool after a conference they went to, right? That's not how it's going to work.

Pete Lumbis:

And so the IETF's dependence on universal agreement has been its hindrance when it comes to network management. And I think if you look at why Ansible has taken off, it's because I can just build a template. A template is not the best way to do these things, but it's the way that works for everybody, and Ansible has leaned into that. They're really the only tool out there that works with networking: one, because they're agentless, which is a whole other soapbox, but two, because I don't need a bunch of Ansible libraries. I can build a template, render the template and then push it, and so I can take my network engineer knowledge of what is unique and different, like the data structure of Juniper's BGP versus Arista's BGP, and that takes me five minutes. I don't need six months of IETF discussion. I just build the template, pop in some variables, boom, automated.
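
Here is a small sketch of that template-and-variables workflow using Jinja2, the templating engine Ansible itself uses. The per-vendor templates and the variable names are example content, not official data models, and the rendered text would still need to be pushed to the devices by whatever transport you use.

```python
# A minimal sketch of the render-a-template workflow, using Jinja2 (the same
# templating engine Ansible uses). Templates and variables are examples only.
from jinja2 import Template

TEMPLATES = {
    "arista": Template(
        "router bgp {{ asn }}\n"
        "{% for n in neighbors %}  neighbor {{ n }} remote-as {{ peer_asn }}\n{% endfor %}"
    ),
    "junos": Template(
        "set protocols bgp group PEERS peer-as {{ peer_asn }}\n"
        "{% for n in neighbors %}set protocols bgp group PEERS neighbor {{ n }}\n{% endfor %}"
    ),
}

variables = {"asn": 65001, "peer_asn": 65002, "neighbors": ["10.0.0.1", "10.0.0.3"]}

for vendor, template in TEMPLATES.items():
    # The engineer encodes what is unique per vendor in the template; the
    # variables stay the same, and the rendered text is what gets pushed.
    print(f"--- {vendor} ---")
    print(template.render(**variables))
```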

Alex Perkins:

Well, on that note, Pete, I don't want to cut this conversation too short. We definitely need to have a part two. I think we're running out of time, and there's so much more to talk about, so many additional questions that I think we all have, so we'll for sure have a part two on this.

Pete Lumbis:

Alex, you might be surprised, but I have some opinions in this space, so I look forward to number two.

Tim McConnaughy:

Yeah, we'll do it soon. This has been great. Awesome.

Alex Perkins:

All right. Well, thank you all very much for tuning in to the Cables to Clouds podcast. If you liked this episode, please share it with anyone you think might be interested, give us that five-star rating on your favorite podcatcher and, of course, hit those like and subscribe buttons on our YouTube channel. Until next time. Hi everyone, it's Alex, and this has been the Cables to Clouds podcast. Thanks for tuning in today. If you enjoyed our show, please subscribe to us in your favorite podcatcher, as well as subscribe and turn on notifications for our YouTube channel to be notified of all of our new episodes. Follow us on socials at Cables to Clouds. You can also visit our website for all of the show notes at cables2clouds.com. Thanks again for listening and see you next time.

Unraveling the OpenAI Leadership Drama
Comparing Pace of Innovation in Tech
Innovations and Limitations of Specialized Cards
The Evolution of Networking Challenges
Challenges in Network Change Management
Challenges With Network Automation and Management