Cables2Clouds

Cloud Networking at Unicorn Scale

Cables2Clouds Episode 55

Send us a text

The story of cloud networking rarely gets told from the perspective of those building it inside unicorn startups, but that's exactly what this episode delivers. Richard Olson, cloud networking expert at Canva, takes us behind the scenes of building network infrastructure for one of the world's fastest-growing SaaS platforms.

Richard's fascinating career journey began with literally throwing rocks with phone lines into trees during his military service, progressing through network operations centers and pre-sales engineering before landing at AWS and eventually Canva. His unique perspective bridges traditional networking expertise with cloud-native development approaches.

Unlike enterprises migrating from legacy environments, Canva started entirely in the cloud with minimal networking considerations. Richard explains how this trajectory created different challenges - starting with overlapping 10.0.0.0/16 addresses across development environments and evolving to hundreds of VPCs requiring sophisticated connectivity solutions. By mid-2022, these networking challenges had grown complex enough to warrant forming a dedicated cloud networking team, which Richard helped establish.

The conversation takes a deep technical turn exploring Kubernetes networking challenges that even experienced network engineers might not anticipate. Richard explains why "Kubernetes eats IP addresses for breakfast" in cloud environments, detailing the complex interaction between VPC CIDR allocations, prefix delegations, and worker node configurations that can quickly exhaust even large IP spaces. This pressure is finally creating compelling business cases for IPv6 adoption after decades of slow uptake.

Whether you're managing cloud infrastructure today or planning your organization's network strategy for tomorrow, this episode offers invaluable insights into the evolution and challenges of cloud networking at unicorn scale. Listen now to understand why companies are increasingly forming dedicated cloud networking teams and the unique skill sets they require.

Connect with Richard:
https://www.linkedin.com/in/richard-olson-au

Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

Check out the Fortnightly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Speaker 1:

And it was the same at Canva. So most enterprises have been fairly large for a long time. They're fairly mature. They've been through that progression from the 90s through the 2000s and ITIL and various waterfall turning into agile method, all that sort of enterprise type stuff.

Speaker 2:

ITIL is my trigger word, man. Don't say that.

Speaker 1:

I've triggered him. Oh sorry, Change management, release management, that sort of stuff, you love it. There you go.

Speaker 3:

He's like the Australian candidate.

Speaker 2:

Hello, hello and welcome back to another episode of the Cables to Clouds podcast. My name is Chris Miles at BGP Main. When you say Blue Sky Handle, do you say the dot-com piece?

Speaker 3:

Should I not say that? I don't, usually. I just use that part, because when they search for you it's going to be at BGP Main. They're not going to type in the whole thing, right?

Speaker 2:

Yeah, fair enough, All right. So at BGP Main, on Blue Sky, and as you may have heard, you know, my co-host rudely started talking before I introduced him, which he's not supposed to do, but you know, we'll give him a pass today. My co-host today is Tim McConaughey, as always at Juan Golbez.

Speaker 3:

Wait, no, it's not Juan Golbez. No, it's not Wonko Biz. No, at Carpe DMVPN, carpe dash DMVPN. Dude, you're killing it. He's a man of many personas.

Speaker 2:

You know, just search for him. You'll find him out there. He's out there somewhere. Today we have a very fun episode. I'm very excited for this one. So we're going to kind of go back to our roots on this one. We're going to go back to a bit of storytelling. You know we love hearing engineers' stories and things like that. So I've come in contact with a great individual who's joining us today, Richard Olson, who is currently at Canva, and I think he has a very interesting background and a very interesting story to tell. You know, we're a niche podcast. We talk about cloud networking; it's about as niche as it gets, and I feel like Richard has had one of the most niche roles in that particular context. So I think it'd be good to kind of talk to someone that's dealing with it day in and day out. So, yeah, welcome Richard, thanks for joining us and, yeah, glad to have you on the pod.

Speaker 1:

Thanks for having me, Chris, and I have to say you've called me niche in a very polite way. That's the first time.

Speaker 2:

Very special man.

Speaker 3:

It's like nerd, but like more polite.

Speaker 2:

Yeah, there you go, but yeah, so let's just hop right in. So, richard, tell us a little bit about you know, kind of your background and how you got to the point that you are now with Canva.

Speaker 1:

Yeah, sure, thanks. So, as you mentioned, I work at Canva and I should probably just introduce the company and product for those that haven't heard of it. It's a SaaS application, it's a design application, and I guess Canva's goal is to empower the world to design and make it accessible to everybody. So it's a really cool, interesting story. I'd encourage listeners to go look into it. It's been covered in various places. But another cool thing about the company is that the company's got a two-step plan. Step one, become the most valuable company in the world, or one of the most valuable companies, and step two, do some good with that, do the most good that you can. And that's a really cool ethos which I'm on board with. So, yeah, super happy to be where I am and to chat to you guys today.

Speaker 1:

I guess from here I've got a long story. I've been doing this sort of thing, well, not cloud, but this sort of thing, for 20-ish years. I actually started my career in IT literally throwing rocks into trees, which sounds really funny, but that was literally my job. So I was in the military at the time, in the army, and there's no internet or mobile phones, or any phones for that matter, in the bush. So you've got to make do with what you've got.

Speaker 1:

So I would take a phone line, a field phone line, get some electrical tape, tape that around a rock, find a tree that looked about the right height, and throw the phone line through the tree and run phone lines in the bush, because that's a lot quicker than digging. So that was my start, and from there I guess I pivoted into networking. We did a little bit in my role in the military; we did what was called a basic router course, and I think I learned on a Cisco 2500 series. That's probably dating me a bit now.

Speaker 2:

I think they were pretty old by the time I used them. I learned on that as well.

Speaker 1:

They were great boxes, yeah, they were really good, the octal cables for your out-of-band management. And so from there, I discharged from the military, briefly transitioned through the public sector, and managed to land a job in a network operations center. So I was one of those shift guys in those rooms that you see in the movies, you know, with the TVs wall to wall, and we had this fancy frosted glass as well, so we would bring customers into the room behind us and then unveil the NOC to the customers, which was this cool bit of theater there.

Speaker 2:

That's crazy, to know that so many NOCs are the same, that they want to showcase it like that. Like, I didn't know that everyone was like that, but my first NOC that I worked in was the same way; they had something that we called the fishbowl, which was like that exact same private view.

Speaker 3:

You come in and you see the whole thing, and if we wanted to do something...

Speaker 2:

Yeah, and if someone was, you know, kind of a class clown, so to say, we'd put them in the back so they weren't viewable from the fishbowl. But yeah, that's so funny.

Speaker 1:

Yeah, it's kind of weird being the center of where everyone can see, isn't it? And I'll be honest, we didn't look at the big screens at the front that much. I mean, they were probably too far away in a lot of cases to actually see anything, but it was cool having, you know, big network maps.

Speaker 2:

That's what it was.

Speaker 1:

Do you remember Cacti and the Weathermap plugin? You'd build these cool maps and it would show you, like, the throughput going between certain places with, you know, hot and cold colors.

Speaker 2:

Yep Very familiar with Cacti. I, I do remember it.

Speaker 1:

I don't think PHP is cool anymore, is it?

Speaker 2:

I'm not sure.

Speaker 1:

No, probably not. But yeah, from there I guess I worked up through the ranks. I started off on shift, which was pretty tough, doing night shifts, but moved more into, we'll say, level three support, and then progressed through design and a bit of architecture. Most of my customers back then were customers that needed a higher degree of security, so lots of air-gapped management networks, lots of crypto. I get the feeling I was probably one of the few people that used GETVPN, or Group VPN, if you're familiar with that one.

Speaker 3:

I really liked it. But DMVPN... which one? Sorry. GET was good, but you're right, I don't think it was widely accepted or deployed. Like, if you had it, it was great. But DMVPN, you were about to say, was way more interesting, way more prevalent, yeah.

Speaker 1:

Having that overlay network gave people some sense of security; your internal IP addressing schema is hidden from the untrusted underlay network, whatever that was. Whereas with GETVPN you had native routing failover, which I think was one of the key advantages. You didn't have to wait for that IPsec renegotiation. But yeah, so from there, I guess, being air gapped created a lot of challenges, because I worked for the MSP side of a reasonably big telco, and they had a lot of tools to help them scale. They were managing hundreds of thousands of devices, we're not talking a couple of hundred here and there, and not having access to those tools on the air-gapped network presented a whole bunch of problems. And so, from my perspective, I had to replicate these tools to some extent. There was a particularly big customer that we had at that time, I think 800-odd sites, a couple of thousand devices, and I had this enormous spreadsheet that was being managed by hand by the design team. Input validation was not a thing, this was Excel, so it was lots of rough data, and I had to turn that into literally thousands of devices' worth of what we call pre-configuration, or, you know, the sort of bootstrap config, so that the tech in the field would get connectivity back to us to dump the actual end-state config on. And so I actually started to learn how to code.

Speaker 1:

Back then I didn't have a coding background. I went to university to do basically IT telecommunications, so there wasn't much of the comp sci programming side, and I hadn't done much development. So I started out in Python. Python was pretty cool back then, I think this was circa 2010, and it was in that awkward phase of the Python 2 to Python 3 transition.
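A minimal, illustrative Python sketch of that spreadsheet-to-bootstrap workflow; the column names, template contents, and file names here are hypothetical, not the actual tooling described:

```python
import csv
from jinja2 import Template  # assumes Jinja2 is installed; any templating approach works

# Hypothetical bootstrap ("pre-configuration") template: just enough config for the
# field tech to get the device reachable so the real end-state config can be pushed.
BOOTSTRAP = Template("""\
hostname {{ hostname }}
interface {{ wan_interface }}
 ip address {{ wan_ip }} {{ wan_mask }}
 no shutdown
ip route 0.0.0.0 0.0.0.0 {{ wan_gateway }}
""")

# sites.csv stands in for the hand-maintained design spreadsheet, exported to CSV.
with open("sites.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        with open(f"{row['hostname']}-preconfig.txt", "w") as out:
            out.write(BOOTSTRAP.render(**row))
```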

Speaker 1:

Yeah, I was going to say Still cool, still cool. Still cool.

Speaker 3:

I like it.

Speaker 2:

It's still real to me, damn it, you know.

Speaker 1:

I like.

Speaker 3:

Python still.

Speaker 1:

It's a great language. I mean, like everything, pros and cons. I can work really fast in Python. Maybe it's not the best at everything. Yeah, true. So, from there I sort of started to learn how to code, and that sort of set me...

Speaker 1:

I mentioned the timeframe, 2010. That set me on this journey of, I guess, getting into network automation. I actually say I got into network automation before it was cool. And I had this opportunity to do a little bit of working travel through Europe while my partner, now wife, had a brief secondment over in the UK, and I was going from cafe to cafe smashing lots of coffee. But I had this opportunity where I had maybe a month of a clean slate just to work on a cool project. And this was around the time that Docker was starting to become cool, Ansible was starting to really gain some traction, and so I spent that month working out of cafes, learning the various bits and bobs that eventually started to form the foundations of network engineering and the tools that we use. So that was a really great opportunity that I had there.

Speaker 2:

This is before Docker, so to say fumbled the bag, as they say.

Speaker 3:

Yeah, it was still the most popular container at the time.

Speaker 1:

Yeah, I'm definitely not an expert in how to commercialize and make your startup a monetary success, but it was ubiquitous, wasn't it? Everyone used it. I think it was just hard for them to make a dime out of it, to monetize it.

Speaker 3:

Yeah, ultimately.

Speaker 1:

Yeah, I mean we've seen parallels with things like, yeah, Terraform, for example. Terraform Cloud was the money-making engine.

Speaker 1:

I don't know how many people use Terraform Cloud, but certainly Terraform is everywhere, right? Yep. How do you monetize that? But yeah, from there, that was kind of, we'll call that, the peak of my project design phase. I was definitely hands-on tools then, rolling out lots of pretty large-scale networks. But then from there, I had a friend of mine who introduced me to a pre-sales manager at F5, and we hit it off, and next minute I'm working through this pre-sales phase of my life. So I worked for a couple of US tech vendors there, started with F5, followed by Juniper, and it was at this time that the network automation arc really started to mature. SD-WAN was starting to become a real thing, and that was a large part of my role in that pre-sales space.

Speaker 1:

But I wanted to keep refining my code skills, and around 2015, 2016, I had this opportunity to start working with the cloud, particularly Amazon. One of my customers wanted to run a number of proofs of concept in different clouds, so I had the opportunity to connect, I think it was, three clouds at the time, definitely Amazon and Azure, and I had this taste and I wanted to learn more. So I was actually super fortunate, in around 2021 I think it was, to get offered a solutions architect role at AWS, and that was really cool. So I definitely stepped away from my comfort zone, networking, and moved into the world of cloud, and I was a generalist at this stage, and I think everybody on the call knows how many products Amazon have.

Speaker 3:

So that was intimidating.

Speaker 1:

Yeah, and, fair enough, Amazon had this expectation, I think they called it level 200. You'd be level 200 across all of those core products, and I'm, as you said so politely before, Chris, kind of niche. I knew the networking like the back of my hand. But yeah, all the compute, the data, what's an S3 bucket, all those sorts of things, I mean, not quite to that level. AI was just starting to become a thing now too, so that was part of the role. But one of the really cool things that happened when I was at Amazon was reInvent 2021, which to me felt like the year of networking. Just a whole bunch of things started to happen. IPv6 was getting taken more seriously. Cloud WAN was in preview, I think November, December-ish that year, and became GA the following year. PrivateLink started to gain a lot of maturity. Just a whole bunch of really interesting features started to drop around that time. I think VPC Lattice may have been dropped.

Speaker 2:

The following year as well, I think it was 2022.

Speaker 3:

2022,. Yeah, was it. Yeah, I think that's right.

Speaker 2:

I think you're wrong, but yeah.

Speaker 1:

To me it felt like peak cloud networking at that time. I'm like yes this is a thing and this is still really, really interesting, because, I don't know, to me just having some VPCs and peer links is not that exciting.

Speaker 3:

It's just Fisher-Price like very, very basic networking.

Speaker 2:

I think it's. I mean that probably was. I don't know if that time will come back, but Tim and I just got back from reInvent a few months ago and there was not much networking content.

Speaker 3:

I'll tell you that. They're back to developer and AI. Well, it was AI everything, right, but there was definitely more developer focus.

Speaker 1:

Well, AI is so hot right now.

Speaker 3:

Yeah, they've definitely the pendulum swung back the other way for sure.

Speaker 1:

Yeah, definitely. But it was around this time that I was kind of thinking to myself, I love being a generalist. This broad exposure to literally everything in the cloud has been really, really beneficial for broadening my skill set. But I don't want to lose touch with my roots. I think I've got a really valuable, deep set of skills, and now I can combine that with this new environment, this new world called the cloud. And I was just starting to think about this, and I really enjoyed working at AWS. But then one day in my email box I got two notifications, and two different companies simultaneously, I think within a day, were advertising for a cloud networking engineer. And I'm like, cloud networking, that sounds like what I want to do. I was thinking about becoming a network specialist, to go down that path at AWS, but I'm like, oh, this is an opportunity to jump back into the project deep end, start delivering cool things, have that satisfaction of building something, as well as doing what I think I really want to do: cloud networking.

Speaker 1:

And I was fortunate enough to get an interview, an initial conversation with recruiters for each of those companies, and during those interviews I asked a very specific question, because both of these companies were very developer centric: are you looking for a developer who can do a little bit of networking, or are you looking for someone who's done a lot of networking who can do some development work? And in the case of Canva, which was one of these two companies, they were looking for the latter. They wanted someone who can do networking with some development. And, yeah, the rest is history.

Speaker 1:

So I've been at Canva now for about two and a half years, maybe a little bit over. So, Canva, maybe we'll talk about this soon, but Canva didn't actually have a cloud networking team, and the day I started, with another coworker of mine, the cloud networking team was formed. We joined someone who was kind of looking after networking at the time and we became a team of three, the cloud networking team. I've spent some time as the coach of that team, which is our term for management, and now I'm in a technical leadership, individual contributor sort of position in that same space. I'm predominantly looking at things like networking and, just as important, the intersection of networking with applications and compute and that sort of thing. So that's my story.

Speaker 2:

Yeah, I think that's the thing. I found that so interesting, and when we first met, Richard, I was kind of surprised at how, I guess I will say, mature the cloud networking team was at Canva. Because, I mean, like you said, you've been on the cloud networking team for two and a half years now. I feel like most companies that I talk to are just now starting to kind of spitball on whether or not they form a cloud networking team, right? It's becoming a thing, which is good, you know, good for all of us, good for, hopefully, this podcast, but we'll see.

Speaker 2:

We need to break out of being niche, yeah, there you go. We need to be run rate pretty soon. But yeah, that's a super, super interesting story, especially because you started... the first position you had was at an enterprise.

Speaker 1:

Yeah, that's right, at a telco MSP, so looking after various enterprises.

Speaker 2:

Because I, for one, have never worked enterprise, and, like, that's the thing: I feel like I've heard 10 times more horror stories about working in enterprise than anywhere else.

Speaker 2:

But maybe this kind of speaks to what you were talking about before with you know kind of the company that Canva is and kind of the ethos that you're working in, which sounds pretty cool. But one of the things that I thought was really cool is Canva is, you know, kind of operating, you know by the industry term as a tech unicorn, right, and they're kind of this born in the cloud company, and I don't think you guys have ever had a physical data center across the globe, necessarily. It's all been 100% cloud developed and cloud deployed, right. So I mean, from that perspective, what kind of problems do you think, like born in the cloud companies typically have to deal with compared to you know, traditional ones? Like, do we all end up kind of suffering the same and bleeding the same blood, or is that trajectory a little different?

Speaker 1:

It's a bit of a mixed bag. So it's a bit of both, I've found. We do end up suffering some of the same things, but some of it is definitely unique to being here. So I guess, yeah, you're absolutely right, we're 100% in the cloud, in the sense that we don't have any data center footprint or infrastructure. I mean, you know, the office has some internet connectivity and Wi-Fi and that sort of stuff, of course, but we're not like a classic enterprise where you might have your headquarters with two big routers and a whole bunch of campus switches, and maybe you'll have your dark fiber lit up to your data centers and this SD-WAN branch construct. There's nothing like that. Everything is in the cloud.

Speaker 1:

So I think, yeah, as I said, ultimately we're going to have the same problems, but we started in a very different place. I think everyone who's worked in networking, unless maybe you're a startup ISP or a network-technology-specific startup, knows that people aren't going to need your specialist skills until you get to a certain scale, and it was the same at Canva. Most enterprises have been fairly large for a long time. They're fairly mature. They've been through that progression from the 90s through the 2000s, and ITIL and various waterfall turning into agile methods, all that sort of enterprise type stuff.

Speaker 2:

ITIL is my trigger word, man. Don't say that.

Speaker 1:

I've triggered him. Oh sorry. Yeah, change management, release management, that sort of stuff. You love it.

Speaker 3:

There you go. He's like the Australian candidate.

Speaker 1:

Yeah, there are pros and cons to every approach, right? It certainly aimed to stop the cowboys, and I think it definitely stopped the cowboys. But the way Canva started, it was a very, very small group of people who had this vision to build an application. And these people weren't thinking about, yeah, what's my IP address schema going to look like, and what's the fastest ASIC that I can chuck into a switch in a rack. These were not problems that they had. Their problems were building this app, getting to MVP, building features, building the customer base and that sort of thing.

Speaker 1:

So, in the early days as well, one of the massive benefits I think everyone can agree on about cloud is that it makes it so easy to start out with things. You build a VPC, you chuck in a couple of subnets across availability zones. Like, the effort you used to have to go to in the 90s and early 2000s to achieve that was enormous, and now we can just click a few things or, better yet, use infrastructure as code. And so, yeah, we started out, I think it was originally one VPC, which quickly grew to three for different development environments, and I guess maybe you can feel where this is going. Each of these VPCs was given the same prefix, 10.0.0.0/16. So personally, I've never come across that sort of thing in enterprise, the prevalence of overlapping IP addresses. But this is part of the growth and maturity that anyone would go through, I think. So from my perspective, great, low-hanging fruit. This is an easy challenge. Let's fix it. But from there, I mean, we as a company were very fortunate. The product was successful, customers started coming on board, and there was this period of really, really rapid growth. And rapid growth means that the teams are getting bigger, the number of services that we have is getting bigger, and while a lot of the application-specific services lived in these VPCs, other VPCs were created around them to form part of the process, whether that's part of our tooling, internal services and that sort of thing.
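To make the low-hanging fruit concrete, here is a minimal, illustrative sketch of an overlap check across environment VPC CIDRs; the environment names and ranges are hypothetical:

```python
import ipaddress
from itertools import combinations

# Hypothetical per-environment VPC CIDRs; in the early days all three were the same /16.
vpc_cidrs = {
    "dev": "10.0.0.0/16",
    "staging": "10.0.0.0/16",
    "prod": "10.0.0.0/16",
}

networks = {env: ipaddress.ip_network(cidr) for env, cidr in vpc_cidrs.items()}
for (env_a, net_a), (env_b, net_b) in combinations(networks.items(), 2):
    if net_a.overlaps(net_b):
        print(f"{env_a} ({net_a}) overlaps with {env_b} ({net_b})")
```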

Speaker 1:

And now we're getting maybe a handful of VPCs, let's say a dozen or two, and the problem then becomes, okay, how do these VPCs communicate? And I think in the cloud, we talked about VPC peering just before. It's boring, but it works, right, and it's fast, and yeah, that's one option. But another option you can take, you don't even need to think about networking.

Speaker 1:

We've got the internet, and it's trivial to connect the internet to VPCs, create public-facing load balancers, and establish some degree of connectivity between your VPCs, and you still don't need a networking team. Load balancing, HTTP, web services, TLS, these are things that developers are very, very comfortable with, so you still don't need a networking team. But what happened then was we just kept growing. So we got more engineers, services started to become more specialized, the teams themselves were getting bigger, and when I joined mid-2022, I think we had around about 100 VPCs, and it was really starting to get to this point where managing IP address conflicts was getting challenging. Connectivity between things, without having to build out all of these public-facing load balancers and services, and the cost associated with those, started to become really challenging, and so the cloud networking team was born. And I'm very happy that it was born. It's been great. Yeah, that's awesome.

Speaker 2:

Yeah, so you got in on essentially at the ground floor, I assume, on the cloud networking team.

Speaker 1:

Yeah. So it was, oh yeah, a couple of hundred VPCs, not really much connectivity between them, a handful of VPC peer links here and there. Like, I didn't know what to expect when I was joining. It could have been anything. I had no idea how mature they were, and it was an absolutely fantastic opportunity. Because here we are, as you say, ground floor. We've got to build a network. Awesome, I know how to do this. I've done this before. Let's do it.

Speaker 1:

And so, you know, one of the first things we did was look at technologies like Transit Gateway, in AWS particularly. That certainly makes it a lot easier to connect a whole bunch of VPCs than, you know, a full mesh of VPC peer links and that sort of thing, and we got going from there. And other things: egress controls rolled into this, DNS has been rolled into this, and so we went through a pretty rapid period of a lot of big wins. Like, going from no connectivity to almost ubiquitous connectivity is a pretty cool thing. And then, from there, a couple of other things happened as the business itself started to mature. Let's look at this infrastructure stack. What can we do to align this with best practices so that we're not inhibiting growth in any way, making sure that we're agile so the developers can do what they need to do? And things like the AWS multi-account best practices started to sink in. So what are we doing to ensure that our services are separated according to these best practices?
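As a rough illustration of that hub-and-spoke move, here is a hedged boto3 sketch of creating a transit gateway and attaching a VPC to it; all IDs are hypothetical, and in practice you would wait for the gateway to become available before attaching:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the hub. In real life you'd wait for the transit gateway to leave the
# "pending" state before attaching anything to it.
tgw = ec2.create_transit_gateway(Description="hub for spoke VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per spoke VPC, instead of a full mesh of VPC peering links.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",                                       # hypothetical spoke VPC
    SubnetIds=["subnet-0aaaaaaaaaaaaaaa0", "subnet-0bbbbbbbbbbbbbbb0"],  # one per AZ
)
```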

Speaker 1:

IAM, it's not bad, but it certainly has a point where you want to move to a hard account boundary, where there's an explicit lack of access between two different resources. And so from there, great, we've built a network, we've got a couple of VPCs, this is awesome. How are we going to cope with possibly thousands and thousands of AWS accounts? You know, this could be as granular as one per microservice. Is this, you know, an account per Lambda? How are we going to cope with this? And so we went down this path of looking at shared VPCs.

Speaker 1:

I think it was, was it reInvent 2022? Maybe. Netflix was up on stage talking about shared VPCs and Amazon multi-account, and how that intersected with networking. So we went down that path, and it's been pretty successful. Certainly, there was a point in time where I'm just like, damn, I'm creating dozens of VPCs a week. This is not sustainable. We need to do something about this. And we very quickly went from 100 to quite a few hundred, and I'm sure Transit Gateway made some of that a lot easier, but it just didn't feel right; it had a smell to the architecture. It's not something that I think we wanted to go down. I mean, do we really want to have a database in one VPC and a service accessing that database in another VPC? There's a lot of administrative and cost boundaries that you've got to go through to do that. Is it the most cost-effective solution? Probably not.

Speaker 1:

So, shared VPCs. They were a great solution to that.
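For a sense of the mechanics, a minimal sketch, assuming AWS Resource Access Manager is the sharing mechanism and that all ARNs and account IDs are hypothetical: the owning network account shares subnets so that application accounts launch workloads into centrally managed VPCs.

```python
import boto3

ram = boto3.client("ram")

# Share a handful of subnets from the central networking account with an
# application account. Workloads in that account then land in the shared VPC
# instead of yet another VPC being cut for them.
ram.create_resource_share(
    name="prod-shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:ap-southeast-2:111111111111:subnet/subnet-0aaaaaaaaaaaaaaa0",  # hypothetical
    ],
    principals=["222222222222"],        # hypothetical application account ID
    allowExternalPrincipals=False,      # keep sharing inside the AWS Organization
)
```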

Speaker 3:

I mean shared VPC actually makes a lot of sense. We have lots of customers and lots of people that do it From an administration perspective. If nothing else, it makes it infinitely simpler to be able to share out, like here's your subnet, here's you know. And then do you find with the shared VPC constructs that security is more difficult to enforce than not shared, like if you actually split the VPCs.

Speaker 1:

Not so much. I guess we're not going to one, we'll call it mono VPC, where every account gets access to that. We've got a couple of coarse degrees of separation.

Speaker 1:

At its most basic, we've got a production and a non-production network, and then within those, certain things have a degree of affinity. So things that are associated with a certain part of the application will all live in the same VPC. They were probably already living in the same VPC; just now it's a shared VPC running on Kubernetes instead of, say, a bare metal or ECS task, right. Not bare metal, an EC2 instance.

Speaker 1:

Yeah, I mean, they have it, you can buy it, you can get it. So that was... yeah, and I think that actually, just from a reduction in the number of, we call it KTLO, keeping the lights on, the reduction of KTLO was enormous. We're no longer cutting dozens of VPCs every week. Yeah, I believe it. We cut a dozen and then we just share them. And so I guess, maybe tying this all back to the original question: are we bleeding the same blood?

Speaker 1:

I guess where I was going with this story is no, I don't think we are. Because if you look at a traditional on-premises data center, you don't have a VPC construct, right? Like, you've got subnets, and generally in your data center maybe you'll have some segmentation and that sort of thing, but generally in your data center you're going to have a bunch of subnets which can all communicate at very high speeds with a bunch of other subnets. A VPC is a very specific cloud construct which was designed to hold your cloud resources in a kind of private network, and later networking between those VPCs became a thing. So they kind of started at opposite ends, and the VPCs and the cloud networking constructs are slowly making their way towards that more ubiquitous connectivity. But I don't think that was a problem that a lot of people on-prem had, like, how can I get from A to B? You already can. You're plugged into the network, right? Right.

Speaker 3:

I think, yeah, user to user or user to app, that's true. Like, when I got to the enterprise job I had, it was a big flat network. It was a global company, and every country just had its own slash 16. And there were offices that had data centers in them, offices that were connected to data centers, whatever it was. But every site basically had, like, its own slash 16.

Speaker 3:

So there was one where the office was sitting right next to the data center, like in the same building; you could walk across the floor and get to the data center. All one flat network. So you got all these people that can just, like, attach to servers and stuff. So one of the things I did when I went in there was to break all that up and segment it and install firewalls and create that segmentation. It's very much the same thing. So I don't think we're bleeding the same blood, but I do think, you know, depending on the maturity of the networks we're talking about, there's definitely some similarity, when at some point you just got to break it up, get your segmentation in there, enforce your security boundaries.

Speaker 1:

This doesn't mean you didn't have firewalls and that sort of thing, and segments. It's just that you're starting from a world which has literally hundreds of segments, and you've got to kind of coalesce them, whereas on-prem networking is typically the opposite of that.

Speaker 1:

You don't start with 100 segments that you need to, you know, do VRF route leaking between. You've probably got one segment and maybe a firewall. If you're getting really big, you'll have a couple of segments. Yeah, I don't know, it's been a while since I've done enterprise networking. VRFs, how many does one have these days? It used to be less than 10.

Speaker 2:

More than you think. I've seen it all. I've seen some people with three or four. I've seen some people with, like, 600. And I'm like, you run a production network in that fashion?

Speaker 1:

Yep, resume-driven development. Exactly.

Speaker 2:

Exactly, yeah, so that's a nice segue. So, you know, on the show we talk a lot about... I mean, if you're involved in cloud on a daily basis whatsoever, there's a very strong focus on migration, right? There's all this talk about whether or not we're in the early innings of the cloud, or, you know, to use baseball terms, maybe we're at the seventh inning. You know, we should be at the seventh inning stretch by now, but we're not even close, apparently.

Speaker 2:

You lost me at baseball. I'm sorry. Yeah, sorry, sorry, he's an American.

Speaker 2:

So, yeah, I don't know the equivalent NRL or potentially cricket term to use there. But you know, we'll stick with baseball for now. But there's a strong focus on migration and moving from on-prem to the cloud, right, and we still hear about that on a very regular basis. But you guys didn't have to go through that particular problem, right? You're still dealing with migrations, but I'm assuming they're more internal, about migrating from service to service or from construct to construct type thing. But maybe tell us a little bit about how that's affected the kind of overall culture and impact on the infrastructure teams compared to what you might see in a traditional organization.

Speaker 1:

Yeah, definitely. So you're right, I don't think anyone in IT can escape migrations, right? There's always something new which brings benefits of whatever sort, and let's jump on this new thing, it looks great. So we don't have to deal with... oh, I don't even remember how many Rs there are of cloud migrations.

Speaker 2:

You know, refactor, re-shift, I don't know. As we're having this conversation, they've probably added a couple more.

Speaker 1:

Yeah, there's a lot of different, complicated ways. So no, we don't have to do those refactors, but absolutely, migrations. So, I mean, some examples off the top of my head. I wasn't around in this era so I may have it slightly wrong, but I believe we started off on EC2 instances, you know, autoscale groups behind ALBs and that sort of thing. Those were changed to be containerized, and we containerized the application and moved to ECS, and from there we've since moved to Kubernetes. I think when we moved to ECS, Kubernetes was still pretty fresh and new, EKS maybe wasn't as mature as it is now. So we went along this journey, and that's definitely still a thing. Luckily they're both containerized, so from that perspective there wasn't any refactoring required. But certainly it changed a whole bunch of things like tooling and the ecosystem, troubleshooting and all that sort of stuff. But I mean, maybe this isn't so different from other places, other companies whose primary business is making software. But for me this is the first, we'll call it, software engineering company that I've worked for, and the focus and the structure reflect that.

Speaker 1:

What was the term that you used before? How does it affect the infrastructure teams? Everything is organized according to, I guess, platform teams and this is the more modern take on the evolution of DevOps. And there's so much material out there on differences and different opinions on those that I'm not going to wade into that. But I guess I started off my journey at Canva in the cloud networking team, and maybe this is just me, maybe this is networking people, but the network has always been a platform. If you want to communicate from A to B, you need to use the network. One doesn't simply have connectivity. So I guess maybe my mental model of a platform team was always networking, but certainly other teams around me were never based on platform teams, and so I work in what we call the runtime platform subgroup, and runtime wasn't a term that I was familiar with until I joined Canva, and it basically means it's where the code is running.

Speaker 1:

So you think about the mentality of a software engineering business. There's a whole bunch of different phases and places where that code is written, is built, compiled. Runtime is where it's actually running. And so those traditional enterprise silos, where you've got this massive team of systems administrators in a silo, who don't talk to the networking team, who don't talk to the firewall team, all of those constructs are broken down, and we're all in these platform teams who offer our services to others, and there's a real self-service drive here. We want people to be able to consume the network without me, you know, getting the email and the ITIL ticket to update a VLAN description and that sort of stuff. That's out the window. And this particular platform team, the runtime platform, is within the cloud group. So I think that's probably where we're starting to get a similar-ish structure to a lot of places. You have a cloud team, some places call them CoEs, which maybe have more of an architecture flavor to them, but certainly we're all doers, we're all engineers. And that's probably a good segue to say...

Speaker 1:

One interesting part about working here is that, unlike a lot of enterprises, I'm one of just two networking people, or people with a networking background, in the business. I would say that 90-plus percent of the engineers have more of a development background, and so that's a real unique culture shift, and perhaps explains a lot of the thinking around that software engineering practice and methodology. So all of the things that we talk about in terms of network engineering: we've got to put our configs in Git? Well, that's a given, right. We've got to use CI pipelines? Like, which one? I've got dozens that do different things and work in different ways. So a lot of those challenges that I think a lot of other businesses may have, or enterprises moving to that methodology, they're kind of already solved because of where we came from, and that's been really interesting.

Speaker 2:

Yeah, I guess that's the thing: software has been doing this much longer than we have, right? So it's like we're still on the back foot adopting this. We're even still debating on what tooling to use, whereas they've had stuff in place for many years now. So, yeah, definitely very interesting.

Speaker 1:

One small anecdote to that, and it may be spicy, but just because it's different and it's the way that software people do it doesn't mean it's necessarily better. One interesting introspection from being on the other side of the fence: take your mind back to configuring a firewall, and you need to add a new URL to a URL list, right? Let's assume that all of the administration, the ITIL change management that triggers Chris so much, is sorted out. You jump on your firewall, you add a URL to whatever construct in your firewall, click save or commit, whatever it is, and walk away. The whole thing might take 30 seconds. Fantastic.

Speaker 1:

Now you put this into a CI pipeline, and that pipeline's not necessarily optimized for firewall rule change deployments. It can take 20, 30 minutes to go through all the various CI steps. Maybe you're building a test firewall in this hermetic environment so that nothing affects it, just for the purpose of adding this one firewall rule, right? I'm not saying that the approach is necessarily bad. CI is a fantastic tool and has a lot of uses. But, done wrong, there's always going to be something you can cut yourself on.

Speaker 3:

Yeah, Like, should we spend a huge amount of time automating something that we're going to do three times ever, for example? You know?

Speaker 1:

Oh yeah, exactly, and the closer you get to the physical world, the more that becomes apparent, right? Like, going down the infrastructure as code path, is it worth me writing some Terraform to create a dedicated Direct Connect on Amazon when it's going to take, you know, an LOA and four days of people doing cross-connects and that sort of stuff? Like, where's the value?

Speaker 3:

Like, what are we actually gaining from this? I'm not saying I didn't...

Speaker 1:

I absolutely do have this in infrastructure as code, for example, but just to point out the example with the differences of approach.

Speaker 2:

Yeah, that makes sense. So it's good that you're still dealing with those legacy things like LOA-CFA, like we all do, right?

Speaker 1:

Not very much, no. I have had to deal with LOAs and cross-connects on a very, very limited basis, predominantly high-speed cloud interconnects, but that's not my day-to-day. How often do you need to do that? Not very often. But maybe on that subject, actually, I did touch on it, infrastructure as code. Back to that, networking automation is still trying to catch up.

Speaker 1:

When I came here, literally everything was in infrastructure as code. I mean, don't get me wrong, when you're tinkering with something, you may need to do some click-ops here and there, build some resources, try to remember to tear them down and that sort of thing, so you get familiar with how the product works and the constructs. But once the rubber hits the road, everything is in infrastructure as code. And that's wildly different to where I've come from in a lot of places as well, where, I mean, I've seen spreadsheets that generate router config. I've seen, you know, all the way through to perhaps a more mature Ansible playbook and some Jinja templates doing bits and pieces here and there. But to have everything, absolutely everything, in infrastructure as code was quite an eye-opener.

Speaker 2:

Yeah, definitely, super interesting. So let's actually use that as a point to pivot here and talk a little bit about Kubernetes. So I know you've mentioned Kubernetes up to this point already, and from talking to you, Richard, I know that you're maintaining one of the most sizable deployments that I've ever heard about running in the cloud. So let's kind of open that up a little bit. Given your strong networking background, what is, so to say, the good, the bad and the ugly about maintaining a Kubernetes environment of that size?

Speaker 1:

Oh, the good, the bad and the ugly. I'll probably start with... I'm going to start with the ugly. Just in my case, in my experience, and for other people your mileage may vary: Kubernetes eats IP addresses for breakfast, from a networking perspective. And what do I mean by that? So, a super whirlwind tour of Kubernetes networking: Kubernetes has a concept called a CNI, which is used to implement Kubernetes networking. The CNI is responsible for handing out IP addresses, making sure that the pods in a Kubernetes cluster can communicate with each other, you know, without NAT and that sort of thing. And there's a whole bunch of CNIs out there. But when you look at products like EKS, or GKE in Google Cloud, they both come with a CNI which has some opinions, and their CNIs both opt to use what I'll call VPC-native addressing. So we're not talking overlay networks, which one might find on-prem. You know, there's no VXLAN, Geneve, WireGuard, whatever overlay technology your particular CNI wants to use. These are actually getting assigned IP addresses which are native in the VPC substrate, if you will.

Speaker 1:

So straight away we've gone away from the ability to isolate these. You're in a VPC, those VPCs are on the network, and all of these IP addresses need to be unique. We can't just slap 100.64/16 on them or whatever. Now, in the case of Amazon, and I'll focus on Amazon, the biggest IPv4 prefix that you can assign to a VPC is a slash 16.

Speaker 1:

Now, coming from on-prem networking, that's heaps, right? Like, that's what you might assign to your big sites, or maybe it's a good chunk of your data center. 65,000 IP addresses? No one's ever going to need that much, right? Well, you do with Kubernetes. So let's break this down in Amazon. We're given our VPC a slash 16. We've got availability zones in Amazon, and we need to create subnets per availability zone; that's the way Amazon works. So across three availability zones we can get some slash 18 subnets. And again, they still sound pretty big, like 16,000-odd IP addresses.

Speaker 1:

Who's ever going to use that many? Like, do you have 48,000 pods? No, probably not, unless it's quite a decent-sized cluster. So I'm still not seeing the problem here, Richard. But then, the way Kubernetes actually operates and the way it integrates with the network in these cloud vendors is, in Kubernetes we want to do bin packing, which is, we're trying to jam as many active processes that use up as much of the CPU as possible into a given worker node, right? Like, the days of being on-prem and setting a CPU threshold alert for 80%, to say, you know, this is bad, you might need to shift some stuff: it's the opposite in the cloud. You want to set the alert for when it drops below 80%, because that's wasted capacity you're paying for.

Speaker 1:

Yeah. So we want to do this bin packing, right? We want to put as many pods as possible onto a given worker node, and the best way to do that in AWS is using prefix delegations. Now, in AWS you have an ENI, an Elastic Network Interface.

Speaker 1:

On a worker node, that ENI has, I guess I'll call them slots, a fixed number of slots, depending on the size of that node, for how many secondary IP addresses you can have on there. And it's usually pretty low, right, like 15-ish, we'll say, let's pick that number, and we want to have more pods than 15. In fact, in a lot of deployments that I've seen, there's a concept called a DaemonSet, which is a particular type of thing that runs on every node. Some deployments I've seen have 10, 20 DaemonSets. So that means straight away our nodes are using quite a few IP addresses already.

Speaker 1:

Now, given this limitation on the number of slots on an ENI, AWS came up with this concept called prefix delegation, where, in the case of IPv4, rather than putting on a single IP address, a slash 32, you can put on a slash 28. And that gives you, per slash 28, 16 IP addresses for each of these slots, and you can have the same number of prefix delegations as you can secondary IP addresses. So straight away, okay, cool, maybe we can get a hundred, a couple hundred.

Speaker 1:

Yeah, on a given node. And then we're starting to get into the territory of, like, how do we size our nodes? How many pods is it acceptable to have go down when a given node has a problem? And we're changing the paradigm here a little bit. But let's say, for example, we've got these slash 28 prefix delegations. Now, when you look at that against the context of your slash 18, you've only got 10 bits there. So we've gone from 16,000 IP addresses to 1,024 prefix delegations. Now, if a prefix delegation gets assigned to a node, we're straight away artificially capping the maximum number of nodes to about a thousand, right? So let's say, and back to the bin packing, I want to put multiple prefix delegations onto a given node, let's say two. That means I've got a maximum of about 500 nodes in that subnet.
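A quick worked version of that arithmetic, as a minimal Python sketch; the /16, the per-AZ /18s, and two delegations per node are just the illustrative numbers used above:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")        # largest IPv4 CIDR you can attach to a VPC

# One subnet per availability zone: carve three /18s out of the /16.
az_subnets = list(vpc.subnets(new_prefix=18))[:3]
print(az_subnets[0].num_addresses)               # 16384 addresses per AZ subnet

# Prefix delegation hands out /28 blocks (16 IPs each) instead of single /32s,
# so a /18 only contains 2**(28 - 18) = 1024 possible delegations.
delegations_per_subnet = 2 ** (28 - 18)
print(delegations_per_subnet)                    # 1024

# At two prefix delegations per worker node, the subnet caps out around 512 nodes,
# long before you get anywhere near 16,000 pods.
print(delegations_per_subnet // 2)               # 512
```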

Speaker 1:

And then, just to make matters slightly more challenging, in AWS when you create an instance in a subnet, that ENI will be given a random IP address; it'll just be plucked from somewhere in the subnet. Now, when you start thinking about, I want to have all of these contiguous slash 28s to take the most advantage of the IP space that I've given the subnet, and you start spraying random IP addresses around the place, that makes it really challenging to find a slash 28, right? Like, we've got to map to these binary boundaries. One node in the wrong place kills 16 IP addresses.

Speaker 1:

So AWS have this other product called a CIDR reservation, and what you can do is put aside a chunk of space, and I believe in our case it's about three quarters of the subnet. We put that aside for prefix delegation, and the nodes get created in the upper quarter of that given subnet, and that gives us a bit of breathing room. The prefix delegations aren't being trodden on by the nodes, and it gives us a bit of capacity in there. But I guess, just to serve the point, we've gone from 16,000 pods, which you're never going to have in this subnet, to, okay, we've got 500 nodes, and that's perfectly realistic to have here. And if you need more of those prefix delegations per node, you're going to need a bigger subnet. And we've got VPCs which have multiple slash 16s slapped onto them, and so when you think about that in terms of the 10 slash 8 range, there's only 256 slash 16s that you have in that range, and I mentioned 100.64 before; that's definitely on our horizon.
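As a hedged illustration of that carve-out, an EC2 subnet CIDR reservation can be created along these lines with boto3; the subnet ID and the exact block reserved are hypothetical, and multiple reservations can be stacked to cover more of the subnet:

```python
import boto3

ec2 = boto3.client("ec2")

# Reserve a contiguous block of the subnet for /28 prefix delegations so that
# randomly placed node ENIs can't fragment it. Nodes then land in the
# unreserved remainder of the subnet.
ec2.create_subnet_cidr_reservation(
    SubnetId="subnet-0aaaaaaaaaaaaaaa0",              # hypothetical AZ subnet
    Cidr="10.0.0.0/19",                               # lower half of a 10.0.0.0/18 subnet
    ReservationType="prefix",                         # earmarked for prefix delegations
    Description="keep /28 prefix delegations contiguous",
)
```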

Speaker 1:

But better yet, let's go to IPv6, hey. So for the first time in my 20 years, there's a compelling use case for IPv6, and that's really cool. Now, I think it was the most recent reInvent, I don't think there were reInvent announcements as such, but certainly quite a lot of AWS services are starting to get more and more IPv6 and PrivateLink capabilities and that sort of stuff. So I think the time for IPv6 is nigh, dare I say it.

Speaker 3:

So, that being the case, then, Richard, how does that impact, and I imagine the answer is quite a lot, how does that impact your Kubernetes integration with the rest of the wider network? Because everybody knows, okay, we've got Kubernetes, and Kubernetes can talk to Kubernetes, and we've got our nodes, our pods, you know, like service mesh, everything's encased within Kubernetes. But what about once we have to talk to something legacy, something outside, essentially, the pod? Are we doing six-to-four NAT? What happens there?

Speaker 1:

That's definitely something that I'm actively thinking about. Like, let's say I make my IP address space problems go away: we're all at IPv6, we have IPv6-only pods, fantastic. I believe with the VPC CNI that comes with AWS, they get 169.254 addresses, so they can still do NAT44 to get to IPv4 resources. That might save a bit of trouble. We need to think about exactly what that's going to look like.

Speaker 1:

But to your point, though, Kubernetes is...

Speaker 1:

It's kind of a black box.

Speaker 1:

So previously, where we had, let's pick on Terraform, we had Terraform, and Terraform was our infrastructure as code tool, and it would go to the cloud and it would create an instance or an ECS task, whatever it is. And you would be able to, within a single Terraform ecosystem, reach out, create a security group for that ECS task or EC2 instance, do the same for an RDS database, and you would be able to say in that RDS database: hey, this security group that represents my EC2 instance, please allow that inbound on MySQL.

Speaker 1:

And because that's all part of the single ecosystem, it actually becomes quite easy and trivial to have a reasonable amount of security, for example in this case, and it's done in one place. Now, enter Kubernetes. To your point, Kubernetes does a lot of this for us. It creates these pods, and, depending on what controllers you run in your cluster, it might create your load balancers. And, I mean, if we say legacy, let's assume that you're still running on RDS databases which, for good reason, your pods now need to access. The challenge is now, how do you identify that pod to your database, where previously we were able to reference these security groups?
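For reference, the pattern being described, expressing database access in terms of the caller's security group rather than an IP range, looks roughly like this with boto3; both group IDs and the MySQL port are illustrative:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL in on the database's security group only from members of the
# application's security group. The app's identity is the group itself, so no
# IP ranges need to be known or kept up to date.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaa1111bbbb2222",                       # hypothetical database security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0cccc3333dddd4444"},          # hypothetical app instance/task group
        ],
    }],
)
```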

Speaker 3:

You've got to open it up too much, right? Yeah.

Speaker 1:

That pod is now represented by a security group on the node, and that means every pod on that node, depending on what you do with security groups, could theoretically have access to that RDS database.

Speaker 1:

You can't just... And there are some things that work around this. AWS has a, I'm not sure of the exact name, an AWS security group controller, which is kind of similar to ECS, if I recall, where it has the concept of a branch ENI which you can assign a security group to, if you use this controller. But if you don't use this controller, maybe you're rolling your own CNI, whatever it is.

Speaker 3:

Yeah or Cilium or Calico or Flannel or one of those.

Speaker 1:

Yeah, Calico, maybe Cilium, certainly those. And this is where it's really interesting watching Kubernetes and that ecosystem develop more in the network space. We've got things like network policies now which can, to some extent, dictate what you can and can't talk to. And I'm seeing a massive resurgence in the use of things like FQDN rules. Now, maybe people will disagree with this, but to me an FQDN rule was kind of the transition from the traditional firewall, which was your five-tuple rule for a very static internet.

Speaker 1:

To me, an FQDN rule was the transition from that to a proper next-gen firewall doing SNI or HTTP layer 7 inspection in a more dynamic world. But we're seeing a resurgence of that because it's reasonably easy to create.
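For a concrete picture of the vanilla network policy side of this, a minimal sketch using the Python Kubernetes client: pods matching a label may only egress to a database subnet on 3306. Namespace, labels and the CIDR are hypothetical, and FQDN-based rules of the kind discussed here are CNI-specific extensions (Cilium's toFQDNs, for example) rather than part of this core API.

# Sketch of a plain Kubernetes NetworkPolicy restricting egress from labelled
# pods to a database subnet on 3306. Namespace, labels and CIDR are hypothetical.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="orders-db-egress", namespace="orders"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        policy_types=["Egress"],
        egress=[client.V1NetworkPolicyEgressRule(
            to=[client.V1NetworkPolicyPeer(
                ip_block=client.V1IPBlock(cidr="10.64.8.0/24"),  # DB subnet
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=3306)],
        )],
    ),
)

net.create_namespaced_network_policy(namespace="orders", body=policy)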

Speaker 2:

I love that we've had to pretty much refer to RDS as legacy at this point, which is a service that's... I don't think...

Speaker 3:

Well, I actually was thinking, I forgot for a second that you guys were 100% in the cloud, but really I just meant outside, because service mesh has this challenge as well, right? Service mesh is extremely good at building app-layer security and connectivity between the pods with the sidecars, but what if you have to go egress? What if you have to go to something that's outside of the service mesh? It's still kind of clunky. Even with network policies it can be kind of clunky. So that's how I was thinking about it.

Speaker 1:

Yeah, you can't just, even if you're going to another cluster, unless you've got some sort of technology that bridges those clusters together. And you mentioned service mesh; there's a whole range of technologies out there, which is definitely a different podcast. But if you've got a separate cluster, how do you authenticate services between these two different clusters? Now we're getting into that authentication space.

Speaker 3:

Yeah, like Istio with the control plane and doing mTLS and all that crazy stuff.

Speaker 1:

Yeah, mTLS, like SPIFFE and SPIRE, is being talked about so much now. My first exposure to mTLS, I think, was in 2010; I was rolling out a Wi-Fi network that used effectively what's now being called mTLS, but back then it was EAP-TLS. It's cool again. mTLS is so hot right now.

Speaker 2:

I've never drawn that correlation. Yeah, I never thought about that, but yeah, I think... EAP-TLS.

Speaker 3:

mTLS. Like EAP-TLS and mTLS, I never put the two together, but you're absolutely right.

Speaker 1:

I mean, slightly different purposes perhaps, but ultimately it comes down to authentication. In the case of EAP-TLS, it was letting you onto the network. In the case of mTLS, it's letting you access, or not even letting you access my service necessarily, but it's at least identifying that you are who you say you are.
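As a bare-bones illustration of that identification step, here is a minimal mTLS server sketch using Python's standard ssl module; the handshake only completes if the client presents a certificate signed by a CA the server trusts. The certificate file paths and port are hypothetical.

# Minimal mTLS sketch: the server demands a client certificate, so the peer is
# identified before any application data flows. File paths are hypothetical.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # server identity
ctx.load_verify_locations(cafile="clients-ca.crt")                # CAs we trust
ctx.verify_mode = ssl.CERT_REQUIRED                               # the "m" in mTLS

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with ctx.wrap_socket(sock, server_side=True) as tls:
        conn, addr = tls.accept()
        # The verified client certificate tells us who connected.
        print(addr, conn.getpeercert().get("subject"))
        conn.close()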

Speaker 2:

I guess one last question, because you mentioned this concept of moving to everything being FQDN-based. I think about when I worked on the service provider side; like I said, I've never worked enterprise, but it's odd, when you're on the networking team, how much you reference stuff by IP address, right? There are native things that tick in your mind whenever you see, oh, this 10-dot-whatever, I know where that is, I know what that is, I probably even know what server it is and what's running on it, et cetera. But for you, where you're in this environment where IPs are pretty much meaningless, how does that mess with your frame of reference for what services you're talking about, what you're supporting, what infrastructure it is? Has that been a challenge, or has it actually been better in the long run?

Speaker 1:

Oh, that's a good question. Actually, I haven't really thought about that, but to maybe extend upon that as well: previously, networking was a world of, to use the cloud analogy, pets instead of cattle. You wouldn't give things silly names, necessarily, but you had a very strong naming convention. You would be able to identify from a device name what location that device is in, maybe what model and vendor it is, the function of that particular device. You would start to remember things. I still remember some of my customers' IP addressing plans: 10.32 is this big site in Sydney, for example. That's all gone. And where I was going with this is: enter the cloud. Everything is given, well, at least in the case of Amazon, a unique identifier that I cannot remember.

Speaker 1:

I think I remember the ID of one VPC, and it's a very old VPC so it has the short identifier. How do you remember these resources? I honestly do not know what the solution for that is, but maybe one thing to consider is storing that information in some sort of inventory which is used by engineers and has a degree of enrichment around it. Yeah, it could be a NetBox, a Nautobot. It could be, I mean, AWS IPAM is a tool these days.

Speaker 1:

Not quite in the same league, but it certainly does IP address planning, and automating that, I think, is absolutely essential in 2025.
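As a small sketch of what automating that planning can look like, assuming AWS IPAM and boto3, with a hypothetical pool ID and prefix size: ask the pool for the next free CIDR instead of tracking allocations by hand.

# Sketch: carve the next free /21 for a new VPC out of an AWS IPAM pool via
# boto3, rather than hand-tracking allocations. Pool ID and size are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

allocation = ec2.allocate_ipam_pool_cidr(
    IpamPoolId="ipam-pool-0123456789abcdef0",
    NetmaskLength=21,                 # size of the new VPC CIDR
    Description="orders-dev VPC",
)
print(allocation["IpamPoolAllocation"]["Cidr"])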

Speaker 3:

What scares me about the whole DNS thing, and don't get me wrong, I'm not saying there's any other way to do it, especially when we're talking about ephemeral resources like Kubernetes, is... how many times have we seen the meme, it couldn't be DNS?

Speaker 1:

It's not DNS. It was DNS, right.

Speaker 3:

And remember a few years ago when us-east-1 went down and everything broke all across the world for Amazon, because that's where a lot of the DNS stuff was hosted? It's such a, I don't know, such a linchpin thing.

Speaker 1:

The blast radius these days is very different, I think, to what it used to be. Everyone used to have their own data center, or at least be in a colo, and your blast radius would probably just be you. But these days, as you say... I remember, I think it was an S3 outage, maybe circa 2016. The impact to me was that the app I was using to order my coffee as I was walking to the coffee shop didn't work. I'm like, oh, that's weird. Oh well, I'll just do what a normal person does and order it at the coffee shop. But that was a huge issue.

Speaker 2:

Yeah, oh man. All right, well, I think we are coming up on time here. This has been a great conversation. It's funny how, I think, we took this from a level 100 talk to a level 400 talk very, very quickly, but it's been super fun for me. So I appreciate you coming on, Richard, and talking about this, and maybe, once again, we'll have you on later. Any closing comments or questions from you, Tim?

Speaker 1:

No, it's been great. Thanks so much for inviting me. It's been lots of fun for me too. All right, yeah, Richard, where can people find you online? Anything you want to plug, or let people know how to find you? Oh, I'm really boring when it comes to social media.

Speaker 3:

I think I might have a Mastodon account somewhere. Oh man, that's a blast. Yeah, maybe. But by our terminology, Mastodon's legacy.

Speaker 2:

At this point it's legacy, yeah. It's like more than two years old, right?

Speaker 1:

Uh, yeah, predominantly LinkedIn is where you'll find me. Awesome, cool, we'll get that in the show notes.

Speaker 2:

Yeah, we'll throw that in there, all right? Well, thanks again for listening. Hopefully this has been helpful, and if you want to leave us reviews, send us a comment, anything, please reach out to us on social media or cables to clouds at gmail.com. And with that, we'll take it away and we'll see you next week. Hi everyone, it's Chris and this has been the Cables to Clouds podcast. Thanks for tuning in today. If you enjoyed our show, please subscribe to us in your favorite podcatcher, as well as subscribe and turn on notifications for our YouTube channel to be notified of all our new episodes. Follow us on socials at Cables to Clouds. You can also visit our website for all of the show notes at cables2clouds.com. Thanks again for listening and see you next time.
