
Cables2Clouds
Join Chris and Tim as they delve into the Cloud Networking world! The goal of this podcast is to help Network Engineers with their Cloud journey. Follow us on Twitter @Cables2Clouds | Co-Hosts Twitter Handles: Chris - @bgp_mane | Tim - @juangolbez
Cloud Networking Basics: VPC - AWS vs Azure vs Google Cloud
What happens when three major cloud providers each reimagine network design from scratch? You get three completely different approaches to solving the same fundamental problem.
The foundation of cloud networking begins with the virtual containers that hold your resources: AWS's Virtual Private Clouds (VPCs), Azure's Virtual Networks (VNets), and Google Cloud's VPCs (yes, the same name, very different implementation). While they all serve the same basic purpose—providing logical isolation for your workloads—their design philosophies reveal profound differences in how each provider expects you to architect your solutions.
AWS took the explicit control approach. When you create subnets within an AWS VPC, you must assign each to a specific Availability Zone. This creates a vertical architecture pattern where you're deliberately placing resources in specific physical locations and designing resilience across those boundaries. Network engineers often find this intuitive because it matches traditional fault domain thinking. However, this design means you must account for cross-AZ data transfer costs and explicit resiliency patterns.
Azure flipped the script with their horizontal approach. By default, subnets span across all AZs in a region, with Microsoft's automation handling the resilience for you. This "let us handle the complexity" philosophy makes initial deployment simpler but provides less granular control. Meanwhile, Google Cloud went global, allowing a single VPC to span regions worldwide—an approach that simplifies global connectivity but introduces new challenges for security segmentation.
These architectural differences aren't merely academic—they fundamentally change how you design for resilience, manage costs, and implement security. The cloud introduced "toll booth" pricing for data movement, where crossing availability zones or regions incurs charges that didn't exist in traditional data centers. Understanding these nuances is crucial whether you're migrating existing networks or designing new ones.
Want to dive deeper into cloud networking concepts? Let us know what topics you'd like us to cover next as we explore how traditional networking skills translate to the cloud world.
Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Fortnightly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Talk like you would talk. Um hi, I'm going to talk like I would talk. Okay, this is usually how I talk.
Chris Miles:All right, that's fine. Okay, good, I was just making sure it wasn't like going to blow my eardrums out.
Tim McConnaughy:Oh yeah, all right then, that's fair. Okay, hey, and welcome back to another episode of the Cables to Clouds podcast. I'm Tim, and with me, as usual, is Chris. We're going to try something a little bit different. People have been asking for a while for us to kind of get into some of the basics of cloud networking. I mean, obviously we have a lot of listeners that are network engineers that are trying to learn cloud, or maybe they just have to deal with cloud, whether they want to learn it or not. And honestly, to be completely clear, we messed around with this a little bit when we first launched, but we couldn't seem to get the cadence right or it was just really boring, so we kind of shelved it for a long time. But we're going to give it another try. We kind of thought about it and we're going to give it a shot.
Tim McConnaughy:So today we're going to talk about VPCs and VNets and VCNs, just kind of the containers that clouds use so that customers can build their cloud networks. We're not going to get super deep. We're not going to get super high level either. We're really going to just talk about the design philosophies that all the different cloud providers use for how they're going to build, or rather allow customers to build, networks within their platforms. Because, at the end of the day, a CSP is the same as an MSP.
Tim McConnaughy:It's all managed services, right. And so the question is then, when you want to build a network using a managed service, what tools did the provider give you to be able to build with? And that's what we're going to get into today, starting with the VPC, which is what AWS and Google Cloud call it. Azure calls it a VNet, and then, of course, OCI calls it a VCN, and I think that's all of them. I can't think of any other ones. Chris, I think that's it, right?
Chris Miles:None, none that matter. There's only so many we can talk about.
Tim McConnaughy:So we'll see. Yeah, yeah, okay, fair, fair. So let's just get into it and let's, you know, basically define what that is, right. So, anybody who creates, whether it be an infrastructure-as-a-service type of service where you're creating VM workloads, or you're just creating some location to put data, whatever that is, when you're creating or consuming services from a cloud provider, at some point you're going to have to create a VPC. I'm just going to say VPC, but we're talking about VPCs, VNets and VCNs, right? I just don't want to say all three all the time as we go through this; it's going to get really old really fast.
Chris Miles:So I'll just say, you can't even use an acronym to describe them all, because it would just be all V's. The triple V's, guys.
Tim McConnaughy:So that's, again, the basic idea: it's a container, a container for whatever you're going to build in the cloud. Now, some services are serverless, meaning that you don't actually create and hold on to a particular piece of infrastructure, like AWS Lambda, or Azure, you know, it's not Bicep, hold on, save me, because I can't remember the name. Functions. Azure Functions. So yeah, just code that you kind of generate, and then the code does something and there's an outcome. You don't need a VNet or VPC for that, necessarily, right? So it's really focused on you needing to build some sort of infrastructure, or some, you know, holding pattern for something that you're going to do inside your own cloud network.
Chris Miles:So yeah, yeah. A lot of times, I feel like when you're first learning cloud, there's a lot of similarities drawn, like a VPC, or virtual private cloud, is your instance of a data center in the cloud. And, we'll get into this a little bit later, in some scenarios I like that analogy, but in some I don't, because I think of a data center as a very broad, all-encompassing thing, like everything is in a data center. Maybe in Google you do have, you know, one of those and it contains everything, whereas in AWS you might pick it apart and compartmentalize it, et cetera. So a virtual data center is probably the closest thing, but it can be something as small as just one network, one small subset of the data center, right? So I think that's important to remember when you're kind of designing these out.
Tim McConnaughy:Yeah, I mean, in the book that I wrote for network engineers, a few years ago now, I think I specifically said you can think of it more like a network with a router connected to it, and all of the resources that would be connected to that router. It's fairly close.
Tim McConnaughy:Now sometimes that would be like a full data center, depending on the size of the data center, or sometimes that's just a piece of a data center. So, yeah, good explanation in both cases. So let's talk a little bit about that shift from traditional networking into the cloud network piece. For me, the biggest one is this idea of virtualizing the network. Right, we've used virtual networks to some degree, if we have any experience with, like, Cisco ACI, or what is the one that Juniper has, oh, my God.
Tim McConnaughy:Apstra. Sorry, yeah, my brain's not working today, man. Or even, like, VMware, if we're talking about vNICs and all of that, with virtual networking inside a VMware environment, right. But even then, all of those virtualized infrastructure things fail to reach the level of abstraction that the cloud service providers put you at, because, again, it's a managed service, right? So the idea of a hyperplane, where the hyperplane is this gigantic automation system that's orchestrating not one, two, three, but hundreds of racks and data centers together, right?
Chris Miles:Yeah, because all the cloud providers, and I think we've probably talked about this at some point, have introduced this idea of availability zones, or AZs. And, you know, under the hood, an AZ is essentially multiple data centers that have different levels of fault tolerance and are not dependent on each other. Whereas if you, as a customer, went to build that yourself, like if you had a regional deployment of something you wanted across three different data centers, that's a one-to-two-year project on its own, right? Like, that takes a long time to design something like that. Whereas in the cloud, you do have the option to basically just be like, oh well, you know, in this region, yeah, let's put these three things in three different data centers.
Chris Miles:Now, that does have its drawbacks, and there's still complexity at the end of the day, which you end up paying for. But yeah, like you said, it's a mindset shift, for sure.
Tim McConnaughy:So each provider kind of redesigned, or reimagined, cloud networking from scratch when they were building their provider services, right. So let's get into that a little bit. It's going to feel very high level, but I hope, if any of you have been using or playing with the clouds, it'll start to make sense, or, if not, it'll at least give you an idea of what to expect when you go into any one of these clouds. And again, this is more like their design philosophy of how they make their cloud networking, or not just networking, but for our purposes, cloud networking, really just the whole infrastructure available to you, right?
Tim McConnaughy:So AWS was first, obviously AWS was first, and they have first-mover advantage, and everybody else got a little bit of the, what do you call it, not tech debt, but, you know, everybody else got to see what was good and what was bad with what AWS did, and then decide if they wanted to do it differently. You know what I mean. So, yeah, AWS kind of has these virtualized familiar concepts, right, familiar concepts of networking that have been virtualized. So we were talking about AZs, or availability zones, which are one or more discrete data centers. In AWS, when you build a VPC, that container lives within a region, and by region we're talking about a true geographic region, but also a logical grouping of data centers, of AZs, that AWS uses. So, as an example, the biggest one, and the first one, actually, I don't know if it's the biggest anymore, but certainly the most important to AWS, is still US East 1, right, Northern Virginia.
Chris Miles:I think it's still probably the biggest right.
Tim McConnaughy:So, again, within US East 1, we have multiple AZs, and altogether the geographical grouping of all these AZs makes up the region that is US East 1. So from AWS's perspective, a VPC exists within a region. So if I create a VPC, that hyperplane object, if you will, exists within every data center that is inside of US East 1, right.
Chris Miles:By default. You don't have to explicitly define it. And I think when AWS declares a region, it's at least three AZs, typically.
Tim McConnaughy:I think so.
Chris Miles:But US East 1, for example, goes up to six, so it obviously has a higher level of redundancy. But, like we said, that's just by default, and you get to pick and choose how much you want to actually utilize that. And the other cloud providers do the exact same thing, right? All these AZs, or specific data centers, are tied to that specific region as a whole.
Tim McConnaughy:That's right, that's right. Although, as we talk about each provider, you'll see how they chose to use that differently. Even though they all have that idea of, like, hey, here's an AZ and here's a region, they chose to use it differently when it came time to do the infrastructure for it. So, in the case of AWS, let's just start there, right. AWS ties each subnet to an availability zone, and that's an explicit choice that you make when you create a subnet.
Chris Miles:So maybe, just to take a quick step back: when you define this logical container, this network container, per cloud provider, doesn't matter what it is, you basically assign a CIDR range. Moving to cloud was the weirdest thing.
Chris Miles:We started saying CIDR instead of subnet. I don't know why this happened, but CIDR was the way to describe a network. But you basically assign a CIDR range, which is a supernet that's assigned to the entire VPC, or the entire virtual networking construct, and then you kind of chop that up into separate subnets, right? And, like Tim is saying, every time you chop that up and you put a subnet into a VPC, you have to define what data center, or what availability zone, it belongs to. In AWS, specifically.
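The supernet-and-subnets idea Chris describes can be sketched with Python's standard `ipaddress` module. This is a minimal illustration; the CIDR, the /20 prefix length, and the AZ names are assumed values for the example, not pulled from any real deployment.

```python
import ipaddress

# Hypothetical example: a VPC gets a /16 supernet as its CIDR, which is
# then chopped into /20 subnets, one pinned to each Availability Zone.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]  # illustrative AZ names

# Pair the first three /20 blocks of the supernet with the three AZs
subnets = dict(zip(azs, vpc_cidr.subnets(new_prefix=20)))

for az, subnet in subnets.items():
    print(f"{az}: {subnet}")
# us-east-1a: 10.0.0.0/20
# us-east-1b: 10.0.16.0/20
# us-east-1c: 10.0.32.0/20
```

Every carved-out subnet stays inside the VPC's supernet, which is the "exists within the VPC, not outside it" point made below.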
Tim McConnaughy:Yes, that's a good point. I forgot to mention the CIDR piece. So a subnet is just like a subnet; that hasn't changed, right? What's changed with the cloud piece is that, again, we're starting with a supernet, and that supernet exists within the VPC, but not outside of it. That's the whole construct piece, right. And we're not going to talk a lot about how each VPC can actually have the same CIDR, which causes all sorts of network design problems down the road.
Chris Miles:But anyway, long story short, don't do that. Yeah, try not to do that.
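The overlapping-CIDR gotcha just mentioned is easy to check up front. A small sketch with Python's `ipaddress` module; the VPC CIDRs are hypothetical:

```python
import ipaddress

# Two hypothetical VPCs created with the default "just grab a /16" habit,
# plus one that picked a non-overlapping range.
vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.0.0.0/16")
vpc_c = ipaddress.ip_network("10.1.0.0/16")

# Overlapping CIDRs can't be cleanly peered or routed between later on.
print(vpc_a.overlaps(vpc_b))  # True  -- the design problem to avoid
print(vpc_a.overlaps(vpc_c))  # False -- safe to connect down the road
```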
Tim McConnaughy:But that was a design choice that I think all of them made, really, except for Google, ultimately. They were thinking, to begin with, that an entire enterprise would only ever need one. So they were like, oh well, just give yourself a slash 16 network and just go to town. But what ended up happening, of course, is that there were administrative or security or other logical reasons to break this up. At least in AWS, the way they designed their constructs made it easier to design by breaking it up, and then you get into weird things like that. But let's get back to it. So subnets are assigned to AZs, and that's an explicit choice that you make in AWS. But do you want to tell us how they did that differently in Azure? Microsoft chose to do this a little bit differently.
Chris Miles:Yeah, so Azure still has the same concept, right? All the regional data centers are divided into specific availability zones, but by default, any subnet that you define within Azure spans all of those data centers. It spans across all of the available AZs. Whereas if you think about where you logically put resources in that virtual container, in AWS, when you put virtual machines in there, the infrastructure that's running in there, a virtual machine is always going to be tied to an AZ. That means it's always going to belong to a subnet that is tied to that same AZ, and everything is kind of locked in this vertical pattern, right? Whereas Azure is horizontal. You could have virtual machines deployed in separate subnets, but all within the same availability zone, and we won't get too far into it.
Chris Miles:But this kind of led to them also developing something called availability sets. Because you have this nature of being able to span across multiple AZs, you have to define what, logically, between those virtual machines is kind of fate-shared, I guess is the word I'll use there.
Tim McConnaughy:That's right.
Chris Miles:So you have to, you know, kind of go to extra lengths to make sure that those things are consistently available, you know, with maintenance and things like that. But that is a fundamental differentiator between AWS and Azure, and if you're moving from one to the other, it kind of changes the way that you have to think about it, right?
Tim McConnaughy:Yeah, and especially how you have to think about how you deploy your infrastructure, right?
Tim McConnaughy:So in AWS, because you're very explicit that this subnet is going to be assigned to this availability zone, think about what that means from a resiliency design standpoint. You're going to deploy, essentially, an A side and a B side, like a red-blue, like we would think in traditional networking. That makes it very logical. So what I find is that most network engineers find AWS very logical, coming straight from network engineering, because of things like that. As network engineers, we think about resilience a great deal, and the idea of explicit resilience is very, very normal for us as engineers to be thinking about, right? Versus Microsoft, like Chris said, Microsoft took this idea that, hey, it doesn't matter where you deploy it, because we have it everywhere, and if there is a failure in any of these availability zones, we're going to move your virtual machines for you to another availability zone, right?
Chris Miles:Yeah, AWS, fundamentally, a lot of people compare it to Legos, right? We give you the building blocks and you can do whatever you want with those; we give you that flexibility. Whereas Azure kind of is like, oh well, yeah, you can explicitly put stuff in certain configurations that will align to exactly what you want, but let us handle a lot of that stuff, right? And then there's default behaviors that drastically change things, so you've got to be very aware of those.
Tim McConnaughy:Yeah, absolutely. So let's talk about Google Cloud as well, because Google Cloud completely throws the book out in terms of how anybody designed this, right? With Google Cloud, their idea was, let's create one global VPC for customers. And it's not that you can only have one VPC, but when I say global VPC, what I'm talking about is a true geographically global construct. Remember I said all the other providers do region locking for a VPC, VNet, VCN, whatever. Google Cloud chose to make their construct global, and then, when you assign subnets, your subnets are actually associated with different regions, like global regions of the world, basically.
Chris Miles:Yeah, so they took the idea kind of the same way Azure does, where the subnet can, you know, span across multiple AZs within the region. But whenever you cross regions, those things were always in different virtual networks from an Azure perspective, whereas Google is like, no, everything's in the same VPC. So that can obviously dictate how you need to facilitate routing and firewalling and things like that between these things. It's just another thing to be considerate of. It's like this expanding bubble of what the VPC can contain across the three major providers.
Tim McConnaughy:Yeah, and again, as you listen to this, if you knew it, cool; if you didn't, cool. As you're hearing about the different ways the cloud providers are providing the same service, because, at the end of the day, a VPC is a VPC is a VPC, you know, VNet, whatever, it's a logical construct for you to put your resources inside, right.
Tim McConnaughy:So, but think about, from a design perspective, or just even an implementation perspective, like what does that look like when you're deploying resources into a Google global VPC and have to deal with, like latency concerns, right?
Tim McConnaughy:So you're probably not going to put in a red-blue deployment where they're, necessarily, in different subnets that are one in Tokyo and one in London or something like that. That wouldn't make any sense, assuming that these two ever had to talk to each other, or that they were serving the same customers, or whatever. But also think about the lack of complexity, probably, from a routing perspective, of being able to have everything just kind of connected in the same VPC, but at the same time, the incredible complexity that's going to add if you want to do any kind of segmentation or inspection between these workloads, right? And then flip the script to go the opposite way, with AWS versus, you know, Azure versus Google Cloud, where you have the opposite problem: now you have routing concerns moving between VPCs, and the inspection doesn't get particularly easier in that case either; it just becomes part of the routing, actually, is what ends up happening.
Chris Miles:Yeah, for sure. I think another point, kind of continuing on these differentiating philosophies between the three major cloud providers, is the concept of the VPC, or virtual network, router, and how that participates in the conversation between the infrastructure that resides within the logical container. So, you know, within AWS there's a concept of a VPC router. You don't actually see or configure a construct called a VPC router; the way you do that is by defining route tables and assigning them to subnets, essentially, right? But if you've ever worked on a router, you know what a route table is. It's a very definitive list of destination prefixes, next hops, et cetera. So in this scenario, the next hop could be something outside the VPC, it could be something within the VPC, it could be an endpoint, like if there's a service that's deployed within that VPC that you need access to, et cetera. And Azure and Google have the same construct, right? There is a router that exists within that VPC. It's a virtual construct, and the way you control it is by defining those route tables.
Chris Miles:One thing that is unique, that I'll call out specifically with AWS compared to the other major two, is when that virtual router is involved in the conversation. If I think logically, as a network engineer my entire life, I'm like, oh, these two things are on the same subnet, they're going to talk directly to each other; nothing needs to sit in between that conversation. That is true in AWS: you can have direct peer-to-peer communication, over a virtualized layer two thing, which we won't get into exactly.
Chris Miles:There's no layer two in cloud, yeah, like we didn't even touch on that. But in Azure and GCP, that is not the case. The virtual router actually is involved in every single conversation, and if you think about that, it can obviously cause issues when you didn't expect there to be issues, right? If you just assume things on the same subnet should have full east-west capability to talk to each other, that's not always the case, right? So there's that concept that comes into play as well.
Tim McConnaughy:Yeah, for sure. And for completeness, we'll speak briefly about OCI, because the truth is, a lot of what OCI does is very similar; from a routing perspective, it feels a little bit more like AWS, though it is slightly different. The big difference to point out with OCI, and neither of us are OCI experts, by the way, just FYI.
Chris Miles:Yeah, I was going to say, the reason we didn't really include OCI on this wasn't a diss at OCI; it's just the one we know the least.
Tim McConnaughy:Yeah, but I did some research on this because I wanted to include OCI, because OCI is becoming a major player now, especially with the AI workload stuff that they're doing. The big difference with OCI is that their concept of a VCN is really tied to these things called compartments, which don't really exist meaningfully in the other clouds. It's the idea of not just tenancy, but administrative policy. So it's very, very tied into this idea that if you want to connect two of anything, if you want to connect anything together, there's a whole compartmentalized policy that sits on top of all of that. So I'm not going to get into that.
Tim McConnaughy:We could do a whole show, probably, on OCI when we become better at it. Actually, probably what we'll do is reach out to some of our friends at OCI and have that conversation; we know several. But again, as Chris said, not to diss OCI, it's just the one that both of us know the least. I think I had one customer that had OCI, and I had to learn just enough about it to be dangerous, max. Yeah, so.
Tim McConnaughy:But I think that will change, especially as OCI is coming up in the world now; they're definitely getting a lot more adoption. So, all right, let's talk a little bit now about how the philosophy means the behavior is different, right? So, for example, going back to AWS, with manual AZ design, everything is very explicit. What that ends up meaning is that you, as a cloud network engineer, whatever the title is, when you're building cloud networks, have to be very, very specific. You have to be very granular, but also very predictive with it, which is honestly, like I said, what network engineers really like. Because somebody told me this years ago, and I still think it's pretty good today: a network is like a dog, right? You want to train your dog. You don't want to let your dog run around rampant, just doing whatever it wants. You want the dog to do the tricks when you tell it. So that's very normal for us, right?
Tim McConnaughy:But with Azure, again, like Chris was saying, it's very much a let-us-handle-the-complex-parts philosophy, right? You build it, and we'll handle all of the resiliency for you. It's easier HA, but you get less control over it. Now, in Azure you actually can force objects to be deployed in certain AZs, but they actually recommend you don't do that, because it breaks their HA automation. Basically, then it's stuck to that AZ, and if that AZ goes down, it won't move. So again, different design philosophy. AWS is like, that's how it is, and that's why you should design around it. And Microsoft is like, you don't need to design around it, we'll handle it. Whether that ends up being the case, I don't know. And then Google is just like, global networking: just put everything in the same network and we will do all of the work for you. I think we've said many times that we feel like Google Cloud networking is much more developer focused and friendly, because they don't really want you to necessarily care about it.
Chris Miles:Honestly, unfortunately, but yeah.
Tim McConnaughy:Oh, anyway, all right. So let's wrap up with some gotchas that you can run into, even with something as simple as VPC design and the philosophies of the different cloud providers. If you don't, what's the word I'm looking for, conform to the philosophy they used when they designed how they want you to build your cloud, basically, if you rage against that, a lot of times it'll work, but it ends up being a lot more brittle.
Tim McConnaughy:Like when you're using third-party devices to overcome some of the design choices. Sometimes you have to do that, right? Like, if you can only have 500 routes in the route table and that's what they give you, but you need, like, 10,000, you have to use a third-party something that can support it. But you should still design your third-party implementation with the design philosophy of that cloud in mind. Does that make sense? Hopefully that made sense.
Chris Miles:No, yeah, I see what you mean, yeah.
Tim McConnaughy:All right, so, okay. So, AWS: the big one is, because they expect you to be very explicit, from a DR perspective and also from an HA-resiliency perspective, they want you pinning resources to different availability zones, running kind of like an A-B or a red-blue. That leaves you, as the network designer, to deal with traffic that will cross availability zones. Sometimes that's a design decision; you just need to be able to do that for resiliency purposes. Sometimes you're supposed to build, basically, application rails, if you will, where you've got your X number of tiers of application, and the A side should never talk to the B side and vice versa. But depending on your application, depending on your use case, maybe that's not feasible, right? And then, down the road, we might talk about firewalls and stuff, where that could be part of your resiliency strategy.
Tim McConnaughy:If you have an actual firewall and that virtual machine dies, what happens to the traffic on that rail? Does the rail just die with it, or do we cross availability zones? The reason I bring this up is that cross-AZ traffic, cross-region traffic, and basically any transfer that leaves the data center is money with AWS. They charge you for it. It's an infinitesimally small amount per gigabyte, but it adds up.
Chris Miles:It can add up very quickly if you're doing synchronous replication between two data centers.
Chris Miles:That can eat up a lot of money very fast, right? And it has. So in that same vein, as I mentioned, it's just a click of a button to make something multi-availability-zone, multi-data-center, or multi-region. That all comes at a cost. They give you these building blocks, but the building blocks are no holds barred, you can build whatever you want. So you can build your own noose if you're not careful.
Chris Miles:Right, so you've got to be very aware of that in the grand scheme of things, because I've said this many times: the newest thing that cloud networking brought to the table was this concept of a toll booth.
Chris Miles:No matter where you're transferring data in and out of their data centers, you're paying the toll booth, whether it be one cent per gigabyte or two cents per gigabyte, what have you. It doesn't sound like much, but you haven't been paying that in your physical data center for the longest time, right? You paid for a pipe, and you could either use all of the pipe or none of it. Whereas now it's like, hey, we built these pipes, and as you put oil through them, we're going to charge you for every gallon. So it's a different way to think about it. Yep, absolutely.
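The toll-booth math is worth running once. A minimal sketch, assuming a placeholder rate of $0.01/GB (not a quote of any provider's actual pricing), shows how a "tiny" per-gigabyte charge compounds under steady replication traffic:

```python
# Back-of-the-envelope "toll booth" math: a per-gigabyte transfer charge
# sounds tiny until it's multiplied by steady replication traffic.
# The $0.01/GB default is a placeholder, not any provider's real price.

def monthly_transfer_cost(gb_per_day: float, usd_per_gb: float = 0.01) -> float:
    """Cost of one month (30 days) of cross-AZ/cross-region data transfer."""
    return round(gb_per_day * 30 * usd_per_gb, 2)

# 2 TB/day of synchronous replication at a penny per GB:
print(monthly_transfer_cost(2000))  # 600.0 USD/month, from "just" $0.01/GB
```

That is the contrast with the fixed-pipe model Chris describes: in a physical data center the pipe was a sunk cost, while in cloud every gigabyte through it is metered.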
Tim McConnaughy:And to be clear, every cloud provider has some version of this, but you have to really think about it in AWS versus, say, Azure, where you don't even deploy into a specific availability zone. You're generally just deploying gear and trusting Microsoft to take care of it, but you're using availability sets to say, okay, Azure, this is my A rail, this is my B rail. They might exist in the same AZ, maybe they're in the same data center, maybe the same rack, we don't know. But when Azure does maintenance, it considers that these two devices need to not be down at the same time, and the Azure automation handles that. So you're handing over the keys.
Tim McConnaughy:The trade-off is that you're handing the keys over to Azure, to automation. There's no person making this decision. Azure automation is deciding, oh, I need to reboot this rack. Or maybe Azure's not deciding at all, maybe Godzilla steps on the data center, that AZ goes down, and Microsoft automatically moves those workloads to another data center. But the application, which you probably didn't design because you're the network engineer in this case, may not be able to handle the latency that just appeared from having to move to a second data center and connect to everything else. So it gives you less granularity and a little more easy resiliency, but it's the classic Faustian bargain: you hand over the keys and trust that, if it's not in my control, hopefully they can take care of it.
Chris Miles:And just to be clear, AWS has this too. They have services and constructs that let you fail over in the same scenario, but it's just not as implicit, right? You have to build it.
Tim McConnaughy:That's it.
Chris Miles:So that's really the only minute difference. I just want to make sure it doesn't sound like we're saying AWS is less resilient. Obviously, like you said, they had first-mover advantage and they chose a path, which was to let the customer build anything and absolutely everything, whereas Azure came back with a different approach. But I will say there are also trends in the way the cloud providers are moving, and this is much more widespread than just the networking piece, but there are certainly differentiators that make the cloud providers more appealing to certain customers, or types of customers, verticals, et cetera.
Chris Miles:Because if you're major cloud provider B and you see cloud provider A start charging for a certain thing, and people gripe about it but provider A ends up making record margins, eventually cloud provider B is going to start charging for that as well, right? So just because things are free or different today, that can drastically change. Back to the concept of paying for cross-AZ or cross-region traffic: you can build a peering between these logical containers, so you can have a VNet peering or a VPC peering. Keep me honest here, but I think with VPC peering, as long as you stay within the AZ, you don't pay for that.
Tim McConnaughy:Whereas in Azure, if you use a VNet peering...
Chris Miles:They actually do charge for that, on top of everything else. So there's nuance to everything.
Tim McConnaughy:Yeah, there's a lot more nuance than we could possibly get into in a podcast episode, but this is the point we're making: design philosophies. So AWS does exactly that. VPC peering counts as data transfer only if the data crosses AZs. Which is to say, take two different VPCs, put a workload in each, and maybe they need to talk. If the workload in VPC A is deployed in subnet A, which is in AZ A, and the workload in VPC B is in subnet B, in AZ B, those are two different data centers. So when those two workloads talk, even though it's entirely across AWS's backbone, still on their pipes, it's crossing AZs and you pay for it. Whereas if everything was in AZ A, that VPC peering traffic would be completely free, because it's not even leaving the same data center, or discrete group of data centers, as they call it, right?
Chris Miles:So yeah, that's just hopping across the hypervisor, exactly.
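The billing rule Tim walks through can be captured as a tiny decision function. This is a sketch of the logic only: the function name and the per-GB rate are invented for illustration, not real AWS pricing or API.

```python
# Sketch of the rule described above: VPC-peered traffic is free while both
# workloads sit in the same AZ, and metered once it crosses AZs, even though
# it never leaves the provider's backbone. Rate is illustrative, not real pricing.

def peering_transfer_cost(src_az: str, dst_az: str, gb: float,
                          usd_per_gb: float = 0.01) -> float:
    """Return the data-transfer charge for gb of traffic over a VPC peering."""
    if src_az == dst_az:
        return 0.0  # same AZ: traffic stays in one data-center group, no charge
    return round(gb * usd_per_gb, 2)  # cross-AZ: every gigabyte is metered

print(peering_transfer_cost("us-east-1a", "us-east-1a", 500))  # 0.0
print(peering_transfer_cost("us-east-1a", "us-east-1b", 500))  # 5.0
```

The takeaway is that the peering construct itself isn't what you pay for; the AZ placement of the two endpoints is what flips the meter on.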
Tim McConnaughy:And this is the point, right. Then you have Google, where it's all global, logically all in the same VPC. From your perspective it's all the same, but "between subnets" there really means between geographical locations, or at least between data centers. So why don't we wrap up real quick with some misconceptions, things people get wrong or assume based on the names and how they expect it to work, and then we'll wrap it up. You want to take this one?
Chris Miles:Yeah, sure. A couple that we hear a lot, as far as misconceptions go: people talk about things within a VPC like, oh, a VPC or a subnet is kind of just like a VLAN. But going back to that piece I talked about before with the VPC or virtual network router, that can very much not be the case. Let's not talk about VLANs, man. In cloud, it's all layer 3. There are a couple of scenarios where you need to identify VLANs, but that's only when you're building physical connectivity into a data center and you need to define the VLAN.
Chris Miles:But other than that, let's not talk about VLANs. Everything is segmented either at a virtual layer, where you don't need a VLAN, or it's all layer 3 segmentation, right? And every provider breaks the "it's just like a VLAN" idea in a different way. So if we want to get down into the nuts and bolts, that's not really true. There's also this idea of, well, if I want to change it later, that'll be super easy. And that's relatively true most of the time.
Chris Miles:However, there are certain things that are immutable per provider, things that once you put them in, you cannot change without completely deleting the construct. Now, we won't sit here and list each one of those individually, because we could be here all day. But while the cloud makes things easier, there are also dependencies built on the providers' back ends where they can't just swap things out. You have to be aware of when that comes into play. So that's something you should be very considerate of. Any additional ones you can think of, Tim?
Tim McConnaughy:Well, just speaking about VPCs themselves: a VPC is immutable in that once you create it and assign a CIDR to it, you can add new CIDRs, but you can't change the CIDR that was originally assigned. You're done.
Tim McConnaughy:That's every provider, you know. And if you want to change the CIDR of a VPC, not just add a second one but fundamentally change it, you have to destroy every resource in that VPC, delete the VPC, and then recreate it with a new CIDR. If you think about it, if you've deployed a whole application stack in that VPC, that's not a small change, right? If you've designed it right, then hopefully there's another VPC, or you just stand up another VPC and replicate the application. That's really more what we're talking about, truthfully, in cloud.
Tim McConnaughy:You're not going to take downtime where you destroy something and then rebuild it, right? You're just going to build another one. Remember, I said you can have VPCs with the same CIDRs and all that other stuff. You're going to build another one, move all your workloads over, and then tear the old one down. But the point is, I know cloud is agile, but it's not infinitely mutable. There are things, especially with VPCs, where that basic construct is an immutable construct.
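The immutability Tim describes can be modeled as a class whose primary CIDR has no setter. This is a toy model, the class and method names are invented for illustration, though the behavior mirrors AWS, where `aws ec2 associate-vpc-cidr-block` can add secondary ranges but the primary CIDR is fixed at creation.

```python
# Toy model of VPC CIDR immutability: the primary CIDR is set once at
# creation and exposed read-only; secondary CIDRs may be associated later.
# Class/method names are hypothetical, not a real cloud SDK.

class Vpc:
    def __init__(self, primary_cidr: str):
        self._primary_cidr = primary_cidr       # set once, never changed
        self.secondary_cidrs: list[str] = []

    @property
    def primary_cidr(self) -> str:
        # read-only: no setter, so the only way to "change" it is to
        # delete this VPC and create a new one
        return self._primary_cidr

    def associate_cidr(self, cidr: str) -> None:
        """Adding a secondary range is allowed, like associate-vpc-cidr-block."""
        self.secondary_cidrs.append(cidr)

vpc = Vpc("10.0.0.0/16")
vpc.associate_cidr("10.1.0.0/16")  # fine: append a secondary range
print(vpc.primary_cidr)            # the original 10.0.0.0/16, frozen at creation
```

Attempting `vpc.primary_cidr = "10.2.0.0/16"` raises `AttributeError`, which is the point: the construct is designed so the replacement workflow is build-new, migrate, tear down old.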
Chris Miles:Maybe if you're a very important customer and you call support, there are probably some strings they can pull, but you don't want that to be part of your regular workflow. I don't know about you, but I don't like spending time on the phone, so I wouldn't make that a habit.
Tim McConnaughy:So we'll go ahead and wrap up. If our listeners enjoyed this type of format, or if we hit too high, hit too low, or didn't hit the right things, we would love some feedback. We like the idea of digging into some of this, especially how it's different between providers, and how you, as a network designer, have to think about the fact that it's different, what you're going to build with it, and what it lets you do or doesn't let you do. If we're hitting the right notes, let us know and we'll do more like this.
Chris Miles:Yeah, we know our listeners are at different stages in their careers. Some people are absolute beginners, some are very seasoned vets, and the same applies to cloud. Some people are just getting started with cloud; some have only been doing cloud for the last five or ten years or so. So we want to go over some of the base-layer concepts, and maybe we'll turn this into a series.
Chris Miles:We touched on the most basic thing here, which is the VPC, but we can start talking about inter-VPC or inter-VNet networking, services like Transit Gateway, Cloud WAN, Virtual WAN, et cetera. I think we'll probably expand into some of that, talk about some of the security concepts, draw analogs to traditional networking, and cover network segmentation and so on. That's where we're headed, or where we're thinking we're going to head. So we would love some feedback. Please reach out: email, Twitter/X, Bluesky, LinkedIn, whatever. Just send us a PM and let us know. Yeah, appreciate it. Okay.
Tim McConnaughy:Let's go ahead and wrap up. Speaking of which, Chris already gave the list of places where we want you to follow us.
Chris Miles:On YouTube as well. You can comment on YouTube, of course, if you want.
Tim McConnaughy:Yeah, please feel free. It'd be nice. We usually just get the spam ones.
Chris Miles:No Nigerian princes, please.
Tim McConnaughy:Yeah, yeah, all right, we'll wrap it here and we will see you guys in the next episode.