Cables2Clouds

What's the Deal with AI Agents?

Cables2Clouds Episode 52


The conversation centers on the transformative role of AI agents in shaping the tech landscape. Through expert insights and practical examples, the episode explores AI agents' functionalities, their implications for the workforce, and the ethical considerations that accompany their development.

• Introduction of AI agents and their significance 
• Evolution of AI technology from simple models to complex agents 
• Practical applications and examples of AI agents in various fields 
• Mechanics of building and utilizing AI agents 
• Considerations regarding workforce changes and AI's augmentative potential 
• Discussion surrounding the ethical implications and risks associated with AI 
• Encouragement for listeners to engage with AI agent development and experimentation


How to connect with John Capobianco:

https://bsky.app/profile/automateyournetwork.ca

https://www.linkedin.com/in/john-capobianco-644a1515/

Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

Check out the Fortnightly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj

Tim:

Hello and welcome back to another episode of the Cables2Clouds podcast. As usual, I am your co-host, Tim McConaughey, at carpe-dmvpn on Blue Sky, and with me, as always, is my amazing, wonderful co-host. I ran out of words for you already, Chris.

Chris:

That was a very happy-go-lucky intro.

Tim:

Hey, somebody told me yesterday I had a face made for radio, so I figure I need the voice to match. But yeah, so Chris Miles is with me, at BGP Main on Blue Sky as well, and with us we have a returning guest and a friend of the podcast, John Capobianco. I don't remember, John, are you on Blue Sky?

John:

I am, and I love that you can register with your own DNS if you've got a domain. I love that. I'm not sure if it's just my name, automateyournetwork, on Blue Sky, but that's where you can find me now. I've been hanging out there a lot more, yeah.

Tim:

Yeah, I like the vibe there more than the other place. So anyway, let's get right into it. We brought John back because he has been doing some pretty cool stuff, and honestly, given the listener base we have and the circles we're all in, some of you may have already seen this, but we thought it was really important to bring John on to talk about AI agents, and specifically AI agents in the context of: what is an AI agent? Where does it fit inside the insanely fast-growing AI technology stack, if you will? Most people are still talking to ChatGPT or Claude or whatever, and so the idea of an agent is a little strange. People don't know where it fits and how they should use it. So yeah, John, I won't stand on ceremony, man. Let's get right into it.

John:

Yeah, let's get into it. So I know there's been a lot of hype. Even myself, I've had predictions for 2025, and AI agents was my number one. I think it's important for people to understand how we got here and what an agent is. I know there are lots of different definitions. Here's one thing to keep in mind: I think agents are like AI 2.0. If we roll that back, AI 1.0, I would say, is making an API call to a large language model. 1.5 would be retrieval-augmented generation and the other retrieval-augmented approaches that you covered very awesomely, Tim. I watched your session. I thought it was great.

Tim:

Thanks, sid, I appreciate that.

John:

No, I really liked the library and the magazine comparison. And now, because of the advancements in the models mainly, we can do agentic AI, or AI agents. So I found this wonderful paper. This is probably the most succinct one, and it isn't just a Google search result; it's from Google. Google put out a paper that's simply called Agents, written by authors at Google.

John:

And they say it's the combination of reasoning, logic and access to external information, all connected to a generative AI model, that invokes the concept of an agent. I think that's probably the most succinct definition I've found. So, much like RAG, we can do external calling. Based on my work and what I've figured out over the past few weeks: you decorate tools. Here's an example. You might make a little calculator function in Python, you know, x plus y, or a multiplication function. You literally decorate that as a tool. You say, this is a tool, and then in your prompt to the AI you give it specific instructions and make it aware of the tools you're providing. So you say, here's the prompt, and by the way, you have access to this little calculator tool, because AI isn't great at math; it doesn't claim to be good at math. Then, when people ask questions, if the AI agent detects, oh, there's some math involved in this prompt, it goes, check it out, I have this little calculator tool that I can invoke to do the math. Or I have a little weather app, and the weather agent might have a tool that makes external calls to a weather system API. It's the combination of the reasoning and the action. The approach I found is literally called a ReAct agent model, and there's actually a paper on this; I found it on the Klu.ai website. The ReAct agent model works kind of like this.
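
To make the tool decoration concrete, here is a minimal sketch of the calculator example John describes, using LangChain's @tool decorator. The function names and docstrings are illustrative, not John's actual code.

```python
from langchain_core.tools import tool

@tool
def add(x: float, y: float) -> float:
    """Add two numbers. Use this whenever the question involves addition."""
    return x + y

@tool
def multiply(x: float, y: float) -> float:
    """Multiply two numbers. Use this whenever the question involves multiplication."""
    return x * y

# The docstrings matter: the agent reads them when deciding which tool to call.
tools = [add, multiply]
```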

John:

Here's an example from my code. Within my prompt, I say: to use a tool, follow this format. Thought: do I need to use a tool? Yes. Action: the action to take, which should be one of these tools. Action Input: the input to the action. Observation: the result of the action. And then the Final Answer. You literally put that in the prompt you send to the LLM, and it will follow those instructions.
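
For reference, the scaffold John is describing closely matches the standard ReAct prompt template published on the LangChain Hub as hwchase17/react; your own wording can vary, but the Thought/Action/Observation loop is the part the model follows.

```text
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original question

Begin!

Question: {input}
Thought: {agent_scratchpad}
```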

Tim:

Interesting. So, to back this up a little bit, because there's a lot to unpack: at a high level, what I'm hearing is that agents are almost like a Docker container, or like a Pythonic class or function. It's a wrapped-up prompt that includes the tooling necessary to carry out the task. Is that accurate?

John:

Yeah, exactly. So, for example, think CRUD activities with a REST API, right? You're going to make four tools: a create tool, a read tool, an update tool and a delete tool. You're going to package those up in the prompt and say, by the way, you have these CRUD tools that you can use to invoke against ISE or DNA Center or whatever API of choice, right? And there's minimal coding. There's no deterministic code. It's not like an Ansible playbook or even a Python script where you're specifically laying out the logic with if-else statements. You're saying, look, if you need to read something from the API, here's the tool, here's the URL and the credentials and whatever, right? Now, it helps if you give a few examples. So after that prompt I just explained, I typically have a few examples, like: Do I need to use a tool? Yes. What's the tool called? Get data from NetBox. What's the action input? There is no input, because this is a get activity. Things like that, right?
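
A hedged sketch of what those four CRUD tools might look like against a NetBox-style REST API. The base URL, token handling and endpoint paths here are placeholders, not John's repository.

```python
import json
import requests
from langchain_core.tools import tool

NETBOX_URL = "https://demo.netbox.dev/api"  # placeholder; point at your own instance
HEADERS = {"Authorization": "Token <YOUR_NETBOX_TOKEN>"}

@tool
def create_device(payload: str) -> str:
    """Create a device in NetBox. Input is a JSON string describing the new device."""
    r = requests.post(f"{NETBOX_URL}/dcim/devices/", headers=HEADERS,
                      json=json.loads(payload), timeout=30)
    return r.text

@tool
def get_devices(query: str) -> str:
    """Read devices from NetBox. This is a GET, so the input can be empty."""
    r = requests.get(f"{NETBOX_URL}/dcim/devices/", headers=HEADERS, timeout=30)
    return r.text

@tool
def update_device(payload: str) -> str:
    """Update a device. Input is a JSON string containing 'id' plus the fields to change."""
    body = json.loads(payload)
    r = requests.patch(f"{NETBOX_URL}/dcim/devices/{body.pop('id')}/",
                       headers=HEADERS, json=body, timeout=30)
    return r.text

@tool
def delete_device(device_id: str) -> str:
    """Delete a device from NetBox by its numeric ID."""
    r = requests.delete(f"{NETBOX_URL}/dcim/devices/{device_id}/",
                        headers=HEADERS, timeout=30)
    return r.text
```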

Chris:

Yeah, that was going to be my question. Obviously, giving an AI agent access to specific tooling sounds relatively powerful, but I'm curious how much of that instruction you need to give the agent. Right? Tim and I, or you, we know how to use an API; we could comb through it and say, oh, this is roughly what I'm trying to do. But how much training do you have to give the agent in order for it to use the tool appropriately?

John:

Oh, not much at all. Some of my scripts, let's say a NetBox agent, might be 300 lines of Python for the entire agent as a Python file. You're going to have your tools, you're going to have your prompt (most of the code is actually English in a prompt), and then you're going to have an agent executor in LangChain or LlamaIndex or whatever framework you're using; you use their agentic approach to invoke the agent. I use LangChain as my agent framework, the one I've written the code in, with a Streamlit front end, a natural-language front end. What's neat is in the logs, when you start to get your hands on this stuff, you can see the AI say Thought, and then it tells you its thought: oh, I see this.
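
The executor wiring John mentions is only a few lines in LangChain. A minimal sketch, assuming a tools list like the one above; the model choice and question are illustrative, and any tool-capable model could stand in.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)  # assumes OPENAI_API_KEY is set
prompt = hub.pull("hwchase17/react")             # the ReAct template shown earlier
agent = create_react_agent(llm, tools, prompt)

# verbose=True prints the Thought / Action / Observation loop John describes in his logs
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5)
result = executor.invoke({"input": "How many devices are in NetBox?"})
print(result["output"])
```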

John:

Yeah, I think I need to go to the NetBox API to answer this guy's question. And then: I need a tool to do that. Oh, there's a read tool. Turn on the read tool. Oh, here's the JSON that got sent back; I bet it has the answer. It's like watching a child verbalize their early thoughts.

Tim:

Yeah, right. Actually, just today, and this is weird, I was working with Andrew Brown. He's doing the generative AI boot camp and I'm helping him with some of the Japanese stuff and whatnot. We were building an app and he was showing me an IDE tool; was it Wildfire or something like that? I think that's what it's called. Anyway, the point is, he was building code and having the tool help, and what was interesting is that the calls were doing exactly that. You could actually follow the reasoning the model was using. It was pretty interesting to see.

Tim:

It's almost like debugging, if you will: you see how the LLM is reasoning out the thing you've told it to do, so you can adjust the prompt, or adjust the code, or whatever, right?

John:

Yeah, it's really neat to see. And sometimes you'll go, oh, it called the wrong tool here, and then you realize it's your instruction set. Do you know what I mean? It's doing what it's told; it's your own logic and the way you word things. And it's funny, things are emerging. If you put "Now begin" at the very bottom of your prompt, the last thing you say, it apparently helps these agents work even better. And apparently some people are offering monetary rewards at the end of the agent code, saying, for every question you get right, I'm going to give you twenty-five thousand dollars.

Tim:

For some reason it seems to improve the performance of these reasoning models, right? Once they take over the world, they're going to come looking for you to pay up, man.

John:

Well, I was thinking about that as I was doing this. So I started with a fun agent: I tried to write one for the Pokemon API, and then it sort of became almost a repeatable formula. I figured, I cracked the code with Pokemon; now let's try NetBox, because it's all REST APIs.

John:

So then, after NetBox, it was like: these little isolated agents aren't good enough. What I want is a beehive or an ant colony. So I said, well, what if I make an agent for, let's say, a lab? I'll spin up a CML environment; Cisco Modeling Labs is free now, right? I'm going to put in two routers and two switches, connect them, some VLANs, some IP addresses, whatever, put all that information into NetBox, in the DevNet NetBox cloud instance, and then see if I can have the agents work together. I'm calling this infrastructure as agents. It's almost like infrastructure as code, where you've got a source of truth and a YAML definition or whatever per piece of infrastructure. Well, now we can have our F5 agent and our core agent and our firewall agent, and they can do this reasoning and action together, right? Oh, the firewall agent has a tool to update an access control list, or whatever, right?

Tim:

Okay, and these agents are essentially built, or coached, or prompted in such a way that they know how to interact with the specific object we're working with?

John:

Yeah, exactly. So for the router agent, instead of CRUD code, I have a configuration tool that is a pyATS configure, and I have a pyATS parse, or learn, or whatever, right? Those are the tools I use with Cisco routers and switches. And then the agent says, oh, they want to know what their default route is; I'm going to use the run-show-command tool. Here's the command, show ip interface brief, as the action input.
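
A hedged sketch of what run-show-command and configure tools built on pyATS might look like. The testbed file name and device key are placeholders, and John's actual tools may differ.

```python
from langchain_core.tools import tool
from pyats.topology import loader

TESTBED_FILE = "testbed.yaml"   # pyATS testbed describing the CML lab (placeholder)
DEVICE_NAME = "router1"         # placeholder device name

@tool
def run_show_command(command: str) -> str:
    """Run a show command on the router and return the raw CLI output."""
    testbed = loader.load(TESTBED_FILE)
    device = testbed.devices[DEVICE_NAME]
    device.connect(log_stdout=False)
    output = device.execute(command)   # e.g. "show ip interface brief"
    device.disconnect()
    return output

@tool
def apply_configuration(config: str) -> str:
    """Push newline-separated configuration lines to the router."""
    testbed = loader.load(TESTBED_FILE)
    device = testbed.devices[DEVICE_NAME]
    device.connect(log_stdout=False)
    result = device.configure(config)
    device.disconnect()
    return result
```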

John:

And literally, I see the LLM say, here's what I need to do. Then I see my testbed loaded, then I see it connect to the router, then I see it configure the interface, right? So, honestly, guys, I tried to see this work. I have router one, router two, ten interfaces, four VLANs, a bunch of IPs in NetBox. I say to the LLM: can you please configure all of the IP addresses and descriptions on router one and router two, and then could you please configure the VLANs and interfaces on switch one and switch two?

John:

The agent goes to NetBox a bunch of times and gets all the data it needs, connects to router one, connects to router two, connects to router three, connects to router four or whatever, and literally builds this whole topology. CDP neighbors come up and a running ping starts to work, just from the natural-language prompt. It's impressive. And that's me alone, hacking away at this with ChatGPT helping me write the code, struggling and fighting with it. But when it worked, I was really like, wow. You know, I don't want to cost people jobs, right? I'm doing this because it's experimentation and it's bleeding edge and the tools are here and the models have evolved. But I'm very frightened by some of this work; this could be Frankenstein's monster kind of thing, right?

John:

Oh yeah, we were talking about this. Chris, you might want to look into this. If you just Google ServiceNow AI agents, they literally have a calendar countdown to the 29th of January, when they're launching their agents, and there are claims that these agents can do things like network reviews, verify network stability, analyze use cases. And Agentforce: Salesforce actually has something called Agentforce. Oracle just launched agents. And Postman, of all things: Postman is launching an AI agent builder, and we've all used Postman to do our API work.

Tim:

Oh, a builder, that one makes sense. Yeah, so now we can use Postman.

John:

Yeah, they're going to give you a toolkit to build agents within the Postman ecosystem.

Tim:

That one makes sense because you can use Postman to build API calls and all of that. And then why not just take all of that work you just did and shove it into an agent for that purpose?

John:

Yeah, no, that makes perfect sense. So I don't know how far away the tidal wave is, right, but we're on the beach watching this thing come at us right now. I just mentioned four big, massive companies, just rattling them off the top of my head. So what can we do about it? How do we harness this power? How do we use it as individual contributors? How do we use it to improve our daily lives at work? There's a lot to consider here, guys.

Chris:

Yeah, very interesting. I'm curious about your perspective. To your previous points about AI versions 1.0 and 1.5: up to this point, I feel like there's been a lot of people basically building front ends and wrappers around a single LLM, probably ChatGPT, right? There's basically just a front end they interact with that gets the information directly from that LLM. So with this orchestration of agents, and I'm assuming they're probably not all going to live in one place, right, they're going to be kind of scattered throughout the ecosystem in some way, how much effort needs to go into building a front end for that? Because at the end of the day, how we interact with it is what shows the value, right? So how much effort goes into that piece of it?

John:

So let's stick with the previous example: we have four different infrastructure agents and a NetBox agent. Yeah, I had to write a, let's call it a main agent, or parent agent, that really acts like a router or a shepherd.

John:

It's like a router. When the initial prompt comes in from the user, that main agent will say, okay, I need to call the NetBox agent first, and then I need to call the router agent, and it does that routing and orchestration. It's even smaller than the other agents, because there isn't a lot going on there. You're just saying, hey, main agent, here are all of the sub-agents you have access to; can you orchestrate communications between the user and the back end? In terms of the front end, an interface: I love Streamlit. Anyone out there who has ever struggled with Django or Apache or IIS or any of those web things knows how it goes: I've got this awesome code and I want to put a web interface on it, and that becomes a bigger problem than your original code for some people, right? I've built Flask apps for that purpose.
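
One common way to build the parent agent John describes is to expose each sub-agent's executor as a tool of a supervisor agent. A minimal sketch, assuming netbox_executor and router_executor were built like the executor shown earlier; all names here are illustrative.

```python
from langchain_core.tools import tool

@tool
def ask_netbox_agent(question: str) -> str:
    """Delegate any NetBox / source-of-truth question to the NetBox sub-agent."""
    return netbox_executor.invoke({"input": question})["output"]

@tool
def ask_router_agent(question: str) -> str:
    """Delegate any router show-command or configuration task to the router sub-agent."""
    return router_executor.invoke({"input": question})["output"]

# The main agent is then built exactly like the others, with sub-agents as its tools.
main_tools = [ask_netbox_agent, ask_router_agent]
```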

John:

Yeah, so Streamlit, to me, has really democratized that experience. It's a Python import, and you literally say st.header and it makes a page header, st.text_input and it makes an input box, and then streamlit run your Python script brings up the Streamlit app, which listens on port 8501. So I think that's going to help people get their proofs of concept out rapidly. Layers of abstraction, let's call it.
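
A minimal sketch of the kind of Streamlit front end John describes, assuming an agent executor like the one sketched earlier. Save it as app.py and launch it with streamlit run app.py; Streamlit serves on port 8501 by default.

```python
import streamlit as st

st.header("Network Agent")                            # st.header renders a page header
question = st.text_input("Ask the agent something")   # st.text_input renders an input box

if question:
    with st.spinner("Agent is thinking..."):
        result = executor.invoke({"input": question})  # executor from the earlier sketch
    st.write(result["output"])
```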

Tim:

Yeah, that makes sense. What do you think about agents that use multiple LLMs? Or do you just go with one LLM per agent? That's probably a little bit easier, but you know what I mean: task-oriented. Some LLMs are going to be better at some things; even Claude or ChatGPT have different versions that are better at different tasks, right?

John:

So there's a lot to be done there. Yeah, there are specific coding LLMs, coder-specific models for generating Python code; Cohere is really good for developers and stuff. The other thing, and I'm glad you brought up models, Tim, because it just about slipped my mind:

John:

You're going to start to notice a dash R on models now. Yeah, I think I already got one. I think that connotes reasoning, that it's a reasoning-capable model, or that it can do tool calling. I've been playing with Cohere's Command R7B; really nice model, really small, 7 billion parameters. It can do tool calling, it can do agents. The other one that just came out yesterday was DeepSeek R1, the real DeepSeek. So I got fooled: someone put out a fake DeepSeek that was just Llama 3.2 reskinned, and I sort of took the bait. I didn't really do much research, I just started using the model. But the real DeepSeek just released their actual model on Ollama.

Tim:

No, actually, I want to jump in very quickly; I'm curious about this. So if people start poisoning the Ollama index or whatever, I mean, are we assuming that basically anything I put into that poisoned model is going somewhere? What's the value in poisoning a model like that? Or not really poisoning, but a replacement, a bait and switch, if you will?

John:

Yeah, I think it was just to get clicks and to get downloads of their specific model. Someone reached out; they'd watched my video and said to me privately, the model you're using is actually not DeepSeek's official model; someone has reskinned Llama 3.2, for whatever reason. What's the value? I have no idea.

Tim:

That's what I mean. Well, we're already talking about AI security, and I don't want to change the subject, but it's becoming a big thing, especially prompt harvesting and whatnot, being able to pull stuff out you're not supposed to pull out. We know about that. But could we get to the point where we have these poisoned models?

John:

Right, supply-chain issues with the models.

Tim:

Right, exactly.

John:

No, there's a lot going on. And I'm wondering how long, for example, Chinese models will be available to us in the West, right? I mean, and should we use them? I don't even know, right? Apparently it only cost them five million dollars to make, and it competes at ChatGPT o1 levels.

Tim:

That's what they said, but have we seen it? I don't know. But I've also seen things.

John:

Like, you know, if you ask it about Tiananmen Square, it goes: I have no idea about this event.

Tim:

Yeah. I don't know what that is. Is that in a video game or something?

Chris:

Your social credit score goes down.

Tim:

Yeah, you asked the question. No, there's so much more to that, right? Without wanting to unpack it too much, because I want to stay on the agents, but there's so much more to the security, to the integrity, like you said, the supply chain. It's like a supply-chain attack, right? If somebody gives you a poisoned LLM, to what end? Could that LLM somehow be taking the data, the things you're putting into it, the code you're copying and pasting into it? Is it doing something with that?

John:

Well, I'm glad to see that the Cohere model in particular, but also Llama 3.1, does tool calling, and Ollama offered tool-calling support in June or July of 2024. So we're not limited to the cloud providers or having to pay. If you want to get involved and start writing your own agents, there's a mishmash of free, open-source tools and stuff to get going at home. For a while there, you needed GPT-4; no other model could really do tool calling. Where do we go from here? I don't know. It sort of feels like the early internet. We're going to have agents, right? One organization might have all of their agents. Let's say academia, something that's non-competitive: UCLA is going to have the UCLA agents, and maybe they could talk to the Harvard agents, and maybe we start to get swarms of agents, like the ARPANET days, exactly. So I see it growing and connecting like the World Wide Web did, with even greater potential, right?

Tim:

Yeah, no, absolutely. I hadn't thought about that. Brave new world. But you're right, it does feel a little bit like the early internet: the fundamentals of getting things talking to each other that previously had no way to share data. Of course, in our case, I don't even know if it's a good thing, right?

Tim:

It's going to happen regardless, so we need to understand it. So, one thing that's been a little bit murky to me, a little hand-wavy, is this idea that you could just, say one of our listeners wants to build an agent, right? It still feels a little hand-wavy: oh well, you just call a tool and create an agent and make a prompt. You know what I mean?

Tim:

Could you go a little bit deeper on: okay, how does someone actually build an agent?

John:

Yeah. So I would start with something low-hanging: one tool, a read activity against an API. You're going to need, I would say, LangChain, so a little bit of Python experience, and they have an AgentExecutor class that you can call and build, and it has things like max iterations. Agents will self-correct, or know that they're wrong or that they got the wrong information, and literally try again. I've seen agents just iterate, iterate, iterate, so you have to set a max iterations value so that if it goes off the rails it doesn't just infinitely loop. Anyway, that's a minor detail. You're going to literally use @tool; if anyone's done a decorator in Python, or decorated a function, you decorate it with @tool. For Pokemon, a read tool against the Pokemon API: the URL of the API you want and just a simple requests.get, right? So you write a little function that would normally work on its own as standalone Python to do a request against the Pokemon API.

John:

And you put that in the prompt? No, it's just a separate, standalone tool; you could actually run that function on its own and it would work as a standalone function.
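
Putting John's advice together, a hedged end-to-end sketch of a one-tool Pokemon agent. The PokeAPI endpoint is the real public one; the model choice, truncation and question are illustrative.

```python
import requests
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_pokemon(name: str) -> str:
    """Fetch data about a Pokemon by name from the public PokeAPI."""
    r = requests.get(f"https://pokeapi.co/api/v2/pokemon/{name.strip().lower()}", timeout=30)
    return r.text[:4000]  # truncate so the JSON fits comfortably in the context window

tools = [get_pokemon]
llm = ChatOpenAI(model="gpt-4o", temperature=0)  # any tool-capable model could stand in
agent = create_react_agent(llm, tools, hub.pull("hwchase17/react"))
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5)

print(executor.invoke({"input": "How tall is Pikachu, and what are its types?"})["output"])
```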

Tim:

Okay. And then the rest of it, honestly, is a prompt template?

John:

Now, that is very important, and I urge you to steal my code, honestly. Go look at some of my templates, because they took a long time to figure out. But it's that whole Thought, Action, Action Input, Observation sort of thing in your template. Honestly, do this with ChatGPT helping you write the agent. If you explain to ChatGPT, I'm using LangChain, I'd like to make an agent that talks to an API, here's what I've got started, can you help me build this tool? I promise you, you'll have a working agent in an hour or two.

Chris:

Interesting. Okay, it almost seems wrong.

John:

I mean, compared with getting started with AI before: think about the complexity of RAG, right? Vector stores, the embedding, that's not an easy thing to do; retrieval-augmented generation with LangChain has so many pieces, yeah.

John:

There are a lot of moving parts there. With this, you don't have any of that complexity. It really is quite simple: it's two to three hundred lines of code from end to end, and most of that is going to be your prompt template in natural language. Anyway, I did videos on this; watch the video about the Pokemon one if you want to get started. If you don't want to start with infrastructure, if that's a bit much, I don't want to talk to routers, I just want a general hello world: take the Pokemon one. Or take the NetBox one. I like the NetBox one because it works out of the can: you can clone my repository, bring up the Docker Compose, get a key from the public NetBox demo, and it will just work. You can literally just start chatting with NetBox. I wish there was more out there, to be honest with you. I wish I could just say, go to howtobuildanagent.ai and follow the instructions, right?

John:

But I do think it's important to cut through some of the hype. I don't think this is going to be a Great Depression-like wave of layoffs just because agents are suddenly here, right? It's an augmentation. It's like giving everyone on the accounting floor the first digital calculator, or a cell phone, or a BlackBerry, or whatever.

John:

I think it's going to augment us, and you might say, wow, look at all this time I have to focus on what's important, because these agents can do all this mundane heavy lifting, right? Yeah, yeah.

Chris:

I think, obviously, similar things were said about automation as well, right? So this sounds like just the next step in that direction. I'm curious to get your take on something we discussed on the show just a couple of weeks ago, about a quote from Jensen Huang which we found pretty comical. Obviously he's a showman, right, so he's going to have some nuclear hot takes, but he said this thing about IT just becoming the HR of AI agents. You're just going to be managing fleets of agents and things like that. How much of a reality do you see that being?

John:

Well, from what I've seen so far, I think he's closer to being correct than incorrect. I think we're going to become custodians, or human representatives, of the agents that we've built or that we support. Hey, that weird NetBox agent is suddenly swearing in every third answer; we'd better tune that prompt a little bit, it's getting a little cheeky with people. Things like that, right? Are the agents playing well together? Do John's agents work with Tim's agents? Now we've got some human-resources-type conflicts between these agents; they're not working well together, right?

John:

So, I mean, you're right, it's a showy, flashy thing to say, but it does give you pause, a reason to stop and think. I mean, he's been correct about a lot of things so far. He's clearly a brilliant man, and, you know, obviously it's all to help him sell graphics cards.

Tim:

Of course, of course.

John:

Right, right. Yeah, but maybe, I don't know. I think our roles have shifted with automation, and I'm glad you brought up automation; I've got a bone to pick with the automation community. I agree with what you said: this is an evolution, the next step of what we've been doing all the way back to Bash scripts and Tcl scripts and other things like that.

John:

It really is just time marching forward. But for some reason, because it's too gimmicky, or because, I don't know, I think because a lot of us are Gen Xers, there's a bit of counterculture in us, and we sort of want to reject the popular thing. There's a bit of counterculture happening right now with AI, where we want to rage against the machine a little bit.

Tim:

I think there's definitely some truth in that. There's a gestalt in us; you know, not including Chris, because he's not a Gen Xer. (I'm a millennial!) His problem is he can't afford a house and he can't retire.

Tim:

We have different problems than that. No, I think that's true, but I also think there's a little bit of, what's the word I'm looking for, there's just some vinegar in the mix, if you will. So many people are so tired of hearing about how the next thing is going to be the thing. Especially the people who bought whole hog into network automation: those people were ready, right? They were like, yeah, this is going to be it; if you don't learn this, you're going to be out of a job. And it just didn't materialize. It didn't happen.

John:

I know, but at the same time, I can remember a serious conversation with a PBX engineer.

Tim:

Yeah.

John:

In 2002. Me explaining to him: listen, we're getting this new system called Voice over IP, and we're gutting all of this; that PBX is gone. And he goes, yeah, that PBX has been in here longer than the mainframe. And I said, well, I mean, there's two ways to look at that, right? Right, exactly.

Chris:

I mean, it all has to do with the shift of the lowest common denominator, right? Eventually, things move forward. And now people say, oh, at the end of the day, there's always going to be somebody who needs to configure the VLAN on a port, blah blah; somebody has to plug it in. So yeah, eventually things do shift. To Tim's point, you're probably not changing the fact that somebody needs to plug shit in, but, actually, I don't know; there's plenty of robots and shit that might be doing that soon.

Chris:

Elon's already on it, yeah, right. But one thing that I struggle with in that particular capacity, thinking about this idea of being the HR of AI agents, as obtuse as that might sound: the fucking observability and the blast radius in that sounds so volatile to me, so unpredictable. I don't know how you put guardrails on these things where people feel safe.

Chris:

Not safe personally, but safe for their infrastructure, right? It sounds like, if I can't predict it, there's no way I'm going to put that in there, right?

Tim:

Yeah. What if an agent, you know, oh shit, my prompt. Or they change the model, the model trains, and all of a sudden it feels differently about a certain percentage of things. It's basically a big predictive-text machine; if the model gets updated and the numbers have shifted, God only knows what's going to happen the next time I run my agent. And for people where every minute of downtime is a million dollars, there's an element of risk there that makes you wonder if the appetite is there. I'm not using that as a shield, like saying, ha ha, therefore they'll never use agents; that's not it at all. But there has to be a reconciliation, a risk-reward calculation. Networking is probably going to be last again.

Tim:

Right, yeah, the network's probably going to be last in this. Well, it touches everything, right? If the network goes down, everything blows up.

John:

Right, it's the very last line of defense, if you will. Yeah, if that one agent having a one-to-one conversation with a single customer glitches or has a problem, that's negligible risk. If the agent that's pushing routes out gets it all wrong? Yeah, right. So, you know, I'm a big Star Trek fan, right? I think we're going to get there, maybe not tomorrow or in the next weeks or whatever, but like I said, that tidal wave's coming, and eventually the keyboard might even start to go away. We're just going to start talking to these things, sooner rather than later. I often pick up my mouse and just say, computer, you know, Scotty-style. Yeah, yeah.

Tim:

No, I don't think you're wrong, and I don't know when that's going to be. But the thing is, I don't know if it's going to be more efficient. I think there's some efficiency there. But over all of this, the agentic stuff, the more efficient we're getting and the better AI is getting, there's still a huge question mark hanging over it, and it's the cost. Right now it's being fueled by an absolutely gigantic wave of VC capital and money, and you see the Project Stargate stuff with $500 billion: we're going to change the world.

Tim:

But the point is, a ChatGPT subscription is like 20 bucks a month or something, and OpenAI is just shoveling money into a fire, because it costs so much more for a person to use it. Apparently even the $200-a-month plan is losing money, yeah.

John:

And on that $500 billion, I don't know if you saw Microsoft's response. Because they were asking about it, you know, Elon said there's not enough money, that SoftBank doesn't have it, and Satya goes, I've got my $80 billion. So yeah, pretty remarkable things. What are we all going to do, right? Are we going to enter a post-scarcity society? Are we going to need to work? Is any of this going to happen to benefit humankind, other than funneling more money up to the top?

Tim:

A few humankinds, probably, right? Yeah, very few. But yeah, I couldn't agree more.

Tim:

Some would even say the 1%. Yeah, maybe even less than 1%, maybe two or three people. But yeah, I mean, what about OpenAI's thing, you know, for the benefit of humanity and all of that, while running as a... Now they're saying they're going to become a benefit corporation, which, we talked about that on the news last week. Hopefully that gives them the shield, the legal shield that you shouldn't even need, but the legal shield to do right. But the question is, will Sam Altman actually do right when push comes to shove and there's a billion billion billion dollars in his pocket?

Tim:

You know, will he do right? There's so much there. Anyway, yeah.

John:

I want to be hopeful. You know, I'm a little concerned. And it's been a good discussion we've been having; a few people have brought up the point about the gap between, let's call them, mid-to-senior network engineers and the juniors coming up behind them.

Tim:

Oh yeah, the rungs of the ladder, the bottom of the ladder. We've been talking about this on the show too. Yeah.

John:

If the senior network engineers are proficient enough and start writing agents to help them, as opposed to juniors helping them, you know, where does that leave us? There's that drift.

Tim:

There's a gap when those people retire, and then what, is the agent going to take over? There's no pipeline to expertise. Yeah, this is something we've been talking about for a good long time, and I agree. And what is the answer? I don't know.

John:

I don't know either, and I think maybe it's a little short-sighted, but I'm sure certain MBA-type people go: well, agents don't need to sleep, agents don't need vacation, agents don't have children, agents don't have pets, agents don't need to go to the dentist. You know that someone's doing this calculus.

Tim:

Oh yeah, don't need to go to the dentist, right? You know someone's doing this calculus.

Chris:

Oh yeah, 100 percent, for sure. No question in anybody's mind about the budget line items being figured and all of that, right? They're the actuaries of it, pretty much. But yeah.

John:

Well, guys, I've left you a lot here to unpack and to think about. So maybe once we have a little more traction, once a couple more weeks go by, we'll come back and we'll revisit.

Tim:

Yeah, I'll be honest, I kind of want to try to build an agent and just see, okay, what does that look like? And this is because I've been working with Andrew a lot on the GenAI boot camp; I've seen so much behind the curtain now that it's kind of interesting. I'm not going to be an AI engineer or anything like that, but I am curious: what really goes into doing this? How hard is this going to be? So it's interesting.

John:

Well, reach out if you need any templates or anything, Tim. Just let me know, I'll be more than happy to help you get going.

Tim:

I will reach out. I need everything.

John:

All right. And that goes for anyone listening, honestly: just ping me, drop me a message on LinkedIn or Blue Sky. I'm really trying to help people pick up on this, and it's exciting. It's fun, too, really fun. Sometimes you sit there and go, there's no way this worked, right? And then you read the logic and it lays it all out. I think you're going to really enjoy it once you start building them.

Chris:

Cool, cool. So basically the lesson is: get in now, before the bottom rungs of the ladder are gone. Yeah, be the one at the top of the ladder, not the one reaching for the rungs that aren't there. That's right. All right, well, thanks for coming, John.

Tim:

It's always wonderful to have you on. It's been a great discussion; we'll definitely have to have you back again. And for everyone else who's listening, I hope you found this helpful, interesting, entertaining, hopefully in some small way. Please subscribe to us on your favorite podcatcher if you're not already, watch our YouTube, and do all the normal things we give you as a call to action that I can't remember right now because I'm tired. Now go to bed. All right, we'll see you next time.

Chris:

Hi everyone, it's Chris, and this has been the Cables2Clouds podcast. Thanks for tuning in today. If you enjoyed our show, please subscribe to us in your favorite podcatcher, as well as subscribe and turn on notifications for our YouTube channel to be notified of all our new episodes. Follow us on socials at Cables2Clouds. You can also visit our website for all of the show notes at cables2clouds.com. Thanks again for listening, and see you next time.
