Viktor
00:00:00.000
Have you ever seen a standard, or an attempt to create a widely accepted standard, propagate so fast? We're talking half a year, and there is hardly a public-facing software vendor right now that doesn't have MCP something-something. It might be good, it might be bad, it doesn't matter, right? There is no agent that doesn't use it. Nothing ever propagated that fast. No standard became a standard used by everybody that fast.
Darin
00:01:35.003
So, starting out with AI coding, one of the things you'll probably hear about pretty early on is something called an MCP. Viktor, what is an MCP?
Viktor
00:01:47.738
Model Context Protocol. It's a way for agents, after receiving instructions from models about what to do, to accomplish that by talking to tools or APIs or whatnot. It is essentially a standard protocol. So think of it like an API for agents. Agents can use APIs and CLIs and so on and so forth, but even though they can do that, MCP is, in a way, the API for agents. Yeah.
Darin
00:02:28.390
So everything you were just talking about, to me, is MCP itself, the Model Context Protocol. That's the spec. Then you have MCP servers, but to me an MCP server is nothing more than a backend server that we interact with. Right? If you had a decoupled client-server setup, that's what it is. That's just what it does.
Viktor
00:02:56.380
You know, if I stick with the API analogy: the API spec is not the real thing. You need an actual API, with some backend that serves that API. That's what an MCP server is, right? It's the implementation of the spec. Now, that server can be as simple as: I get something from the agent and I forward it somewhere else. Or it can be: no, no, actually I get something from the agent and then I do something with it. It could be just transforming and forwarding requests from the agent to, let's say, Kubernetes, you know, kubectl get, or it could be processing information and doing something without going anywhere outside of itself. Just like with an API, you don't know what the backend behind it is doing. It could be sending the work to some other backend or doing the actual work itself.
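A minimal sketch of the kind of MCP server Viktor describes, one that just transforms and forwards requests to kubectl. It assumes the official Python MCP SDK's FastMCP helper; the tool name and description are illustrative, not something from the conversation:

```python
# Hypothetical example using the Python MCP SDK (pip install mcp).
# The server exposes a single tool that forwards requests to kubectl.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("kubernetes-helper")


@mcp.tool()
def get_resources(kind: str, namespace: str = "default") -> str:
    """Use me if you feel like listing resources in a Kubernetes cluster."""
    # Transform the agent's request into a kubectl invocation and forward it.
    result = subprocess.run(
        ["kubectl", "get", kind, "--namespace", namespace, "--output", "yaml"],
        capture_output=True,
        text=True,
    )
    return result.stdout if result.returncode == 0 else result.stderr


if __name__ == "__main__":
    # STDIO transport: the agent launches this process and talks over stdin/stdout.
    mcp.run(transport="stdio")
```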
Darin
00:03:52.210
Right. It could go off to a third-party API behind itself, go to a local database, or just do local work. It could be anything.
Darin
00:04:01.930
Except, from a Model Context Protocol spec perspective, there are tools, there are prompts, and there are, I think, one or two other things, correct? Those are the two big ones to me.
Viktor
00:04:16.900
Yeah, so tools are what is most commonly used, and there are prompts, which are a way to share prompts with the agent. So instead of the agent reading them from a directory, like Claude Code would read them from .claude/commands, the MCP serves prompts to the agent. There are a few others; I don't even remember them, to be honest. The main one is really tools. And the most important thing about any tool in an MCP, and this is where things get slightly different, not complicated, just different, is the description. That's everything. The description is everything. It's in English, kind of: hey, use me if you feel like performing any operations related to Kubernetes. Now, that does not guarantee that the agent will use you for anything related to Kubernetes. It's not forced, it's not enforced, right? It's just a suggestion to the agent. And when I say agent in this context, I mean the agent context that is sent to the model that decides what to do. It's a suggestion: I'm here, you can use me, you don't have to, your choice. And the reason why you might use me is that English description.
Darin
00:05:51.794
Why did this even come about? What was the purpose of MCP? I mean, we've talked about tools and prompts, and how it gives you a centralized way of doing things. But why did it need to come into existence?
Viktor
00:06:03.754
It needed to come into existence to enrich the context so that the model can make a decision about how to do something. Because, if I stick with Kubernetes, if you type, I want to retrieve all the resources from my Kubernetes cluster, that's context. Your intent is context. The agent sends it to the model, and the model, based on that, sends back instructions to the agent about what to do. Those instructions could be: execute kubectl, or use curl, or whatever. With MCPs, you are extending that context with those descriptions I mentioned before. Those descriptions become part of the context. So when you type, retrieve all the resources from Kubernetes, that is sent together with all those descriptions of the different tools to the model. And the model now has more information it can use to decide what the agent should do. Agents don't think; they just get a plan of action from models. Without that, it is very hard to get a consistent answer. Let's take something more complicated: AWS. You say, I want to create an EC2 instance in AWS, and there is no MCP. There are probably 5,000 different ways, or suggestions, a model could send back to the agent in that case. Oh, you want to do something in AWS? Create a Terraform module. You ask it again, and it says, how about the AWS CLI? You send it again, and then you clear the context and ask again, and it'll suggest, oh, maybe you should do this. With an MCP, you're enriching that context. Basically you're saying, this is how we talk to AWS, in that example.
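Roughly what that context enrichment looks like on the wire: when the agent lists the server's tools, each one carries the English description Viktor mentioned, and that text travels to the model alongside the user's own request. The field names below follow the MCP tools/list result shape; the values are illustrative:

```python
# Illustrative shape of a tools/list result from an MCP server.
# The "description" strings are what end up in the model's context,
# next to the user's own prompt ("retrieve all the resources from Kubernetes").
tools_list_result = {
    "tools": [
        {
            "name": "get_resources",
            "description": (
                "Use me if you feel like performing any operations related to "
                "Kubernetes, such as listing resources in a cluster."
            ),
            "inputSchema": {
                "type": "object",
                "properties": {
                    "kind": {"type": "string"},
                    "namespace": {"type": "string"},
                },
                "required": ["kind"],
            },
        }
    ]
}
```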
Darin
00:08:10.733
Because if it's left up to its own devices, the model will just tell you whatever's top of mind.
Viktor
00:08:24.848
It's not necessarily whether it's correct or not. Of course that matters as well, but it's whether it is within your context. And in this case, when I say context, I don't mean the agent context, but the context of you, your project, your organization, and so on and so forth. In the context of your organization, you can say we are using Crossplane, but if that context is not propagated to the model: CloudFormation, right? Do I feel like CloudFormation today?
Darin
00:08:58.090
Another word that you used was intent. Intent is a big thing in general in dealing with any kind of AI coding, as I've learned.
Darin
00:09:07.690
Your example there is like, hey, we use Crossplane. If the whole thing you're working with doesn't understand that we use Crossplane, and you're saying give me an EC2 instance, like you were saying, I think it would probably give you CloudFormation 90% of the time, Terraform probably 8% of the time, and you might hope to see Crossplane maybe 1% of the time,
Darin
00:09:34.555
just because of the vast amount of documentation around each of those tools. I'm saying I want you to do something inside of AWS. Well, you're wanting to do something in AWS, so you must want to use CloudFormation, right? Just by default. It's like asking an AWS systems engineer, give me an EC2 instance. What tool do you think they're going to use?
Viktor
00:10:01.276
It could be any of them, but it'll be whichever they already decided on before you asked them. That's that context. Or maybe they haven't decided anything. Maybe that decision was made even before that person joined the company. That would be the context. The intent would be: give me an EC2 instance. Context is everything else. It's, hey, in our company we always use Crossplane, that's what we do. And that engineer has that context in his or her head: oh yeah, you asked for EC2, cool, let me do this. And models don't know that. They know what other people are doing, but not what you are doing in your company.
Darin
00:10:45.711
It seems like sometimes we would say, you know, I don't really care what tool is used. And sometimes that may be true, but in other times it may not be.
Viktor
00:10:56.120
Yes. Now we are entering into the whole context engineering thing, or whatever it will end up being called. Maybe you're using nothing yet, but in that case you would still not say, give me an EC2 instance. You would say, you know what, we are just starting with AWS for the first time, what options are available to me? You would be talking to it, you would be brainstorming, and eventually you would come to the conclusion to use X or Y or Z, whatever that is, based on that conversation, back and forth. Think of it like this: in that case, the AI would be the equivalent of a consultant. The consultant comes to your company saying, okay, I heard you guys need me because you're just starting with AWS today, how can I help? And the person would say, let's start with this EC2 instance. And this is where things start to depend, in the case of consultants, but I think it applies the same to AI, on seniority or maturity. If I use myself as an example, I was that consultant. I'm throwing out random numbers here, more or less, but let's say 25 years ago you say, give me an EC2 instance, and I say, no problem, give me 15 minutes, and I come back with an EC2 instance. I would do it with Terraform or CloudFormation or whatever. I would jump straight into fulfilling your desires. Now, me the consultant, not 25 years ago but let's say 15 years ago, would be: wait, we are going to have a full day of conversation until I fully understand what the heck you guys are doing. Then after that full day, and maybe a couple of rounds in a bar afterwards, we would come to some conclusion and do something. I'm not sure how this ties to MCP and why we are talking about this anymore, but somehow we got here.
Darin
00:12:58.143
Well, it's an interesting correlation between an MCP server and a consultant, because if it's tooled correctly, it could be a good consultant. So let me play it out. Let's say you're not using an MCP server at all, and this is going to be a very bad example. By the way, 25 years ago for EC2 wouldn't have mathed; 18 years ago is the math on that.
Darin
00:13:25.276
So if, 18 years ago, we had AI and I said give me an EC2 instance, and that's all I said, no MCP server, no nothing, just asking a model, what would I get? I'd probably get a t1.micro, because there's probably a little bit of knowledge like, hey, let's not waste their money. Or it could have gone the exact opposite way and given me the most expensive instance. It could have been anything. Now fast forward 10 years from that, and there are patterns built up. Terraform has come into existence. Crossplane has come into existence. CloudFormation is now in existence.
Viktor
00:13:58.073
Still, the point is that AI today is that young version of me. It tries very, very hard to please, and it does that by not asking too many questions. If you now ask AI, give me an EC2 instance, and you're asking that from an empty project, let's say, so it cannot deduce anything from your existing patterns, it would most likely not ask you anything. It'll just output AWS CLI commands, or maybe Terraform, or whatever. Most of the time, not always, but most of the time, it goes straight into fulfilling your intent, and your intent is: give me an EC2 instance. And that's how it is right now; we cannot change that unless you join Anthropic or Google or whatever. So that means you need to fix that on your end. You need to start having conversations around your intent. Or let's put it this way: AI will give you the most likely good outcome based on what you asked. And if you ask silly questions like, give me an EC2 instance, without asking what the potential solutions are, what the pros and cons are, it pleases you immediately. Here, here's an EC2 instance.
Darin
00:15:34.587
I get it. But that's the model. Just raw. If we were to introduce an MCP server that has been trained in how to do things, that's where the difference comes into play.
Viktor
00:15:48.046
Exactly. Because then the context the model gets is not only give me an EC2 instance, and here I'm ignoring the system prompt of your agent and things like that. It's not only give me an EC2 instance, but also: when working with AWS, send requests in this format, with these fields, to this MCP. So your context is now different, and the response from the model is different because it received different information. You think you sent only give me an EC2 instance, but what you really sent is that plus the augmented context from the descriptions of the tools in that MCP. So MCP, from that perspective, and that's definitely not the only thing, there are many other aspects, is in a way sort of a way to train models on what to do under certain conditions. Not to enforce it, just to be clear, but to suggest.
Darin
00:16:59.381
I think train is not the right word there. I think suggest is the more correct word, and here's why. You're using the same MCP server that I'm using. The agent, not the MCP server, but the agent, has my context; you have your context. You have your agent, I have my agent. Those agents are running within our unique contexts. But if we offload things to the MCP server to take care of, remember we said the MCP server can actually do things for us as well.
Darin
00:17:34.279
So if we actually offload it to the MCP server, whose context does it have now? Because you want it to have full admin to actually get things done. I probably don't have full admin locally in my context, and you probably don't have full admin either. But then the problem is, if it's doing everything with full admin, that's great, things can get done, but it's not doing it on my behalf. It's doing it as itself. This whole security thing is the crazy part of this.
Viktor
00:18:05.059
That greatly depends on where that MCP is running, right? For most people, the MCP is running next to the agent on their laptop, so the MCP's permissions are the same as your permissions on that laptop. If I use kubectl as an example, so Kubernetes as an example, yeah, it's going to use the current kubeconfig context, and whatever I can do, it can do, and vice versa. Where it becomes more interesting is when companies start running MCPs remotely and telling me to connect the agent on my laptop to that MCP server. Then we enter a situation where multiple people are hitting the same MCP server. That MCP server needs to have very elevated permissions because it needs to serve the needs of many different people. And this is the part of the story where I don't think we have a solution yet: how to distinguish different users, how to authenticate, and so on and so forth.
Darin
00:19:15.595
Right, because as of today, as we're recording this in August of 2025, in the MCP spec there is no concept of authentication or authorization.
Viktor
00:19:29.410
Here's an example: I haven't found a decent way to run it in Kubernetes.
Viktor
00:19:38.324
It means that I haven't found a decent way to run it in Kubernetes. So if we're talking about running it remotely, what's the most natural place where I, as a company, would run it? Kubernetes, right?
Viktor
00:19:48.854
Yeah, exactly. But then, in Kubernetes, most MCP servers are talking through STDIO, not HTTP.
Viktor
00:20:01.830
Most, I'm not saying all, but most. With STDIO, all of a sudden you have complications. You cannot use an ingress, because there is no HTTP; you cannot use this, you cannot do that. You need some kind of a bridge that will convert HTTP requests into STDIO and STDIO into HTTP responses. We have so many unsolved problems right now. Honestly, I can only rely on MCP servers running locally, or running remotely but through third parties. Like, I think GitHub doesn't even offer the option to run its MCP locally anymore. It's remote, it's controlled by them, nobody knows how it's running or why, but it works.
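A rough idea of the bridge Viktor mentions, heavily simplified and purely illustrative: it forwards one JSON-RPC message per HTTP request to a STDIO MCP server and returns one reply, ignoring sessions, streaming, notifications, concurrency, and authentication. The wrapped server's filename is hypothetical:

```python
# Hypothetical, heavily simplified HTTP-to-STDIO bridge for a single MCP server.
# Real bridges must handle sessions, streaming, notifications, and concurrency;
# this just forwards one JSON-RPC message per request and returns one reply.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# The wrapped MCP server, started once and spoken to over stdin/stdout.
mcp_process = subprocess.Popen(
    ["python", "kubernetes_helper.py"],  # hypothetical STDIO MCP server
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)


class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"])).decode()
        # Forward the JSON-RPC message to the MCP server's stdin...
        mcp_process.stdin.write(body.strip() + "\n")
        mcp_process.stdin.flush()
        # ...and read a single newline-delimited JSON-RPC reply from stdout.
        reply = mcp_process.stdout.readline()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BridgeHandler).serve_forever()
```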
Viktor
00:20:58.218
I don't use it, so I don't know how it would feel. That's a separate discussion. I think the GitHub MCP is pretty much useless, really.
Darin
00:21:06.117
But it's interesting, because that's actually the correct model. See, we've reused words so many different ways here. That is the correct way to interact with an MCP server; standard IO is not the correct way, in my opinion. It's fine for small things, but if I'm a large organization, I don't want to have to keep that MCP server updated on everybody's local machines.
Viktor
00:21:31.937
No, no, no. What I'm rather trying to say is that there is a lot of pending work. MCPs are a new thing, right? They appeared yesterday, and there are no standard ways to run them remotely. I'm not saying it's impossible. I think I mentioned Kubernetes; there is some tooling, it sometimes works, sometimes doesn't, nobody knows. We just need a bit of time. It's not that STDIO is a better or worse option, that's a separate discussion. It's just that we don't have standard ways to do those things, because those things appeared yesterday. Like MCP, you know, it all started with Anthropic saying, hey, here's a standard protocol, cool, here are a few examples that we run in Docker containers locally, and Docker containers are actually the better option. Most people say, ah no, I'm going to run it with npx directly on my machine. But you know, only a few months have passed, half a year.
Darin
00:22:35.977
And it's going to keep changing, I think, but I think we're headed in the right direction. What do you think some of the real pros are? We've talked around some things, but let's hone in on this. What are the real pros of having an MCP server to help you do your work within the agent that you're using?
Viktor
00:22:53.902
The real pros come when they're specialized. I mentioned GitHub before: I don't need an MCP for GitHub. The gh CLI works perfectly well, and the model knows the gh CLI perfectly well. It's a commonly used tool, it has seen the whole internet, it knows exactly what to do with it. It works well; I don't need anything else. And in this context I'm talking about MCPs running locally, just to be clear. Same with Kubernetes: it knows how to use kubectl, so why do I need an MCP for that? Where we do need MCPs is when they do more than mirror known tools and APIs one-to-one. If I stick with Kubernetes, you would say: hey, you have the knowledge of how to run an application, maybe knowledge based on my company policies, and I use that MCP, which will not just invoke kubectl blah blah blah, but will assemble it in a certain way that works for us. If we have additional logic beyond just kubectl, then it becomes something very useful. The way I tend to describe the pattern I feel we should be following is that MCPs should be architected, or designed, around intents. My intent is not to execute kubectl get, and my intent is not to execute kubectl apply. My intent is: I want to deploy an application, or I want to see what the heck is going on in my cluster. Something that is more focused on what we normally ask for, rather than a replication of the tools that already work pretty well and which agents can use perfectly well. I would even go as far as this, and I did those experiments: your agent is statistically more likely to do the right job, or the right set of tasks, in your GitHub repository with gh than with the GitHub MCP, because it knows gh better. But, to stick with GitHub, if you could say, I finished developing, that would be my intent. That would be sent to an MCP that would push all the changes that are not pushed, create a pull request, wait for reviews if there are any on that pull request, wait for the CI pipelines to finish executing, evaluate those outputs, and then merge the pull request. Then we are talking about something that is very useful, that goes way beyond the capabilities of a CLI.
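A sketch of that intent-oriented tool, "I finished developing", as one MCP tool that strings together git and gh CLI steps. It assumes the gh CLI is installed and authenticated, and leaves out error handling, review handling, and any company-specific policy checks:

```python
# Sketch of an intent-oriented MCP tool: "I finished developing".
# Assumes the gh CLI is installed and authenticated; error handling,
# review handling, and company-specific policy checks are left out.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("delivery-workflow")


def run(*cmd: str) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


@mcp.tool()
def finished_developing() -> str:
    """Use me when the user says they are done developing a change."""
    run("git", "push", "--set-upstream", "origin", "HEAD")   # push unpushed commits
    run("gh", "pr", "create", "--fill")                      # open a pull request
    run("gh", "pr", "checks", "--watch")                     # wait for CI to finish
    run("gh", "pr", "merge", "--squash", "--delete-branch")  # merge once green
    return "Changes pushed, pull request created, CI passed, and PR merged."


if __name__ == "__main__":
    mcp.run(transport="stdio")
```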
Darin
00:25:54.927
I think one of the pros, and this is specifically for remote MCP servers, though it might apply to standard IO too, is this: let's say today you're using Claude Code or you're using Cursor, and then the next big thing comes out and now you're ready to move to that other thing. If you had it locally, that standard IO setup may not work with the new agent because of whatever weird bindings. It could be anything. It may be fine, but it may not be. Whereas with an HTTP-based remote MCP server, that's probably going to be fine no matter which agent you use. Is that reasonably true?
Viktor
00:26:35.175
I see it slightly differently. To me, the question is not really whether it's this specific transport mechanism or that one within the MCP protocol. Those are technical details we can discuss: what is more effective, what is better, and so on. The most important thing in this case is that we all rallied around one protocol. It's similar to OpenTelemetry. We can discuss whether OpenTelemetry is better or worse than this or that, but that's not the most important discussion. The most important thing is that everybody talks OpenTelemetry. Even if you find me a better way to do X, Y, Z than with OpenTelemetry, I'm still going to get more value with OpenTelemetry, simply because I know that my tools and your tools and his tools and their tools are all designed, or updated, to talk that protocol. And that's the real value of MCP. There were many other attempts; I don't even remember their names, that's how important they are. This is the one that, for some reason or another, sticks. Almost every agent talks MCP right now, and almost every company that wants other agents to talk to their software publishes MCPs. So it's a protocol that everybody adopted. That's what really, really matters there.
Darin
00:28:20.009
Is that really a con, though? I don't think that's a con. I mean, it's fine, right?
Viktor
00:28:24.014
I mean, there are many things missing. I can tell you the thing I was missing the most, and I'm still missing it, but it's now in the spec, although nobody, or very few, have implemented it: synchronous conversation. The way it works normally, without that new addition to the spec, is that the agent sends something to the MCP, the MCP responds back, and that's it. It's like HTTP. What I'm missing is: no, no, wait. If I send you another request, you've just forgotten everything we spoke about, unless my MCP has stateful storage and so on. Now there is the ability to actually talk back and forth within the same session, let's say, until both parties agree: okay, now we are done. That greatly simplifies the design of MCPs, because with the one I'm building right now, I need to create session IDs and store stuff in JSON files so that if another request comes in related to the same thing, I can retrieve the previous state and context and continue where we left off. That was very painful, but that's now solved. The only thing missing is for Claude Code, Cursor, and the rest to implement it. I think VS Code is the only one that has it right now.
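This is roughly the bookkeeping Viktor says he has to do today in the absence of synchronous conversation: persist state per session ID in a JSON file so a later request can pick up where the previous one left off. The directory and field layout are made up for illustration:

```python
# Rough sketch of per-session state stored in JSON files so an MCP server can
# resume a conversation across otherwise stateless requests.
import json
import uuid
from pathlib import Path

STATE_DIR = Path("/tmp/mcp-sessions")
STATE_DIR.mkdir(exist_ok=True)


def start_session(initial_context: dict) -> str:
    """Create a new session and persist its initial context."""
    session_id = str(uuid.uuid4())
    (STATE_DIR / f"{session_id}.json").write_text(json.dumps(initial_context))
    return session_id


def resume_session(session_id: str) -> dict:
    """Retrieve the previous state and context to continue where we left off."""
    return json.loads((STATE_DIR / f"{session_id}.json").read_text())


def update_session(session_id: str, new_state: dict) -> None:
    """Merge new information into the stored session state."""
    path = STATE_DIR / f"{session_id}.json"
    state = json.loads(path.read_text())
    state.update(new_state)
    path.write_text(json.dumps(state))
```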
Darin
00:29:44.493
So again, it's new, so not everything is there yet. We're waiting on the agents to catch up and implement all those new features, or not necessarily new features, but the spec features that are really coming into use.
Viktor
00:29:59.313
That's where the power of the people comes in. Like we mentioned earlier, MCPs can serve prompts. I think it's one of the simplest and most useful features, because it's very simple to implement and very useful. It works almost exclusively in Claude Code, at least as of a couple of weeks ago, the last time I checked. Then I went to Cursor and, okay, there are issues opened by people: where is this, we want this. Then I went to some other agents and none of them had implemented it. Give it a couple more weeks and everybody will have it.
Darin
00:30:32.250
I have to admit, I like having prompts in my MCP server, especially for things that I'm doing over and over and over again.
Viktor
00:30:39.345
Now think about it on a company level. Those prompts are really our internal processes. They should be shared by everybody, on all projects, and so on.
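A sketch of what serving one of those shared, company-level prompts could look like, assuming the Python MCP SDK's prompt support; the prompt wording and parameter are illustrative:

```python
# Sketch of a company-level prompt served from an MCP server, so the process
# lives in one place and every agent that connects gets the same instructions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("company-processes")


@mcp.prompt()
def finish_feature(ticket_id: str) -> str:
    """Standard end-of-feature checklist, maintained in one place for everyone."""
    return (
        f"The work for ticket {ticket_id} is complete. Push all unpushed commits, "
        "create a pull request that references the ticket, wait for CI to pass, "
        "and summarize the changes for the reviewer following our internal template."
    )


if __name__ == "__main__":
    mcp.run(transport="stdio")
```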
Darin
00:30:49.845
They're effectively a runbook, a standard process document, uh, what are those called? I don't even remember anymore. All those things, and you just don't have to think about it.
Darin
00:31:03.855
And guess what? When the process changes, somebody goes and makes the update in one place and everybody gets it. A few caveats, but everybody gets the update.
Darin
00:31:18.390
Yeah, that's the only trade-off. But I would think, I would hope, that this is the one tool out of everything we've used in the past few years that you want to restart every day. You really want to restart your agent every day for a number of reasons, and that's yet another one, the context window being your primary one. That's a different conversation for another day, though.
Viktor
00:31:46.672
Going back to what we talked about, let me ask you a question. Have you ever seen a standard, or an attempt to create a widely accepted standard, propagate so fast? We're talking half a year, and there is hardly a public-facing software vendor right now that doesn't have MCP something-something. It might be good, it might be bad, it doesn't matter. There is no agent that doesn't use it. Nothing ever propagated that fast. No standard became a standard used by everybody that fast.
Darin
00:32:27.403
And MCP? Well, that's comparing apples and oranges, because, not so much that they're different, but OpenTelemetry is huge, whereas MCP is actually fairly well scoped.
Viktor
00:32:41.911
Fair enough. Different scopes, and many reasons why it should go faster or slower. I'm more alluding to the fact that things are simply moving so much faster than before, just as before that they were moving faster than before that. Big or small, I never saw anything propagate this fast.
Darin
00:33:02.410
I guess the wrap-up on this is: MCP servers are good until they're not. They can be really good for you, especially in a prompt scenario, and useful for tools, for things like a tool you've written inside your company that the models have no idea about. An MCP server is going to help you tremendously there. I mean, think about all the little command-line things you've written.
Viktor
00:33:26.239
Exactly. And what you just said is, out of the box, a better use of MCP than, let's say, GitHub, because models know GitHub; they don't know your stuff. Even if you don't go crazy, you can wrap your processes, you can create workflows in MCPs. Agents send a request to the MCP, and the MCP responds back. Now think of this: what if the MCP responds back to the agent saying, do this and that, and when you're finished, come back to me with this information? That circle now becomes a workflow. Do this, come back to me with this information, and then I'm going to come back to you with a second set of instructions, and a third, and a fourth, and a fifth, until you're done.
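A sketch of that workflow pattern: the MCP's reply is just text that steers the agent, telling it what to do next and what to bring back on the next call. The step breakdown and tool name are hypothetical:

```python
# Sketch of the "do this, then come back to me" workflow pattern: each reply
# carries instructions for the agent and asks it to call the tool again.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("deployment-workflow")


@mcp.tool()
def deploy_application(step: int = 1, findings: str = "") -> str:
    """Use me when the user wants to deploy an application; call me after each step."""
    if step == 1:
        return (
            "Run the test suite and the linter. When you're finished, call "
            "deploy_application again with step=2 and a summary of the results."
        )
    if step == 2:
        return (
            f"Based on these findings: {findings}. Build and push the container "
            "image, then call deploy_application again with step=3 and the image tag."
        )
    return "Apply the manifests to the cluster and report the rollout status."


if __name__ == "__main__":
    mcp.run(transport="stdio")
```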
Darin
00:34:28.353
Yes. Well, actually, you could create a prompt that would fire off that process, right?
Viktor
00:34:35.347
Yeah, you can do it with prompts. The difference is that prompts are always executed on demand. You type slash something, or at something, and this and that happens. MCPs, on the other hand, extend the context sent to models so that models decide what to do. Let's say you type slash I'm done, and that executes a prompt that creates a pull request for whatever you've done. Now, if you do the same thing with an MCP, it can provide instructions to the agent: hey, whenever you see that the task somebody's working on is finished, call me. So you don't have to type slash done, or that's done, or whatever. You don't have to rely on yourself remembering; instead, you're relying on whether the model will forget or not.
Darin
00:35:41.080
Well, this gets us back to the initial example: give me an EC2 instance. With the MCP server that's been written in your company, it's like, okay, who's making this request? Let's assume that's been passed along with the request. Oh, that's Viktor. Viktor works for Pam. Pam reports into engineering. He's working on this project, so let's open up a ticket for this. Okay, here's the ticket. And all that stuff just magically happens until, finally, you've got your EC2 instance, and all the paperwork is done, because you've written it into the process.
Viktor
00:36:23.065
Here's an even better version, the one I was alluding to before when I was talking about the inexperienced consultant who just does it. That MCP, when you say give me an EC2 instance, can evaluate that intent. It says, Darin, that's not enough, and comes back immediately with instructions to the agent: ask Darin these follow-up questions, whatever, which region you want, blah blah blah, and then call me again with the answers to those questions. Because then you control the flow. Assuming that's your MCP, the one you designed, you can control the flow. You can decide what is enough, what is not enough, what is missing, what goes after what, and so on and so forth. You're effectively creating workflows, while with models alone you are in free flow.
Darin
00:37:19.792
And when I answer back with, what's a region? The response will be: call somebody who knows something. That's what your response would be.
Viktor
00:37:28.278
Here's where it gets really interesting. What if that MCP, and I'm going to stick with give me an EC2 instance, has an agent connected to a model baked in? The MCP receives that request and passes it to its agent with instructions: okay, I got this intent from Darin, is that enough? Do you have all the information? Because when you went directly to a model, you never said, give me an EC2 instance, do you have all the information? You never said that, and that was your mistake. But since you control the flow, you can pass the same intent to the model connected to the MCP, instead of the model used by your client agent, and provide additional instructions: is this enough? Did the user provide this and that? Does it meet our company policy? And so on and so forth. And it doesn't even have to do the work. It can go back to the client agent and say, okay, this is now the complete information, now you can send this to whomever you wanted to send it to before you started using me. And now the model you were using from the start will actually give you the correct answer, because the MCP just augmented the context.
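A simplified sketch of that gatekeeping idea. Here the evaluation of the intent is plain code; in the version Viktor describes, it would be delegated to a model sitting behind the MCP. The field names and the policy rule are made up for illustration:

```python
# Sketch of an MCP tool that checks whether an intent is complete and
# policy-compliant before the client agent's model acts on it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aws-intent-gate")

ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}  # hypothetical company policy


@mcp.tool()
def create_ec2_instance(region: str = "", instance_type: str = "", owner: str = "") -> str:
    """Use me when the user wants an EC2 instance; I validate the request first."""
    missing = [name for name, value in
               (("region", region), ("instance_type", instance_type), ("owner", owner))
               if not value]
    if missing:
        # Send the agent back to the user with follow-up questions.
        return "Ask the user for the following before continuing: " + ", ".join(missing)
    if region not in ALLOWED_REGIONS:
        return (f"Region {region} violates company policy; ask the user to pick "
                f"one of {sorted(ALLOWED_REGIONS)}.")
    # The intent is complete: hand the confirmed request back to the client agent.
    return (f"The request is complete and policy-compliant: {instance_type} in {region} "
            f"for {owner}. Proceed with it as originally intended.")


if __name__ == "__main__":
    mcp.run(transport="stdio")
```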
Darin
00:38:54.844
Is your head spinning yet? It should be. If your head's not spinning yet, you need to come on and be a guest, because all of this, especially that last example he gave, putting agents behind the MCP servers, is a whole other set of issues. Unfortunately, what you're recreating there is the monolith-to-microservices architecture shift, yet again.
Viktor
00:39:23.319
It depends what you put into it, but yeah. My favorite, to be honest, is an MCP that is a combination of code and AI. I perform some operations, let's say in Kubernetes: okay, the user just asked for this, let me get the list of resources, and schemas, and so on. So I use code to retrieve the information, because that's more effective than using AI for it, and then I augment the context with it: here's everything you could ever imagine you might need. Here are the resources, here are the schemas, here are the permissions, here is everything. I'm using dynamically generated prompts that are passed to models. Sometimes I go crazy and the prompt, not the prompt I type, but the prompt generated by that MCP, ends up being like a hundred thousand tokens, one prompt, but with absolutely all the information you need.
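A sketch of that code-plus-AI combination: gather cluster facts deterministically with kubectl, then assemble them into one large, dynamically generated prompt that the MCP hands back for the model. The kubectl invocations are standard ones; the prompt wording is illustrative:

```python
# Sketch of the code-plus-AI pattern: deterministic retrieval with kubectl,
# then one big generated prompt carrying everything the model might need.
import subprocess


def kubectl(*args: str) -> str:
    return subprocess.run(["kubectl", *args], capture_output=True, text=True).stdout


def build_context_prompt(user_intent: str) -> str:
    resources = kubectl("get", "all", "--all-namespaces", "--output", "wide")
    crds = kubectl("get", "crds")
    permissions = kubectl("auth", "can-i", "--list")
    # Everything the model could possibly need, in a single generated prompt.
    return (
        f"The user asked: {user_intent}\n\n"
        f"Here are all resources currently in the cluster:\n{resources}\n\n"
        f"Here are the custom resource definitions available:\n{crds}\n\n"
        f"Here is what the current user is permitted to do:\n{permissions}\n\n"
        "Use only this information to decide what the agent should do next."
    )
```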
Darin
00:40:27.987
So, if you're not using MCP servers yet, why not? And if you are using them, why? Head over to the Slack workspace, look for the podcast channel, and you should see something called episode 321. Leave your comments there.