Viktor 00:00:00.000 I think that current model subscriptions are a no-brainer. 200 bucks a month for whichever plan on Anthropic is still cheap compared to what you're getting out of it. And when I say what you get out of it, I don't mean in terms of tokens; I mean how much value you get back. I don't think I ever said that 200 bucks a month, per person at a company level or at an individual level, is a steal. I don't think I ever said that. But I'm saying it now: it's cheap.
Darin 00:01:40.743 Well, now that HashiCorp's been rolled into IBM, and we still have Pulumi and other companies existing, when is the hype cycle going to move towards reality for AI-driven everything in infrastructure?
Viktor 00:01:57.786 Just judging by the speed everything is moving at, probably very soon. It cannot take longer, right? Not long ago we hardly had any use of AI in any form; now, all of a sudden, everybody's using it in some capacity or another. I can only speculate that infrastructure, or anything else, will be the next wave. Because right now you can think of it as two waves of AI so far. The first was very generic, the ChatGPTs of the world: you go there and ask it a question or ask it to do something, and it's equally capable, or incapable, for everything and nothing. Then, with Anthropic and a few others that followed, we see more specialization. Those models and tools can still do anything you want, but they're slightly more specialized in software engineering. Can you use Claude Code outside of software engineering? Of course. But the focus is software engineering. What must be coming after that is more specialization: a system that is even better for only application developers, another that is better than whatever we had before for managing infrastructure, another one for database security, and so on and so forth. We will see specialized AI (and I intentionally say AI without using words like agents, just because we don't know the exact form it will take) that will be more capable in specific areas, and then we will start connecting those more capable ones into some sort of a mesh.
Darin 00:03:51.091 Wait, you're co-opting the word mesh now for ai.
Viktor 00:03:54.257 Oh yeah, yeah. I think there is already a term, agent mesh, or AI mesh. Anyway, if I'm the first one who said mesh in the context of AI, which I sincerely doubt, then please let me know. I will license it: AI mesh.
Darin 00:04:12.398 What do you think the real current state is? I mean, at the time of recording, AWS has just released their MCP server, or it's called control-something API, I can't remember exactly what it's called. Whether it's good or bad, I don't know. I haven't played with it yet. I mean, everything's still early.
Viktor 00:04:31.000 Right now, I'm less concerned about something like that being released by AWS, because AWS is public knowledge, and that means models are already pretty capable with AWS. AWS itself can make them even more capable and better, and so on and so forth. Amazing, right? But the areas that are more interesting to me address questions like: where is all the private data? Where is all the knowledge that we accumulated? I think the big missing piece is not public knowledge and making AI, or models, or agents better based on everything that exists in public. That will continue getting better, but it's not a big deal. Now it's more about how we create workflows. I don't know of many that work on workflows, and when I say workflows, I mean workflows between a human and AI. Here's a silly example. If you go right now to any of the solutions I'm aware of, excluding the one you mentioned from AWS because it was released just now and I haven't tried it, and you say, give me a database, you'll get it, because there is no workflow. It is just trying to please you. What we need to build in is some kind of workflow that will insist on getting additional information from you, squeezing it out of you, in a similar way to how good consultants or professional services operate. And I intentionally say good, because many don't. You know, when you go and visit a customer and the customer says, I need this, you have two options. One is to just do it. That's how models and agents operate right now. But a good consultant would say, okay, wait a minute, let's spend the day whiteboarding it and talking about it and trying to figure it out; maybe you have no idea what you need. And that's something I don't see right now from the solutions.
And I think it's one of the big stumbling blocks, because then people get wrong results, wrong outputs. They're not happy, and I wouldn't be happy either. If I come and say, Darin, I need this, and you don't ask me any follow-up questions, you just do it for me, and then it turns out I had no idea what I was looking for, I will be disappointed.
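The clarify-before-acting workflow Viktor describes can be sketched as a simple gate: the agent refuses to execute until the required details have been squeezed out of the user. A minimal illustration in Python; the field names and the request shape are hypothetical, not from any real product:

```python
# Minimal sketch of a clarify-before-acting workflow: the "agent"
# refuses to provision anything until required details are gathered.
# The required fields and request shape are hypothetical.

REQUIRED_FIELDS = ["engine", "size", "environment", "backup_policy"]

def handle_request(request: dict) -> dict:
    """Return either follow-up questions or an approved plan."""
    missing = [field for field in REQUIRED_FIELDS if field not in request]
    if missing:
        # Instead of immediately pleasing the user, push back.
        return {
            "action": "ask",
            "questions": [f"What {field} do you need?" for field in missing],
        }
    plan = f"provision {request['engine']} ({request['size']}) in {request['environment']}"
    return {"action": "execute", "plan": plan}

# First pass: the user only said "give me a database".
print(handle_request({"engine": "postgres"}))
# Second pass: all details squeezed out of the user.
print(handle_request({
    "engine": "postgres", "size": "small",
    "environment": "staging", "backup_policy": "daily",
}))
```

A real implementation would have the model generate the follow-up questions itself; the fixed field list just makes the gating behavior visible.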
Darin 00:07:08.858 That's the thing I've realized as well. I wish the default would be the opposite. I wish it would keep asking questions until I say, no, you've got it, go ahead and go. Because right now I have to tell it: what else do you need to know? Ask me whatever questions you need. And the times I forget to include that after prompting, it's like, well, duh, I didn't want you to do that yet. Give me a few more minutes; let's talk about it a little bit more.
Viktor 00:07:40.443 You know, it's an all-knowing thing. You ask for it; it has all the information in the world and can give it to you. It's your problem that you're a dummy for asking a dummy thing. That's what it's doing right now, and that's what many people in services are doing as well. Wrongly.
Darin 00:08:00.910 Where do you think the hype is right now? Where's the real hype? What do you think the vendors are wanting people to believe?
Viktor 00:08:09.251 Uh, what vendors want people to believe is, is just ridiculous. It's so much silly marketing. Kind of every single new model is going to change the world. a good example is recent. At the time we are recording, not necessarily at the time you are listening to this release or GT five. Is it better than others? Yeah, probably. Is it? Significantly. No, it's not And it's normal because model server are being released every week, everyone, every next one is probably potentially better than the previous one and so on and so forth. But the marketing is kind of, this changes the everything. And then it's given to influencers, you know, who get access before it's even released. And then they of course, kind of handpicked and everybody Yeah, this changes everything. it's, it's overhyped. Overhyped. It's actually, it's hard to explain. It's, yeah, I think it's under hyped and overhyped at the same time. Expectations are overhyped, but understanding of what we can get from it is under hyped. I feel that people are still not aware how much they can get from it.
Darin 00:09:19.930 That's my question, I guess, stated differently: what are the realities today versus the marketing claims being thrown at us?
Viktor 00:09:29.436 The reality is that there is a very limited number of people, to begin with, who are proficient with it. And when I say proficient, I don't mean data scientists. Those are of course in huge demand, but I'm not talking about the perspective of, hey, I can create an even better model; I'm talking about the user perspective. There is huge demand and a very small number of people who actually know what's going on right now. Of course, if you ask people around, they say, oh yeah, I'm very proficient with AI, I'm using Cursor. But when you dig a bit deeper: do you really understand how prompt engineering works? Have you worked with vector databases? Have you worked with graph databases? Have you created your own agents? Then we are almost at ground zero, which is normal, because this is all new. But it's also something that prevents companies from seeing the real benefits. Imagine this: if I tell you, hey Darin, you should move from on-prem to AWS, and I show you S3 buckets, and that's where it ends. Is it beneficial for you to have S3 buckets in AWS? Maybe, maybe not, I don't know. Is that really what should hook you in and give you confidence that AWS is the right thing? No. There's much more to it than S3 buckets. Same thing with AI today.
Darin 00:11:11.675 What are those foundation pieces? I mean, you went there: S3 is a foundational piece for anything running in AWS today. What are the foundational pieces that we're going to need, that are real today, that we should be thinking about?
Viktor 00:11:27.763 There are foundational pieces at the individual level and the organizational level. At the individual level, the easy, low-hanging fruit is to get really familiar with writing prompts. Spend time with it; it makes a huge difference. But that's something everybody knows, more or less. Then start creating your own agents, and then we are talking. That's the real deal: you have your own agent that actually does something. That will be your CV, by the way. I think people will be judged in future interviews on how capable they are of doing something by bringing their own agents. But those are still relatively easy. The real challenge is at the organizational level, because that's a much bigger scale. That's when we start thinking: okay, how do we actually make this work for 10,000 people? What do we do with all our documentation? What do we do with our Zoom calls? How do we feed AI with information? How do we securely enable people to use that AI? And that's not a public AI anymore; that's our own AI. Maybe we start building our own models. That's expensive, but who knows. Maybe a bit of reinforcement learning. At an organizational level, we are getting into something that is not doable in a day or two. It's a huge investment, like a massive investment.
Darin 00:12:51.502 Let's say we're not wanting to make that investment, and let's focus specifically on infrastructure right now. And I'm going to go ahead and include our laptops and desktops in the infrastructure, because I think that has been left out of the conversation a lot. It used to be that, as an engineer, as a developer, we would say: give me a 64-gig machine, a 128-gig machine, all the things I need to make that work, lots of disk. Do I even need that anymore?
Viktor 00:13:26.480 As an individual or as an organization?
Darin 00:13:29.903 As an individual. Okay, as an individual in an organization.
Viktor 00:13:34.175 As an individual? No. I strongly believe, even without AI, that we are going into a dumb terminal situation, where everything is a service somewhere. Whether that's somebody's servers in your company, or AWS, or whatnot, it doesn't matter. And models are even proving that. You will see people say, hey, I can run this model, I can run that model. But look at the difference in capacity for models that can run on normal hardware, let's say some MacBook, not me going crazy and buying a real server with I don't know how many GPUs. Those models are ridiculously bad if you try to run them on serious tasks. On silly tasks, they're good. So I think that's only accelerating the move towards: this machine is a dumb machine, a terminal that accesses services from somewhere else, including AI. We won't need to think about hardware heavily there. But at a company level? Oh, okay, I need stuff. I need a place to run all that. Imagine when you start creating your own models. That's going to be tough.
Darin 00:14:50.944 Oh, you're going to have to have a few hundred million dollars, if not billions, to build your own models from the ground up.
Viktor 00:14:57.981 Yeah. But
Darin 00:14:59.584 period.
Viktor 00:15:00.936 Now, for many people, common models will be okay. But if you're in something like, I don't know, bioengineering, pharmaceuticals, and so on and so forth, they'll eventually start building their own models, if they aren't already. Those will be highly specialized.
Darin 00:15:23.203 But again, that's the key point there: highly specialized, which typically equates to higher revenues that can justify the spend.
Viktor 00:15:32.415 Yeah, but also the cost will be dropping over time. We see that inference costs are dropping constantly. So the cost to run models is going down, or at least staying at the same level while you get much more out of it. So I don't think that will be a problem in the long run. But it is still a problem now, nevertheless.
Darin 00:15:57.297 Well, the real problem is people who have figured out how to build a fleet of agents to do work for them 24/7. The model providers don't really like that a whole lot unless you're actually paying for tokens.
Viktor 00:16:10.899 Yeah. I mean, I'm not worried about them. The revenues of providers like Anthropic are increasing at insane rates, so they'll be fine.
Darin 00:16:25.661 Yeah, but when you pull back the curtain, just because their revenues are good doesn't mean that their cash outlay each month is that good.
Viktor 00:16:34.467 Yeah, but you know, you don't want to be earning too much, right? Because that means you're not growing fast enough, at least for companies at that stage. Think about Amazon from that perspective: even Amazon is still not really earning money. But revenue, one way or another, is increasing, and inference cost is decreasing. If you look at any graph of inference cost, it's going down.
Darin 00:17:04.741 Where do you think we're at today in build versus buy?
Viktor 00:17:08.762 It depends for what. A model you're buying, for sure. You need to be very, very special to create your own model, and even if you do, it will be an addition to an existing model. That's the top of the chart: don't go there, for the majority. Now, if you're talking about building agents, yeah, I think companies will be building their own agents, a hundred percent, instead of buying agents. When I say build, just to be clear, it's like building your own Linux distribution: you're still basing it on top of a kernel, on top of something. I'm not saying build as in open a new project in VS Code and start literally from scratch. But companies will be building their own agents, so that's the other extreme compared to models. That's a build for me, a clear build.
Darin 00:18:04.503 And that's reality today, right? It is, I'm going to air-quote, easy to build an agent compared to six months ago.
Viktor 00:18:14.690 Depends how you define building an agent. Is an agent a lot of code or not a lot of code? Actually, agents are not that heavy on code. It's code that accepts your input and passes the input to a model. The model responds with instructions about what to execute, and then the agent executes that and passes the output to the model again. It's back and forth: here's the output, tell me what to do next, and so on and so forth. Now, if building an agent means changing the system prompt, then yeah, everybody's building agents. If by build we mean really building an agent, the code part, not many are doing it today still.
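The back-and-forth Viktor describes, input to the model, model returns an instruction, agent executes it, output goes back to the model, fits in a few lines. A sketch with the model stubbed out; nothing here is a real LLM API:

```python
# Sketch of the core agent loop: pass input to a model, execute what it
# asks for, feed the result back, repeat until the model says it's done.
# The "model" is a stub; a real agent would call an LLM API instead.

def fake_model(history: list[str]) -> dict:
    """Stand-in for an LLM call: returns the next instruction."""
    if not any("uptime" in entry for entry in history):
        return {"action": "run", "command": "uptime"}
    return {"action": "done", "answer": "system looks healthy"}

def run_tool(command: str) -> str:
    """Stand-in for real tool execution (a shell, kubectl, an MCP call)."""
    return f"executed: {command}"

def agent(task: str) -> str:
    history = [task]
    while True:
        step = fake_model(history)          # model decides what to do next
        if step["action"] == "done":
            return step["answer"]
        output = run_tool(step["command"])  # agent executes the instruction
        history.append(output)              # ...and feeds the output back

print(agent("is the server healthy?"))  # system looks healthy
```

The loop itself is the whole "agent"; everything interesting lives in the model call and the tools it is allowed to drive.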
Darin 00:19:05.644 If an organization wanted to get started today, what are the infrastructure pieces they're going to need? I started out the whole episode with HashiCorp and Pulumi and others; I left out your company names too. That's one angle. But what is the real infrastructure an organization is going to need? I think you've sort of led into it right there with build versus buy. I think if you want good quality, you're going to go to services, for now, for the model. You're going to be going to services for the Claude Codes of the world, the Cursors of the world. That's where you're going to head. But what other things are we going to need internally? Again, you said we don't need 64-gig laptops anymore.
Viktor 00:19:53.970 So let's start with today's checklist for using AI, and not necessarily that you have to use all of it. You need a model. You need an agent. You might want to use some MCPs, you might want a vector database, you might want to do embeddings, and you might want a graph database. There are other pieces, but that's the core. Now, you can get very far with a model, an agent, and maybe a few MCPs. And when I say model and agent, I mean a public model and a public agent, like Claude Code, plus a few MCPs, without building anything. That's probably a very good first step. Then companies are going to start thinking: okay, you cannot continue running MCPs on your laptop. We have security concerns, we have other concerns. MCPs need to be on some server, cluster, whatever. That's probably the first step most companies will take. So the first step was Claude Code, a model, maybe a Kubernetes MCP, whatnot; the second step is that they'll start running those MCPs somewhere more secure, and everybody's clients will connect to them. Then they're going to start building custom agents, whether custom means a custom prompt, or code, or both; that's open for discussion. And then they will have to figure out where they're running that. And that's starting to look serious: we have a number of agents, we have a number of MCPs, they're all hosted somewhere. Agents are probably the first thing we are really building, just to be clear. And those agents are sub-agents of Claude Code. You'll probably not be using that custom company agent directly; it's more like, hey Claude Code, I need to do this, go over there and use the one Joe made for the finance department. That's the first build. And from there on, it's a rabbit hole: databases, whatnot.
Darin 00:22:05.393 The storyline you just laid out is the same storyline we've heard for decades. My parallel to that is: have a local spreadsheet that became an Access database that became a SQL Server database that became an at-scale database with Oracle.
Viktor 00:22:21.969 Correct. The timeframe is probably the biggest difference, right? You were probably running Excel, sorry, Access, for like 10 years once you got it.
Darin 00:22:34.980 Right, and until you moved it off and you needed more because of security and you brought that up in your point, we can't run local MCPS anymore. the MCP servers, I've been writing, I'm not even running locally. I, when I build them, they just run externally just 'cause I just don't even wanna deal with it. 'cause that way I'm exercising that part. It's like standard IO is easy. I need a little bit of hard and a little bit of hard is getting it run externally and behind a proxy and all the other things.
Viktor 00:23:07.075 Exactly,
Darin 00:23:08.122 It is not too hard, because it's a parallel. It's just like putting an app server behind a web server, because guess what? It is an app server running behind a web server.
Viktor 00:23:17.634 But then we start talking about very, very serious security problems occurring. Because if you're running an agent or an MCP or whatever locally, let's say you're doing something in a Kubernetes cluster, you're doing it all with kubectl and the kubeconfig that I gave you, and that kubeconfig can operate only in that namespace. But once we start running it at the organizational level, which is happening or will be happening soon, there is zero doubt about that, then it becomes a very different story. Because that agent needs to have admin permissions, since everybody's accessing it at the same time, while still maintaining a level of security depending on who is using it, when they're using it, and so on and so forth.
Darin 00:24:06.839 That's the hard part.
Viktor 00:24:08.521 That's the hard part, and it's going to be critical in the upcoming days, weeks, months.
Darin 00:24:14.926 Hours. Maybe. So. Maybe hours.
Viktor 00:24:18.783 By the time we finish this recording, there will be a solution.
Darin 00:24:25.066 Exactly. Now, you speak of a solution. Halfway joking, halfway not: you've open-sourced a project. How do you name it? How do you say its name?
Viktor 00:24:36.873 Oh, the short version is dot-ai. The long version is DevOps AI Toolkit. The name came about because I'm too lazy to come up with a name. I've been using DOT as a short for DevOps Toolkit, and I just slapped AI over there. It's a playground.
Darin 00:24:54.166 ai, right?
Viktor 00:24:55.353 and dot dots how many, whatever number of dots you want.
Darin 00:25:00.376 It's DO t.ai. Um. Uh, you can find this out on Victor's GitHub. I'll link up to it. What's the purpose of it? And there's also a YouTube video. I'll try to remember to link also down below.
Viktor 00:25:14.478 So, I want to prove, and I'm not there yet, just to be clear, that it can all be configured and set up in a way that when a person comes and says, deploy an application, or give me a database, or I want a cluster, or whatever, it will be done reliably, safely, following the company's best practices, all the shenanigans, so that you can trust it completely, or almost completely. I'm not there yet, just to be clear. I've been going through different phases, trying to solve one problem after another. I'm not even sure whether the project will survive; I might throw it in the trash once I figure it all out, you know, now I know. It's my playground. But that's the mission: I want AI to be capable of doing the right thing, and you to be incapable of making it do the wrong thing.
Darin 00:26:21.042 That seems like a reasonable goal for any software we write. We want it to do the right thing, and we want to make sure we can't mess it up.
Viktor 00:26:30.568 Yeah, but to be honest, I haven't even started with the mess-it-up part. Most of the time I've been focused on the doing-the-right-thing part, which might sound silly, but it's actually extremely complicated. You know, oh yeah, Claude Sonnet knows everything about everything. Cool, but it knows nothing about my cluster, and it cannot scan my cluster all the time. There are silly problems. Imagine I have a Crossplane composition called SQL that manages PostgreSQL in AWS or Google Cloud or whatnot. You cannot instruct AI out of the box; imagine we're just in Claude Code: hey, you're connected to the cluster, create a database, create a database in Azure. It'll never even consider that composition I mentioned. It'll look at it and say, oh, there is something called SQL that has no relation to Azure.
Darin 00:27:39.009 even though that is the right thing that you wanted it to do.
Viktor 00:27:42.130 Yeah, it's definitely the right thing. And if I could force it to dig deeper, check the schema, and completely analyze it from top to bottom, it would come to the same conclusion: that's the right thing to do. But then, checking the schema for a thousand different resources takes, what, an hour, every single time, and it'll still probably not find it. Or it'll find too many and then not know which ones to use. And on top of that, it still doesn't understand the patterns and best practices and whatnot. But I'm not talking much about it because it's really just a playground.
Darin 00:28:24.948 but since it is a playground, that's actually good for people that are wanting to just have a playground too. It's
Viktor 00:28:30.830 If, if you wanna try, check it out. Yeah, I mean it, I think it's very cool. So just being cautious, kind of. Don't use it in production, please. Kind of like if, if you want to see, want to see kind of where we might be going in the direction, stuff like that. I think it's useful to check it out. Just use it in production. No SLO behind it.
Darin 00:28:51.899 No, it's MIT licensed; it's just, don't use it in production. Let me restate that one more time: don't use it in production. Could it turn into production at some point? Who knows? That's the only thing you could say.
Viktor 00:29:10.591 Almost certainly not. But I see blueprints that might be taken out and converted into real products.
Darin 00:29:26.422 Okay, so blueprint's concepts, so the one concept, the concept to me that I'm really living on right now is having prompts served from MCP servers. That way if, if you are doing. Large scale refactoring, you got a couple of people that are working on it, or you have multiple agents working on it. Having those people or those agents use the same prompts to go through a mass refactoring, is useful. And that is where you would not want to have a local MCP server running on each of the things that would be just silly to keep maintained. Whereas if it's just you've got it wired up to an external HTT P endpoint. Then it's, it's easy. One small thing on at least cloud code if you are using that. Once you start up cloud code and you've got wired up to hit an MCP server and it's pulling in prompts, it does all that great. But if you update the prompts on your MCP server cloud code will not reload those until you shut down cloud code and bring it back up.
Viktor 00:30:26.351 Yeah, that's
Darin 00:30:27.104 Today, Today, the way we're at, at the time, we're recording this the way it works,
Viktor 00:30:31.726 Yeah. That,
Darin 00:30:32.234 for all the others as well.
Viktor 00:30:34.301 yeah. Yeah. They're all, they're all loaded. So the way it works in general, and this is, this is actually not related to prompts from SAP, the same, what you just said is equally valid for slash commands or crucial rules, right? You change them. They're not taken into the effect until you restarted. That's because. then a session starts at least a new session, CPS and commands and not are loaded into memory so that it knows what to do with them. Especially cps because basically, you know, when you some MCP and say, oh, create an application in, in Kubernetes, the reason why my agent uses CP is because that MCP when, has a description, it says, Hey, whenever you want, do something in Kubernetes. Use me, use me, me, me, me, me, me. and that's loaded. That, uh, startup, it's almost like a. Part of the the system context.
Darin 00:31:30.149 So to recap real quick from an infrastructure standpoint, if somebody's just getting started as an individual today, good machine, a good agent, cloud code cursor. Take your pick. There's others out there, and a good internet connection and some money to throw out a subscription to Cursor or COD or somebody else.
Viktor 00:31:54.071 I think that, there are very few things that, uh, I feel that I can recommend as a no-brainer in terms of expense. And I think that, current model subscriptions are a no-brainer. 200 bucks a month for, whichever plan on Anthropic is still. Compared to what you're getting out of it? Not when I say that you get out of it. I don't mean in terms of token or tokens or stuff, like kind of how much value you get back. And I, don't think that I ever said that 200 bucks a month on a company level per person or individual level is a steal. I don't think I ever said that. Yeah, but I'm saying it now. It's, it's, it's cheap.
Darin 00:32:41.299 It is still not what I want to pay, so I'm still holding off. I still have the $200 a year. But I hit limits. I get put into timeout every day, which is okay 'cause that gives me a chance to go do something else.
Viktor 00:32:55.206 Now as individual, I think that that might be too high of a price. But now what I was saying before, as a person, you're working eight hours a day. Company, company paying those 200 bucks, it's still
Darin 00:33:11.339 Do the math. If it's $200 a month. For a person and you're working on average $20 a day. $20 20 days a month, that's $10 a day. So instead of me bringing in lunch for everybody, I get them CLA subscription
Viktor 00:33:34.421 exactly.
Darin 00:33:35.899 and they can go buy their own lunch.
Viktor 00:33:37.221 per person, right? It's two coffees.
Darin 00:33:40.039 It's two coffees per person.
Viktor 00:33:41.151 a country and stuff like that. But in states, two coffees, right? One Starbucks.
Darin 00:33:46.299 One, one Starbucks probably. Yeah. So that's, but see that's the hard thing for me. It's like, okay, even as an individual working for a company, it's hard to not just want to pay for it outta my own pocket because of the actual performance value that I get out of it. it's, I mean, it's like I want a paper. I just can't do it right now. But I'm okay paying $200 a year 'cause Okay. I, because I'm not sitting on it 24 hours a day. If I, if I was coding 24, um, a full eight hours a day, it would be absolutely worth it to me.
Viktor 00:34:22.388 yeah, the, my previous comment was based on, yeah, you, you are, you're using it eight hours a day. Uh, in that scenario, it's still.
Darin 00:34:31.122 you might disagree. And that's okay.
Viktor 00:34:33.285 especially if you look at what's happening right now. Uh, the G PT five models are like five times cheaper.
Darin 00:34:41.353 Right,
Viktor 00:34:41.925 are going down.
Darin 00:34:43.181 so as the prices go down, things may get better or we may not have as many. Of course, they're trying to also recoup probably because of the bad actors that have been using a little too much.
Viktor 00:34:57.250 Yep.
Darin 00:34:58.288 So what do you think? Are you all in on ai? Are you toes in? Are you pretty much set up that basic infrastructure? You got a, a good machine, not a great machine, but a good machine. You got a subscription to something, Claude Cursor, whatever, and something to work on. I think that's the baseline that you're gonna need for these next months. I was gonna say years, but I don't think that matters anymore. I think it's months. I think you've got to either get on board, figure out how it works for you or not, and then that's okay too. It's your choice. Head over to the Slack workspace. Look for episode number three 19 in the podcast channel and leave your comments there.