Ben
00:00:00.000
Shadow AI is going to be in everything next year, or this year. Literally everyone's gonna have AI in their stuff. So just because you're a security leader, if you have a platform, assume it's probably gonna have AI in it. And then that means: where's your data going? It's in there.
Darin
00:01:22.900
Viktor, you and I have spent how many years doing this? Not the podcast, but just working in tech, and working specifically in the stupid, silly-named thing called DevOps that we have right now. Right? DevOps Paradox. It's like...
Darin
00:01:43.894
And we were doing this way before it was called DevOps. I mean, Patrick, thank you for naming it, right? It gave us a name. We already had a name. We were "shared services" before. Come on.
Viktor
00:01:53.684
Gimme that. Patrick, thank you for naming something that nobody understands, that everybody calls the same thing, but that everybody does completely differently.
Darin
00:02:01.707
Yeah. But see, alright, so let's assume DevOps is fine for a moment, right? We'll just make that assumption. It's been taken over, and I saw one the other day and I had to shake my head. I'm gonna go ahead and say it now and then we'll get to the real point. It was AI/ML DevSecOps, or DevSecFinOps.
Viktor
00:02:23.150
Oh, oh, oh, wait, wait. You forgot test. Repeat it again, but put "test" in it, and let's see whether we can do it.
Darin
00:02:31.664
Yeah, I can't, 'cause I've already forgotten it. On today's show we have Ben Wilcox on from ProArch, specifically ProArch.com. Ben, how you doing today?
Darin
00:02:43.364
Ben is also the CTO and CISO, so I want to dig into some of that a little bit later. But I want to jump into one thing right now. We're poking at DevOps. It's easy to poke at DevOps, but then DevSecOps comes out, right? That was sort of the first iteration of that, and to me, DevSecOps is nothing but a lie. How far off am I?
Ben
00:03:06.806
I don't think you're too far off. I'm not sure it's a complete lie, but there are challenges when you try to force a developer to join the security team, and that's where I think DevSecOps has really had a huge problem.
Viktor
00:03:23.201
When you say "developer joins the security team," do you mean be a member of that team, or that the developer does their stuff securely?
Ben
00:03:32.126
I think it's about getting the mindset. I'll give you an example. I was in a call today, a security review of a platform. During the review, you could just see the sweat starting to bead on the head developer's eyebrows as he had to go through all of the security conversation pieces. And this is a retroactive conversation. It's not early on; it's toward an initial release, because it didn't happen earlier and there weren't great guidelines. This DevSecOps approach of trying to force security and development together in a manner that doesn't have good guardrails and isn't pre-planned really doesn't work too well.
Viktor
00:04:19.206
I feel that the problem in those situations is not really the guidelines, or the lack of them. At least in my experience, and this is not my main area, security is usually reactive: "I'm going to tell you what you did wrong." Instead of focusing on: "I'm going to embed myself into the processes, I'm going to create some services, I'm going to do this and that, so that your development is done as securely as possible, and, this is the important part, without you suffering in the process." In a way, it's almost as if there's a level of, I'm not sure whether to call it laziness or inability, to develop what needs to be developed to actually make all those things work simply.
Ben
00:05:17.534
You have a point there. One of the things that I noticed, and this was 20 years ago and it's still true today: when a developer's given too many choices, especially in areas where they don't have deep expertise, and we're not aligning on an architecture that has already been thought through for security, compliance, and operations, then we're leaving a lot to discretion. The developer may have their own ideas of how this gets done, and then we're in that reactive state you just described: now we're having to go back and retrofit, fix it. I personally would like to see more pre-planning on that side. Let's establish that secure foundation; that would be the standard environment the developer can then deploy within. It has policy, it has the right settings. They don't have to think about, hey, do I need to go turn on a security feature in Azure? What's the right configuration for the storage accounts? What do I put in front of my API from a management perspective? If we remove those types of barriers, I think we're already starting with less friction.
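The "checkbox" environment Ben describes can be sketched as a simple policy gate that runs before a deployment. This is a minimal illustration under assumed names: the configuration keys here are hypothetical, not any specific cloud's API.

```python
# Hypothetical guardrail: validate a storage configuration against
# pre-agreed security policy before a developer's deployment proceeds.
# Setting names are illustrative, not tied to any real cloud API.

REQUIRED_POLICY = {
    "public_network_access": False,  # no anonymous/public exposure
    "encryption_at_rest": True,      # encryption must be enabled
    "min_tls_version": "1.2",        # reject legacy TLS (a real check
                                     # would allow 1.2 or higher)
}

def check_storage_config(config: dict) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    for key, required in REQUIRED_POLICY.items():
        actual = config.get(key)
        if actual != required:
            violations.append(f"{key}: expected {required!r}, got {actual!r}")
    return violations

if __name__ == "__main__":
    dev_config = {"public_network_access": True, "encryption_at_rest": True}
    for v in check_storage_config(dev_config):
        print("POLICY VIOLATION:", v)
```

In a real landing-zone setup this logic would live in platform policy (not developer code), which is exactly Ben's point: the developer deploys into it without thinking about it.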
Viktor
00:06:34.550
Exactly, because I feel that for a significant number of developers, if they create that storage you mentioned, it's not that they chose that specific storage. It's kind of like, any storage, I'll take this. And if you give them something else, that person is just as happy, right? Just not if you do it half a year later.
Darin
00:06:57.524
I think it's interesting that they waited until the end, which isn't abnormal, but that's the way I did things in the eighties.
Darin
00:07:07.386
Okay, fair enough. It is abnormal; it shouldn't be done that way. But I'm thinking through this, and we'll get to AI later. There are enough tools already in place today. A lot of what you're talking about, okay, how do we protect the APIs up front, all that stuff, to me, should be a checkbox in the platform engineering option. It's like: I need an API. Great, here it is, and it's got everything already baked in. But for the actual code that somebody's writing, there's already enough off-the-shelf tooling that people should already be using, right? And this is where I wanted to pull you in, because ProArch is a consulting firm, so sure, you probably have stuff for your own internal use. But what are you seeing from your clients? You were doing this review for a client, and it seems like they're still stuck doing things the eighties way.
Ben
00:08:03.430
Yeah. And every business is different. We see a lot of disparate tools, not necessarily standardized. There have been approaches, and this is, I think, also where maybe there's not been enough collaboration between teams. So I go back to that secure foundation: there's the concept of landing zones, or the well-architected framework, that helps you pre-plan all of these things. Once that's configured, you turn it over to the developer. Now they don't have to think about that standard control, what they need to configure on that storage account or what goes in front of the API. That should be the checkbox. When it comes to the tooling side, stuff is still very much all over the place. We see a lot of fragmentation: different tools for static code analysis, not necessarily tight integration, not always a well-defined plan when it comes to QA. I think everyone's in a different state on the QA side of things. Some of it's baked into the developer's job; other people treat it as a completely separate discipline and team. There's not necessarily been the effort and thought put into how to build a consistent center of excellence around QA. And QA, frankly, should include security aspects.
Darin
00:09:22.228
Okay. You just said so many buzzwords there. I understood them all. Center of excellence, that one tends to set me off, so I'm just taking a deep breath, because usually a center of excellence is nothing but a center of wasted budget. But again, I'm going back to my point. The people that are in charge now, meaning the C-levels and the engineering managers, are probably my age. They're gonna be in their mid-fifties to sixty. Am I wrong in that? Maybe 45 to 60.
Darin
00:09:58.473
So, assuming they're not just a people manager, that they've actually been a technical manager and they came up as an engineer, they've been through all this pain and suffering throughout their whole career. Why are they inflicting this on yet another generation?
Darin
00:10:24.981
Joking aside why aren't people actually getting it? I mean, it's never been easier. Okay. We've got lots of tools, but it's never been easier to actually launch a secure application.
Ben
00:10:36.025
I agree. I think there's also never been as much pressure to release features quickly. We have a lot of organizations that are backed by private equity, looking at it from the funding perspective, trying to get to market, trying to capture the market share at that moment for their product. Sometimes I think that might also be a driving factor. It's not necessarily about budget; it's about trying to do it fast, and it's not really raised until there's a problem.
Darin
00:11:11.527
So if we could get rid of all PE and get rid of all VC and just go back to bootstrapping, you're saying all of our problems would magically disappear.
Darin
00:11:20.204
It feels like there's a grain of truth in that somewhere. Not a very big grain, probably about the size of a mustard seed. We're talking about speed. In all seriousness, back to PE throwing money at it: how are we balancing speed with security? Again, some people would say, throw this security tool at it, run this scan, and bingo, you're secure. We know that's not true.
Ben
00:11:48.032
It's definitely still a combination of people, processes, and technology. Shortcuts happen, teams get behind, pressure gets put on to release, and sometimes those shortcuts, I do believe, lead to that. And I think that's where, if we start putting in those guardrails right from the beginning, and there are operational ones too, like scanning the code inline as it's being pushed. We have all of these different agent tools now that are part of GitHub Advanced Security, or that tie in with Azure DevOps; those can certainly help get visibility earlier on. But when a mistake is made in there, we've also gotta be able to stop it. Some of the more capable features today are being able to stop certain things right at push when they don't meet the criteria, some sort of railings put in place there.
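As a rough illustration of stopping things "right at push," a pre-push hook could scan content and exit non-zero on findings, which blocks the push. This is a toy sketch with illustrative regex patterns only; real setups would rely on a proper scanner such as the push protection in GitHub Advanced Security that Ben mentions.

```python
# Toy pre-push gate: scan files for obvious secret-like patterns and
# exit non-zero so the push is blocked. The patterns are illustrative;
# a production setup would use a dedicated secret scanner.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS-style key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key header
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),    # inline password
]

def scan_text(text: str) -> list[str]:
    """Return the secret-like strings found in the text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def main(paths: list[str]) -> int:
    findings = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            findings.extend(f"{path}: {hit[:20]}..." for hit in scan_text(f.read()))
    for finding in findings:
        print("BLOCKED:", finding)
    return 1 if findings else 0  # non-zero exit status blocks the push

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```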
Darin
00:12:54.170
But again, this is nothing new, and it has never been easier to do this. That's sort of my frustration with this, and it's my generation's fault. Maybe Viktor is right. It's like, it's my time, baby. I know what to do next.
Viktor
00:13:11.326
Here's the question: how often do security teams consist of developers themselves? The reason I'm asking is that I feel we pass through the same phases with different teams over time. Back in the day, if you were a tester, you didn't write code; you click, click, click, click, click. And then people complain it takes half a year to test something, but what can you do? That's what we do, right? And that changed over time. Not necessarily for everybody, but it did. And we had operations: "If I'm operations, I copy and paste from Word documents. I don't write scripts, I don't write Go code, I don't do anything but copy and paste commands. I'm very good at that. But that's what I do." And that changed. Call it DevOps or platform engineering, whatever it is. And I feel that is, or will be, happening with every group. So I'm curious: how common is it today for security teams to say, "No, no, I'm going to create this thing that will make your job not slower, and yet secure at the same time"? Is that even happening?
Ben
00:14:33.806
It is happening. I can tell you that I've seen the evolution of our own security teams here and the stuff that they're doing. They are writing a lot more code than they ever have. They're using tools for workflows to help automate processes. It's becoming more and more common, across both the infrastructure teams and the security teams, that you need at least the ability to understand code, PowerShell or others, or at least the CLIs that are out there, so that you can automate what's in front of you. Are they writing full apps? No, but that's probably not that far away either, with all of the benefits coming from the AI side of it.
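The kind of automation Ben describes security teams writing might look like this: a small, hypothetical log-parsing script that flags IPs with repeated failed logins, the sort of glue code that replaces clicking through a console. The log format and names are illustrative.

```python
# Hypothetical security-team glue code: count failed logins per source
# IP from log lines and flag anything at or above a threshold.
from collections import Counter
import re

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(log_lines: list[str], threshold: int = 3) -> dict[str, int]:
    """Return {ip: failure_count} for IPs at or above the threshold."""
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    sample = [
        "sshd: Failed password for root from 10.0.0.5 port 22",
        "sshd: Failed password for admin from 10.0.0.5 port 22",
        "sshd: Failed password for root from 10.0.0.5 port 22",
        "sshd: Accepted password for alice from 10.0.0.9 port 22",
    ]
    print(flag_brute_force(sample))  # flags 10.0.0.5
```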
Darin
00:15:17.803
Now, you've called out GitHub, Azure DevOps, PowerShell, one of my favorites. It feels like you're primarily in the Microsoft world. Am I incorrect in that statement?
Darin
00:15:31.153
Congratulations, and I'm so sorry, all in the same breath. How has that changed? Because, here's my claim to fame: in my 40 years of programming, I have never developed with any Microsoft product ever, other than VS Code. I have never touched a language from Microsoft or anything else. I've worked in other languages you'll never want to hear of, let's put it that way. What is the reality like in the Microsoft world today?
Ben
00:16:05.279
The reality is Microsoft doesn't care if you use a Microsoft-oriented programming language. They're very heavily focused on being as open source as possible. Pick your platform; they want to be the platform that you run your code on. And they have their own stuff, their own IDEs and other tools in there, but at the end of the day, they just want you to run your business on them.
Darin
00:16:31.885
That was a very polite way of saying: yes, Microsoft is good, Azure is fine. And I think it's right. I'm not gonna make this a Microsoft-bashing episode, but once Steve Ballmer had left... who came in after him? Was that Satya? I've lost track. There was one other person in between.
Darin
00:16:56.522
Yeah. So here's the key point to me, again, showing my age. I remember when Mark Russinovich launched his power tools and Sysinternals. I remember when Sysinternals wasn't part of Microsoft, and now he's effectively the head of Azure. So that's the reason why it's like, oh, okay, I could deal with Azure, 'cause I know the quality of the person. I've never met him, but I know the quality of the person that's over that. It's not just somebody that uses Excel to determine what business gets done today. So I have hope, but it also feels like Microsoft is behind the eight ball. It feels like, depending on which day of the week it is and what feature it is, one of the three hyperscalers is behind the eight ball.
Ben
00:17:44.294
I mean, I don't know how many more DNS outages anyone can survive these days; that seems to still be an underlying problem with every single cloud provider. But yeah, I think the big shift for Microsoft was definitely post-Ballmer. I had a chance to meet Mark back in, I think, 2015. It was at an Azure architects airlift type of event. He came out and was presenting on where Azure was, the workloads that were running on it, and the first thing he said was: this may shock you, but I do not want Windows to be the primary platform running on top of Azure. He said that within the next two years, they wanted more than 50% of the workloads to be Linux-based. And they actually hit that number, in that timeframe. So that does speak to the open source side of things with them. Do they still want you to buy their products? Absolutely. But I feel like there's a lot of hope across the board. If you're thinking about Microsoft, think of them as an operating platform.
Darin
00:18:45.834
I do have to admit, though, you can't see it, but I still have a Microsoft Ergonomic 4000 keyboard. It's my favorite, I've had it for 20 years, and it's the greatest keyboard ever. I'll fight you to the death on that one. It's interesting, though: your role as a CTO and CISO, or CSO, however you wanna say it, and you're talking to your clients. What are you telling them today? I mean, it's one thing to be a CTO, right? You go around and do the CTO thing. But having both roles, I imagine you are torn a lot.
Ben
00:19:20.629
Yes. When I get to those torn pieces, I do rely upon my coworkers to help me balance that, because I have coworkers that are only in security and coworkers that are only in development. So how do we get there? The thing I always need to balance is what's right for the business at the end of the day. Security's there to reduce the risk; the CTO side's there to help ensure that we're developing or deploying technology that aligns with the business outcomes. I still need to have that very business-centric approach to everything. Security is super important, and we can't ignore those risks; those need to be upheld. But we also don't wanna put so many barriers in place that technology becomes unable to be leveraged because, hey, we've got all of these pieces where everyone has to go check a box. I don't want to be a huge gatekeeper. I would like to provide the guardrails and have people play within those guardrails. When we have an exception, let's talk about that exception and figure out why we need it. If there are alternatives, get the smartest people in the room who can help mitigate that risk with either some other control or another approach. I don't want to remove the freedom of development as a CISO, and I also don't want to remove the secure side of it when it comes to securing the business, the data, and the systems themselves.
Darin
00:20:52.536
Given a choice, would you split the roles of CTO and CISO, or would you keep 'em together? There's like an orange jumpsuit on one side and, you know, staying outta jail on the other.
Ben
00:21:05.648
I'm okay with it, the role at ProArch. At a different organization, I don't know if I would feel the same way. I have great support around me. It could be very challenging in an enterprise business, having both roles. I could see too much being pulled there.
Darin
00:21:28.189
And think of, say, financial services. That's always the easy one to go after. You want people fighting on both sides of that fence, 'cause if it's one person, that becomes too risky, I think.
Ben
00:21:40.804
I would agree. If you're talking about a compliance-oriented business that has those sides, you definitely need separate roles. You're talking about someone who has complete regulatory responsibility. I think you do have to have separation there.
Darin
00:21:54.346
Let's go ahead and step into it, 'cause we tried to stay away from AI up to this point. How is it changing your recommendations, as both a CTO and a CISO? Because I imagine your answer is a little different for each.
Ben
00:22:09.179
Both frighten me in many ways, right? I'm gonna go off on a slight tangent, but...
Ben
00:22:18.452
When I was at Ignite this year, they had a presentation about robots, and they showed how quickly they were advancing the training of robots, allowing them to navigate the real world using LLMs and the knowledge in there. They showed a robot, for example, that they told to pick up the Coke cans out of the bin. It didn't know anything about Coke cans, but obviously LLMs do, and it used that knowledge to be able to go pick things up. That side of it scares me a bit. If we're talking about robots interacting with the physical world, I think we're a few years away from that. On the other side, AI excites me quite a bit, because I think it's a big business enabler. Personally, I don't want to go back to the days of not having the various services that I'm using for productivity. I certainly don't want to go back to the days of having to write as much as I used to. I mean, I still have a lot of content and things I need to work on, but I would spend hours every day writing, and feeling like I had to go back and try to fix those things. Now, on the security side, we have all these new things that we have to go look at when it comes to building platforms that have AI. The testing mechanisms change; the teams change. You have to start thinking about it when you're doing application testing, for example, pen testing on an app. We're looking at having to build different skills, different frameworks, different approaches to testing applications. We were doing one type of testing on an app six months ago; now they come back in and ask for a test of their app, and it turns out we have to take a totally different approach because they introduced AI into it. So now we have to look at the different attack surfaces. We have to consider supply chains. We have to consider how you're interacting with identities, what sort of tools you're using, how your data is flowing through this. It adds a lot of complexity on the back end, and someone still has to take care of those things.
Darin
00:24:38.378
I mean, okay, again, I'm gonna go with you. Free-roaming robots, I'm not down with that. That's just not gonna happen. Cars, we're already there, but we'll see how it plays out. I'm sitting here thinking through what you were just talking about, and Viktor and I have talked about this: in the software development life cycle, we're speeding up one section right now, effectively the development section, but most other things aren't being sped up at that same rate. So we're not delivering as fast as we can ship, AI or not. We were speeding up when we had PowerBuilder and Visual Basic too. And this is a CTO problem: how, as a CTO, can you make sure that as we're speeding up this one section, we're actually trying to speed up the whole pipe and not just one little section?
Ben
00:25:32.504
How we've been approaching it is with a set of blueprints. It's not exactly the same as what we were describing with setting up the landing zone foundation, but as we're developing capabilities, whether it's the agent side or the AI side, what are the pieces that we need to do? And this is going to be completely iterative as well, because what we're doing today to secure something in AI may be different six months down the road, or one year down the road. We're still very much in a learning phase, but that roadmap helps define, as we release capabilities, what the next pieces are that we need. So as we're just entering in, I'll take, for example, just a chatbot type of feature. We're not talking about an AI agent that's making changes to anything yet. We want to have visibility into how interactions are happening within that. So we wanna make sure that we have the tools and the code that can surface that visibility. We wanna make sure that if we're potentially interacting with sensitive information, that information is being saved someplace. If it's a compliance obligation in the business, maybe we need to be able to save all of it someplace where it can't be modified or edited, from a logging perspective. So we're trying to build layers, starting with a core capability of what you need at the very basic level of AI. But as you start moving up that ladder of capabilities in AI, then I think you have to get more and more security features in place so that you can understand what happens after that prompt: what orchestration tools, data, and actions occur, and how you're handling the privilege and permissions of everything in there. That becomes a big, big challenge, I think.
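The "saved someplace where it can't be modified" idea can be sketched as a hash-chained audit log, where each record commits to the previous one so later edits become detectable. A minimal illustration, not a compliance-grade system; real deployments would use WORM storage or an append-only ledger service.

```python
# Illustrative tamper-evident log for AI chat interactions: each record
# stores a hash over the previous record's hash plus its own content,
# so modifying any earlier entry breaks the chain on verification.
import hashlib
import json

def append_interaction(log: list[dict], prompt: str, response: str) -> list[dict]:
    """Append one prompt/response record, chained to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"prompt": prompt, "response": response, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash in order; False means some entry was altered."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {"prompt": entry["prompt"], "response": entry["response"], "prev": prev}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

if __name__ == "__main__":
    log: list[dict] = []
    append_interaction(log, "Summarize Q3 revenue", "Revenue was ...")
    append_interaction(log, "Email it to finance", "Draft prepared ...")
    print(verify_chain(log))       # True
    log[0]["response"] = "edited"  # tamper with history
    print(verify_chain(log))       # False
```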
Viktor
00:27:31.145
Do I understand right that the focus, at least in this scenario, is more on how do we know what it's doing, rather than how do we teach it to do it better?
Viktor
00:27:46.809
Because that still will not solve the problem Darin was talking about with the pipeline, right? Because I'm a developer: hey, let's work on this feature, I want this and that, blah, blah, blah. It does the stuff, you review, da da, you're done in a day instead of a week or whatever. And then the rest of the pipeline is still slow, or at least the same speed as it was before, and that includes security reviews and whatnot. So nothing really gets done faster, 'cause we're just accumulating more and more finished development, waiting to be processed at the same speed as before.
Ben
00:28:32.492
Well, yes and no. I think there are pieces where you can interject additional types of automation or actions. For example, the test cases that we're using for testing AI agents for security: we're using AI to build those test cases. We're not trying to build those by hand. So while maybe there's a new type of test that we're having to do, we're leveraging AI in some of those capabilities to speed that up.
Viktor
00:29:06.233
That still feels, in a way, similar to before. Before, we had the problem that, okay, you just did a month of development and you did it really badly, right? Because it's not secure, it's not this, it's not that. And you go back, and then all hell breaks loose, and we don't get anything done. Now we have a human orchestrating that development rather than doing it, and it feels almost like we might be repeating the same mistake. Kind of like, okay, now it's not me doing it wrong, but AI doing it wrong. And just like nobody taught me how to do it the right way, but told me at the end what I shouldn't have done, we might be facing the same with agents, right?
Viktor
00:29:58.809
Yeah. The agent talks to the LLM. It has the potential capability to get the information, and at least in my experience, if the information is given, it does the right thing more often than not. The problem is: "you did not tell me, so I did this." And that "tell me" can be, okay, the developer did not specify what they want, or the context was not augmented with the company's rules about security.
Ben
00:30:31.701
Maybe that's where, this year, we'll see some better, say, tools. Tools that can help you govern, because I don't necessarily think it's about tools that tell you to go back and fix it. You wanna set your standard and let people operate within that. We wanna stop this process of "hey, go back and fix it." So if we can get that governance in place, there's probably a lot of innovation coming on that front. I do know that Microsoft's very heavily focused on the governance of AI across the board, because the agentic piece has really bubbled up the trust side of things. Previously there were some concerns about data leakage. Now it's more about: can I trust that this is going to behave consistently and not drift, or do something differently over time? So AI has definitely evolved, and the testing of AI has evolved. We see drift happen when testing the output of applications. For example, is this thing consistently giving this type of output based off of its business use cases? Maybe it needs to have a high level of accuracy, or maybe it has to have less bias, or whatever else. As new LLMs get released, those outputs have variations, and the patterns of inaccuracy start changing, or maybe it becomes more accurate, whatever it is. But we need to get to a point where we can just set a guardrail overall, and when things start going out of that, then we have to interact a little bit more.
Viktor
00:32:03.182
But I feel that the problem, in this case, is not in the LLMs. LLMs have the public information; they have no idea what the private information is. I feel that that's the problem. It's like: I just joined your company, you did not tell me anything, and I know how to deploy to Azure, so I will deploy to Azure. And if it happens that you, in this company, use AWS but never told me about it, that's not my problem. That type of situation, right?
Darin
00:32:32.534
I'm gonna go a different direction with that LLM point. I'm gonna use this word: sunsetting of an LLM. We saw OpenAI is sunsetting a handful of LLM versions soon. We've already seen that with Anthropic. I'm sure we'll see it with everybody else. And going back to: we pose a question or a statement to an LLM, and we get back an answer that's usually within plus or minus some variance, and we consider that acceptable. And now, all of a sudden, that model is no longer available to me. This is a variation of "we need to upgrade the framework from version three to version four," but I think a lot worse, 'cause it's completely non-deterministic. At least with a framework upgrade, we know that, oh, okay, we fix this, we fix this, we break this, it's all fine.
Ben
00:33:29.121
I think it's a new set of measurements that constantly have to be looked at. Teams are gonna have to create variations when a new model comes out, and maybe not right away. But yeah, we've had, what, a six- or seven-month window with ChatGPT and their 5.0 model? The roadmap has certainly gotten very short for making changes on your backend; frameworks don't typically move that quickly. Seven months is a pretty short window. So I think you almost have to get into a testing mode: looking at the prompts that you're used to being fed into it, and how the app's being consistently used, so that the output can still remain roughly the same when that model gets deprecated in six to nine months.
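That testing mode could start as a prompt regression suite: a fixed set of prompts with checks, re-run against a candidate model before switching. A hedged sketch with a stubbed model call standing in for a real API client; all names here are illustrative.

```python
# Sketch of a prompt regression harness: run a fixed prompt suite
# against a candidate model and report the fraction of outputs that
# still satisfy their checks. `call_model` stands in for a real client.
from typing import Callable

PROMPT_SUITE = [
    # (prompt, predicate the output must satisfy)
    ("Classify: 'refund my order'", lambda out: "refund" in out.lower()),
    ("Answer yes or no: is 2+2=4?", lambda out: out.strip().lower().startswith("yes")),
]

def regression_pass_rate(call_model: Callable[[str], str]) -> float:
    """Fraction of suite prompts whose outputs pass their checks."""
    passed = sum(1 for prompt, check in PROMPT_SUITE if check(call_model(prompt)))
    return passed / len(PROMPT_SUITE)

def safe_to_switch(call_model: Callable[[str], str], threshold: float = 0.95) -> bool:
    """Gate a model swap on the regression suite's pass rate."""
    return regression_pass_rate(call_model) >= threshold

if __name__ == "__main__":
    # Stub standing in for the candidate model's API.
    def candidate(prompt: str) -> str:
        return "Yes." if "2+2" in prompt else "Intent: refund request"
    print(regression_pass_rate(candidate))  # 1.0
```

The same harness run against the old and new model gives a concrete before/after comparison, which is one way to make the drift Ben describes measurable.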
Darin
00:34:20.161
Do you see a lot of people taking that into consideration when they first start their development, knowing their development cycle's gonna be six months?
Darin
00:34:31.561
They're just trying to get it done today, but not realizing that today is going to soon become tomorrow, and tomorrow everything changes again.
Ben
00:34:38.221
Yeah. And maybe it comes to the point of trying to be a little bit more selective about the models that you choose. Choosing a smaller model, like a mini, might be more advantageous because it's not gonna change as much; there aren't as many parameters in there. I don't see anyone really doing that. They might think about it retrospectively. But that problem is rampant across all tech, right? Microsoft gives an end-of-life date, and three years later, once the end of life happens, people reach out and say, hey, can you guys help us with this end of life? It's just very slow. No one wants to move until they really have to.
Darin
00:35:19.794
Right, and then they extend it, or they do extended support pricing or whatever, until they finally get rid of it. Maybe. I imagine Windows XP is still supported some way, somehow, somewhere. I don't know if they're gonna be able to do that with the models. I don't know that the model vendors are gonna be able to, 'cause Microsoft does not have a foundation model of its own, correct?
Darin
00:35:41.786
They are just an engine running all the other models, which, by the way, is a great business model, not to use "model" in 20 different ways. I would imagine OpenAI is gonna come to them, 'cause o4-mini, which is one I've used for a long time, is finally being put out to pasture and it's gonna go away. Or is it gonna go away? I can't see how any runtime could keep running a model that is no longer supported by its vendor. Am I wrong in that?
Ben
00:36:14.710
No, you're probably right. And maybe we'll see that change this year, if the adoption is enough and the pushback is enough from the large enterprises. I feel like it's always the large enterprises that make the big companies change their opinion on how long something can be supported, right? You get enough pushback from Walmart, then they'll say, hey, you know what, we'll support this for another few years, no worries. But until that happens, and maybe adoption needs to be higher down the road another year or two, I don't think anyone's gonna go there yet. You just have to deal with the drift.
Darin
00:36:51.963
What would you say to a CTO today who's just now starting to dip their toe into any kind of AI in their org?
Ben
00:37:00.108
Don't do it alone. Talk to people constantly, figure out what's happening. It's just so rapidly evolving. I mean, even just the security side of it alone has evolved completely differently. I have totally different conversations than I had six months ago on it.
Ben
00:37:21.228
So people were more concerned last year with, hey, this model is gonna be inconsistent with its output. People are more concerned now about what happens if we start opening this up and allow community development within our organization, whether it's agents or whatever it is. We've seen this happen across lots of different platforms. It could be Power BI, it could be Power Platform, any of the things that become commoditized within the organization. You start ending up with a lot of sprawl, and AI is going in that direction, right? So the conversations now are: who's gonna govern this? Who's gonna control it? How do we set some guardrails here so that when this thing starts misbehaving, we're not surprised, and it doesn't do something we don't expect it to be doing? No one cared about that six months ago. All they cared about was, hey, you might be exposing something sensitive, or it's gonna give out some bad advice or bad output. But today it's a little different.
Darin
00:38:25.848
This could run away worse than anything else in the past, I think, other than, you know, spinning up a VM where I picked the wrong 4x or 8x instance size. I mean, the cost of tokens, I want to say they're gonna come down, like how AWS over time has always lowered prices. I haven't looked at Azure that often, so I don't know if they've lowered it at the same rate. I don't see that happening with tokens.
Ben
00:38:51.493
I've heard a couple different things in the last year. One, certainly, people's perception of how much it costs to run AI workloads, whether it's lack of use or something else, there's a lot of mis-projections on the cost of running apps with AI. The flip side of that is, I think some of the Azure prices have gone down; it's more that they do a depreciation, right? You're running an older version, that gets cheaper, and that has happened. I saw another report, I think it was just last week or maybe the week before, projecting a drop in revenue for AI platform providers for basically hosting this stuff. So maybe we're on the upward trend here, and pretty soon it'll start going down. The big cloud providers are projecting that cost will go down in that sense, but what they're telling everyone is, don't worry about that going down, there are other services and different capabilities that will kind of backfill that.
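[Editor's note] The mis-projections Ben mentions often come down to simple token arithmetic that nobody wrote down. A back-of-the-envelope model makes the math explicit; every price and volume below is a hypothetical placeholder, not a real vendor rate.

```python
# Rough token-cost projection for budgeting one AI-backed feature.
# All numbers used here are illustrative, not actual vendor pricing.
def monthly_cost(requests_per_day: int,
                 input_tokens: int,
                 output_tokens: int,
                 price_in_per_1k: float,
                 price_out_per_1k: float,
                 days: int = 30) -> float:
    """Estimate monthly spend in dollars: per-request token cost times
    request volume over the billing period."""
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * days


# Hypothetical example: 5,000 requests/day, 1,500 input + 500 output
# tokens each, at made-up rates of $0.01 / $0.03 per 1K tokens.
estimate = monthly_cost(5000, 1500, 500, 0.01, 0.03)  # about $4,500/month
```

The point of writing it out is that a 2x miss on any single input (traffic, prompt length, or price tier) flows straight through to the bill, which is how an $8,000 estimate becomes $15,000.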
Darin
00:40:02.700
You said earlier "mis-projections." Were your clients, or the people you were talking to, mis-projecting in both directions?
Darin
00:40:11.900
Interesting. So the ones that had overshot, I imagine they were pretty happy. For the ones that had undershot, what was their basic reaction? "We need to shut this down"?
Ben
00:40:27.245
Not necessarily. I think they went back and started reevaluating the value of it. Is it providing the value that we thought? It's great that we can roll this out, but we thought it was gonna cost us $8,000 a month and now it's costing us 15. Do we feel like this is giving the value to our employees or our customers that we thought it would? If it's more integrated into the product, they're just absorbing it and talking about it. But if it's an internal tool, those discussions are happening.
Darin
00:40:59.138
And we talked about what you would say to a leader, I think I said CTO. What would you say to a CISO that now is being dumped on with AI?
Ben
00:41:07.800
Plan to get involved and understand how AI will be used in your organization, and how it is being used today. So inventory, right? Go across the suite of everything. Talk to everyone: sales, finance, not just your developers, but the people using the commoditized products like Copilot or Copilot agents, et cetera. Then really start looking at how your data is being used within the applications, because if you don't have visibility, you're gonna start missing things. And then I'd say probably one other thing: think about it from a threat perspective. If you have this application and something goes wrong, what does that look like and how are you going to respond to it? Whether it's a customer-facing app or an internal app, especially as we start moving towards the agent side, with the ability to operate tools and make changes, we're gonna have potentially new risks that we've never thought of, that aren't known today. So keeping well informed on that is gonna be super important, but also having visibility into what the system's doing today is gonna help you in the future. Sometimes you don't know what you don't know, and that's been a longstanding model in security, right? Try to log everything, so if you need to go back, you've got it. You're really gonna need to do that with AI.
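[Editor's note] Ben's "log everything" advice could look something like the minimal audit-record sketch below. The field names, the JSON-lines shape, and the choice to hash the prompt and response (so sensitive text can be redacted later while keeping a tamper-evident trail) are all my own assumptions, not a schema from the show.

```python
# Minimal sketch of an audit record for AI interactions, one JSON line
# per call. Field names here are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional


def audit_record(user: str, model: str, prompt: str, response: str,
                 tools_used: Optional[list] = None) -> str:
    """Build a JSON audit line. Prompt and response are stored as SHA-256
    hashes so raw sensitive text never lands in the log, while the hash
    still lets you match a logged call against a retained transcript."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "tools_used": tools_used or [],
    }
    return json.dumps(record)
```

Appending these lines to whatever log pipeline you already ship to a SIEM gives you the "go back and look" capability Ben describes, including which tools an agent invoked on each call.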
Ben
00:42:46.800
Yeah, shadow AI is going to be in everything next year, or this year. Literally everyone's gonna have AI in their stuff. So as a security leader, if you have a platform, assume it's probably gonna have AI in it. And then that means: where's your data going? That's in there.
Darin
00:43:07.224
Boy, this is all very depressing from the CTO and CISO perspective. Is there any hope?
Ben
00:43:14.438
Yeah, I do. I think there are gonna be foundations for safe AI adoption. I think there are going to be new technologies. Certainly AI for security and security for AI, those are two concepts that will be butting heads against each other for years to come. I think there's ample opportunity for everyone to have improved productivity, improved safety, but I also think we're gonna have some new risks that we certainly don't know about yet.
Darin
00:43:48.674
All of Ben's information is gonna be down in the episode description. Again, ProArch can be found at proarch.com. That's P-R-O-A-R-C-H. So it's ProArch, as in architect. ProArch, not Pro-Arch. Pro-Arch would be like an interior design company, I guess. I don't know what that would be. Proarch.com. Ben, thanks for being with us today.