Trevor 00:00:00.000 The analogy that people have used with me is the six-lane highway ending in a two-lane bridge. We've seen this explosion of code, but we're still shipping code at a slower pace today because, well, there are more security vulnerabilities, more time to review that code. The throughput has declined over time, and so I think that's one of the big problems: things are just shifting to the right.
Darin 00:01:22.536 Viktor. How long have you been working with feature flags do you think?
Viktor 00:01:26.291 I dunno, man. Depends. What do you consider a feature flag? If if statements in my code count as feature flags, then I would say maybe 30 years.
Darin 00:01:36.491 Pretty close. That's what I was thinking. That was probably the original OG. The original OG. I'm duplicating myself.
Viktor 00:01:42.575 don't, I advanced since then. I started putting, uh, configs into a properties file some 28 years ago.
Darin 00:01:51.225 Right. To me, the first real feature flag system that I used, you can call it whatever you want, was Netflix's Archaius. It was a great feature flag system that they put out in 2012, I think, somewhere in that ballpark. It was great. It was like, oh, this is a whole lot better than if statements. So why are we talking about feature flags? This is a solved issue. It was solved decades ago, literally. On today's show, we have Trevor Stewart on from Harness, formerly a co-founder of Split.io. Trevor, how you doing?
Trevor 00:02:24.016 Good. Good. Thanks for having me.
Darin 00:02:25.976 Feature flags obviously have been in your blood for a very long time. How long?
Trevor 00:02:31.385 So we started Split about 10 or 11 years or so ago, and before that we were using feature flags at a company called RelateIQ, which was acquired by Salesforce. And before that, my co-founders were using them at LinkedIn and Google. You know, I like to joke that I've spent the last decade of my career around feature flags. I, uh, did not expect that I'd be spending 10 years of my life around an if statement, to your point, Viktor.
Viktor 00:02:54.855 So you, you don't like change, right? You figured it out, feature flags are your thing, and, uh, that's it.
Trevor 00:03:00.840 Um, no, the problem is I actually do like change a lot, and I think that's, candidly, probably one of the most exciting things about feature flags over the next 12 to 24 months. And we can talk a little bit about how they're gonna evolve and where I see things going.
Darin 00:03:14.006 Well, that's the key part to feature flags: they allow for change. So to me, there are really two big things. The way I used feature flags with Archaius was more operational. I got a new feature coming in, I'll turn it on. Or if it goes bad, I'll turn it off. We didn't do so much the other side. The B side to me is experimentation. Is there anything beyond just those two big buckets?
Trevor 00:03:37.729 When you think about feature flags, they get used in a lot of different use cases. Many people use them for paywalls or for entitlement management. Many people use them for simple kill switches, right? If you need to turn something on and off, if something is poorly performing, you need a permanent flag in place. Some people use them for gradual rollouts, or progressive delivery, as the terminology has become over the arc of time. Of course, experiments are the other side of the same coin. And then, you know, as we look forward, I think we're seeing the use of feature flags around prompt testing, rolling out advanced agents, et cetera.
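The kill-switch use case Trevor mentions reduces to a very small amount of code. A minimal sketch, using a hypothetical in-memory `FlagStore` (real systems like Harness Feature Flags or Split sync flag state from a server so it can change at runtime without a deploy; names here are illustrative):

```python
class FlagStore:
    """Hypothetical in-memory flag store for illustration only."""

    def __init__(self):
        self._flags = {}

    def set_flag(self, name: str, enabled: bool) -> None:
        # Flipping this to False at runtime is the "kill switch":
        # the feature turns off everywhere without shipping new code.
        self._flags[name] = enabled

    def is_enabled(self, name: str, default: bool = False) -> bool:
        # Unknown flags fall back to a safe default.
        return self._flags.get(name, default)


flags = FlagStore()
flags.set_flag("new-recommendations", True)

if flags.is_enabled("new-recommendations"):
    path = "new code path"
else:
    path = "old code path"
```

The same gate serves paywalls and entitlements: the check just keys off a plan or customer attribute instead of a global boolean.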
Viktor 00:04:12.244 I think that all those, many of those use cases, are really not needed. People should just stop making mistakes in their code and, uh, always know what to build.
Trevor 00:04:23.627 I wish that was the case. But with more code being written by AI agents, I think it's even more important that we think about how we safely roll out that code, test that code, and have a safety switch, candidly, right? If something isn't working or something breaks, being able to turn that off. But yes, you're, uh, you're spot on. If we wrote perfect code, we wouldn't need feature flags.
Darin 00:04:42.946 Okay, so let's assume I'm working with a person that writes perfect code. Assume that unicorn
Viktor 00:04:47.766 You already worked with me, Darin. I understand. What, what are you trying to assume?
Darin 00:04:52.239 If you could see my face at the moment, you would
Viktor 00:04:54.344 Uh, I'm going for you. I'm going to even switch to rust because Rust never has bugs.
Darin 00:04:59.565 Wow. Okay, so let's just get to the point. How can you convince people to use feature flags if they're still not using 'em in 2026?
Trevor 00:05:08.721 well, I think to your question there, and I, I kind of want, would correct something I would just said. I do think even if you write perfect code. you should still use a feature flag. and more for the kind of experimentation side of the coin, right? Which is I want to turn this, new feature, this new capability on for a subset of my customer base and understand the impact I'm having on my customer base. to your question around, Hey, it's 2026, how do we think about. people using feature flags today? I'd say most people are using feature flags. They've become table stakes in software delivery. in conversations 10 years ago when, you know, we would attend a, a large conference, we would talk about feature flags and we were still getting asked kind of what is that concept, right? Is that like a config file? What is this? I would say today, that question never comes up. Most teams are using feature flags in some form, whether it's a homegrown solution or a file that they've built themselves and they maintain, or using a third party vendor. I'd say most teams are using feature flags and are now asking themselves the question of, as we look forward into this next chapter of software delivery, how do we make sure we solve the problems that feature flags have created things around technical debt. the number of if l statements in my code, how do we remove those over time?
Viktor 00:06:14.047 How often do people remove feature flags?
Trevor 00:06:16.897 not as often as they should, one of the things I've been thinking a lot about when we got into this category 10 years or so ago was how do we automate the creation of feature flags? How do we automatically roll that feature flag out for you? And then how do we remove that feature flag for you? 10 years or so ago, we met with a company trying to figure out could we essentially index your code? Search your code, remove that feature flag from your code, submit a poll request. 10 years ago, the, technology wasn't there to allow us to do that. today we're on the precipice of releasing that ability within the product to allow you to, hey, if you want to automatically remove feature flags after a 30 day period, we can go through and we can remove those flags for you. So I think people aren't doing it enough, but I think that problem is almost, solved.
Darin 00:06:59.124 Boy almost solved. That doesn't seem possible. Let's say you're talking to a feature of 500 a feature. Good grief. Now you have me saying feature all the time, a Fortune 500 company, and you're running some experiment. And the experiment, let's keep it simple, is A and B. And it turns out that the answer is B, but the senior exec says no A is the answer. Uh, how do you convince the senior exec that he is wrong?
Trevor 00:07:30.529 I think that's where the data comes into the conversation. you know, I'll speak through a few examples. if you look at one of the larger fast food chains, they use us on their kiosks and they've essentially kind of done like a go large, right? Like up, get you to upsell, press the button. How do they put that button? Where do they put it? The words on it. They tested a number of different things and ended up after, over the course of time, generating several million dollars of incremental sales because of those experiments that they were running. And I think when you talk to the executives at these, whether it's the large financial institutions or the large fast food delivery service companies, they're looking at the data and they're making decisions based on that data. And so the experiment of A versus B is really a data-driven conversation, to kind of help kind of govern which direction to go.
Darin 00:08:12.913 I guess my question was what do you do when the exec doesn't pay attention to the data?
Trevor 00:08:18.808 Yes, I would.
Darin 00:08:20.778 That's what I'm thinking. That's, that's what needs to happen, right? If the data says, let's use your example. Hey, we just made an incremental 5 million this quarter. at no cost. Yeah, but I don't like it. It can't be working. this takes me back again. I'm gonna show my age. Uh, anybody that watched Gilligan's Island, there was the one episode where Gilligan was flying, right? He was floating off the ground a few feet and everybody kept telling him, no, you can't fly. It's impossible. As he's standing there hovering over the ground and when he's like, oh yeah, then he falls down. is that the problem that we have with a lot of execs or are, are the execs really as my age, is starting to age out of companies and the younger ones are actually pay attention to data?
Trevor 00:09:01.699 I think the concept of the Hippo, right? The highest paid person's opinion in the room. I think we're all familiar with the Hippo and we've, there are conversations about the hippo and how do we open up more voices and how do we essentially support teams? And, you know, I don't run into that problem candidly. with executives that we're talking to candidly, they're all just looking for data. They're looking to understand the decisions their teams are making, making sure that The resources, the time, the projects they're putting to market are delivering the outcomes they need for their business. You know, I gave you an example of a fast food chain, but if we look at one of the world's largest banks, they use us to run experiments and have generated hundreds of millions of dollars of new customer acquisition because of their experiments. And so I think at the end of the day, executives today are tuning into kind of the outcomes of their teams and the results and letting that kind of drive their decisions.
Darin 00:09:47.350 What's interesting how you took that there Initially you were talking about they're wanting data, but then you just said outcomes. Again, to me, I'm gonna, I'm gonna keep pushing back on this a bit. we all went through the initial thing of gimme one more data, gimme more data. But you couldn't figure out what the data meant. And now we're getting to the point. Hello ai, to where we can get summarizations a lot better, or even just the classical tools are giving better summarizations without ai. it just because of time? here we are. How many decades are we into this whole computer thing now, and we're just finally getting to the point to where this is actually doable.
Trevor 00:10:24.197 I think what we're seeing is more of the normalization of looking at metrics and bringing metrics to conversations. I think that that is, you know. More normalized in terms of, we've talked about, board level conversations over the arc of time. I think everybody's asking for data and looking for how have metrics changed, what are the metrics we're driving towards? I talked a little bit about my, co-founders coming from LinkedIn and, our experiences building solutions like this in previous incarnations and all of those instances, data was driving those decisions, right? And data bubbled all the way up to the top when you looked at the CEO and their, what they needed to report to their board and to Wall Street. All the way down to how each and every single feature laddered up to those metrics. And so I think that's at its core, I think we've seen the normalization of data. We've seen the normalization of experiments, and we've seen the normalization of, Hey, with finite resources, how much can I take on right? And making sure I'm delivering the, the biggest impact with those resources.
Darin 00:11:16.603 does a team look like that's doing this well, and I, I wanna broaden it out. Not just the implementers, but you know, the full team, the full from the execs, all the way down to the bottom coder.
Trevor 00:11:29.784 when you look at the teams today, and I, I'll, I think we're honing in on the experimentation side. Let's move away from kind of core feature flagging and stay focused on experimentation. I think that there are, data scientists on the team, that are helping you understand the data. Helping you ask questions of the data, helping you kind of understand kind of how the metrics are changing. there are data engineers that are making sure that that data that is being sent is clean, that it has all of the necessary data. Of course, you have your front end or application developers. those are kind of your core working teams, right? With the product managers kind of building the feature sets and the capabilities. Many large organizations do have a head of experimentation or someone leading an experimentation program. that individual is coordinating experiments in the hundreds across the organization. they're assessing holdout groups. They're thinking about what markets they should be running experiments in. And so they're working with those teams to essentially kind of build the experimentation program, within say, a Fortune 500. that information is bubbled up on quarterly cadences up to the CPOs and the like of these large organizations. You know, I can in speak to, you know, one of the world's largest banks, the CPO is looking at the results of their experimentation program. Right. And I think that's kind of how you're seeing it ladder all the way up, all the way up to the CPO who's presenting all the way up to the CEO of the bank around the results of their experimentation program and the investments they're making. Um. That's kind of how I see the team structures and you know, there, there's replicas of that at different scales. but that's how I see them evolving.
Darin 00:12:57.141 How are you handling experiments that touch revenue or compliance? I mean, everything we've been talking about is sort of in that space, but those two are like. Uh, to me the, the peewee, sorry, I'm using movie analogies today. the Peewee Herman movie where he is running by the pets and running by the snakes until the very end. It's like, to me, touching revenue or touching compliance is a big deal.
Trevor 00:13:20.886 Yeah,
Darin 00:13:21.531 I mean, obviously you're experimenting there, but then you've also got operationals that are wrapping the flags inside.
Trevor 00:13:28.021 think that's where most of your highest impact experiments are actually happening today are on some of those revenue touching customer facing components now. In terms of the ultimate revenue impact, what we see is customers will be running experiments on kind of those leading indicators, while they might be looking at the, let's just stay on the analogy of an e-commerce brand. They might be looking at the checkout button. They're running experiments all the way through the funnel, To try and drive that checkout. Now, they're of course, products that have a much longer time to value, You might acquire a customer today. And your experiment might be on the acquisition funnel, but the customer might not, let's just say take out a mortgage, for 90 days, right? So you have a much longer period by which you're looking to see the results of that experiment. But that's where experiments are happening, They're looking to figure out kind, how do they drive the business impact. that's where I see them. A lot of the experiments today happening on the consumer facing side.
Darin 00:14:21.943 Choosing to, change the color of a button or a position of a button just doesn't seem like as a big of a deal, as, messing up the onboarding flow or the acquisition flow
Trevor 00:14:32.101 Yeah, those aren't the experiments. You know, if I look at the category of experimentation. 10, 15 years ago, that was what experimentation was. It started by marketers, right? If you think of experimentation, it was like, think of marketing websites, right? It's like we're gonna change the button of the sign up button color. you think WY wigs, you think that kind of drag and drop editor, I would say over the arc of time, as many of those products have begun to be owned by product and engineering teams, experiments are extending full Stack, right? And you're, you're moving beyond a button color all the way through to. You know, let's look at Netflix for example, right? To use your analogy or your kind of anecdote earlier. think about streaming providers. They're constantly moving around the continue watching or the new discover, or they're testing. Do they automatically throw you a preview when you hover over the tile? do they put 9 99 on the front page or do they wait for you to click in? There's a constant set of experiments running and those, that's just a consumer brand. But you can imagine kind of how that extends throughout all the tools we're interacting with.
Darin 00:15:28.535 We talked about tools that are, or teams that are doing this well. Uh, what do teams look like when they're screwing it up?
Trevor 00:15:35.159 I think I'd answer that first by saying like, I don't really know if you can screw up experimentation, because at the end of the day, it's the mindset and the shift of kind of like the failure mentality and the acceptance of failures a good thing when it comes to running experiments. It's probably one of the hardest things to get over when you start thinking about building an experimentation program is if I run an experiment and it fails. I have to admit failure to your question earlier, right? I had this idea and it didn't work. Now I have to admit failure. I think that kind of getting over that arc or over that hump is probably one of the most important things when it comes to experimentation. I think the teams that fail at experimentation are the teams that kind of take that first failure and don't run the next 30 experiments because they're worried of failure, right? They're worried that, hey, our first one didn't deliver the results we were hoping to. And now let's just ship what we think we need to ship. Right? They've now gotten out of that experimentation mindset, and I think it's really important to accept failure. because that's a critical part of experimentation, is we don't know the answers. We don't know what's working well when we ship it. And so we need to run those experiments. I.
Darin 00:16:34.222 Okay. Experiments. I think right now we're being run through one of the greatest human experiments in our lifetime, at least as technologists, is this whole AI thing, especially when it comes to coding. Viktor is all in on it after. A year ago, year in change ago. He's like, this is garbage. I don't know why we're even bothering now. Viktor correct me if I'm wrong, I think you have Claude agents probably running 24 hours a day.
Viktor 00:16:58.321 No, don't,
Darin 00:17:00.121 ran, you ran outta money then. Okay.
Viktor 00:17:02.101 no. Mostly because I, I still don't believe in unsupervised agents and I need to be there to correct it, so it runs approximately 12 hours a day.
Darin 00:17:11.476 Okay. But that I, I guess Trevor brings up the point. Victor's not happy with unsupervised agents, but couldn't experimentation and feature flags sort of help with that.
Trevor 00:17:22.516 Well, I think this is where, and I alluded to it at the beginning, I think we're seeing a. A rebirth of feature flags and experiments with the AI era, and I use that term loosely and broadly. when you look at how people are using us today, they're essentially creating, prompts in their configs, right? You mentioned the word config earlier, Viktor, but when you think about feature flags, feature flags can have A-J-S-O-N object associated with them that toggles that application experience. And that's where we're beginning to see customers utilize prompts and putting those into the config setting and being able to roll that out to a percentage of their customers. So we're beginning to see the evolution in the use of, feature flagging to accelerate and bring safety to some of the AI development.
Viktor 00:18:02.667 How, does it work in practice? Do you kind of, I dunno, fire up cloud code or cursor or whatever and say, okay, this the information about the feature I want to develop, and then can whatever you do, do it behind the feature flag or is there some kind of enforcement or the promise of future enforcement of feature flags or it's simply up to me to say, Hey, I, I want to use feature flex, right?
Trevor 00:18:30.156 Yeah. I think today what we're seeing is one, it is, manual today where people are saying, I want to, I want to use feature flags to release this feature or roll out that feature. does that evolve over the next few months? I do think so. Right. I do think you'll start to see tools say, Hey, let me wrap this in a feature flag for you so you can turn it on or off for yourselves. Right. It's only a matter of time before tools like lovable allow allow you to essentially say, I want to turn this, this feature on and off and have a toggle on an admin console. Right. we're following the arc of history and software development. So I, I think it's only a matter of time. that is purely the gating of the feature flag though, right? That is purely the, are we essentially gonna expose new traffic to this, feature or not? That this agent is writing. what I think more about is what we're seeing teams do when they look at feature flags. So feature flags have a component of them called configurations, and those configurations are JSON objects that are associated to the feature flag treatment. What people are putting into those configurations. Are prompts today, right? They're putting in prompts, they're putting in tokens, temperature settings, et cetera. They're putting in all the core information for how the agent they're building should operate, and then they're beginning to run different prompt tests in production and saying, let's turn this one on for 5% and this one on for 5%. We're beginning to see that in how people are using feature flags today, which is kind of a preview for where we're, where we're going, right? When you think about how. feature flags at their core are just the ability to, with configs or the ability to change an application at runtime, right? 
And so how do we think about kind of continuing to abstract some of these elements so that if you wanna change a prompt, if you want to upgrade your model, you can do that quickly, but have all the guardrails that come with a software delivery mechanism, such as feature flags.
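The prompts-in-configs pattern Trevor describes can be sketched as a flag whose treatments carry a JSON-style payload of model settings. Everything here is illustrative: the flag name, fields, model names, and the bucketing scheme are assumptions, not a real vendor schema:

```python
import hashlib

# Hypothetical flag payloads: each treatment carries the prompt and
# model settings, so behavior changes at runtime without a deploy.
PROMPT_FLAG = {
    "control": {
        "model": "gpt-4o",
        "temperature": 0.2,
        "prompt": "Summarize the support ticket in two sentences.",
    },
    "candidate": {
        "model": "gpt-4o",
        "temperature": 0.7,
        "prompt": "Summarize the support ticket; lead with customer impact.",
    },
}


def get_prompt_config(user_id: str, candidate_pct: int = 5):
    """Serve the candidate prompt config to roughly candidate_pct% of users."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    treatment = "candidate" if bucket < candidate_pct else "control"
    return treatment, PROMPT_FLAG[treatment]


treatment, cfg = get_prompt_config("user-123")
# the LLM/agent call would then read cfg["model"], cfg["temperature"],
# and cfg["prompt"]; editing the flag payload changes the agent's behavior
```

Logging the served `treatment` alongside quality metrics is what turns this from a config switch into the prompt test Trevor describes.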
Darin 00:20:09.270 Is that where people are messing up now is they're treating all the AI generated code different than the human generated.
Trevor 00:20:16.825 I don't think people are treating it differently. I mean, at least in the, the practice we're seeing is there's, at least in the enterprise, people are paying more attention to that code, They're reviewing that code. They're candidly spending more time reviewing that code. We're finding more code is being written and we're finding, and I also, I, I lean another one of our products around software engineering insights, and one of the things that we're continuing to find is, yes, more code is being written. But it's requiring more and more time to review that code to make sure that, well, it operates the way it should.
Darin 00:20:44.141 So based on the insights that you're seeing. Anonymized and generalized. Are we spending more time? You just said that we're spending more time reviewing code. Where are we actually shipping code faster because we're spending more time reviewing.
Trevor 00:20:57.686 Yes. And that's kind of the, the paradox that we found ourselves in. Right? And that came from some of the door reports. We've been seeing that while we're seeing this explosion of code. I like to think of it as kind of the, the analogy that people have used with me is kind of the six lane highway ending in a two lane bridge. we've seen this explosion of code, but we still are shipping code at a slower pace today because it's, well, one more security vulnerabilities, more time to review that code. The throughput has declined over time and so I think that's one of the big problems is things are just shifting to the right, right? If you only spent 10, 20, 30% of your time coding, it wasn't a hundred percent of your time, and now you've improved that, say 10% or 20. The real productivity gains are gonna be coming in kind of the pipeline, the delivery, the shipping, that code, the testing, that code, the getting that through to production quicker and safer.
Darin 00:21:42.948 But isn't the promise to everything shifts left, and you just said it is shifting right.
Trevor 00:21:46.978 I think more and more is shifting. Like when you think about kind of the shift left, I think there is, yes, but I think the big problem that we're looking, that we're seeing today is around kind of the delivery throughput. Right. And now we can shift some of that into the left, right? We've been talking with, vendors around how do we bring our security scanning technology, closer to the developer, So that we can shift some of that before, but I think that's where, yes, I think the problem is shifting over into those delivery pipelines. I.
Darin 00:22:11.087 How do you think it's gonna shake out over the next two to three years? Let me rephrase that. How do you think it's gonna shake out over the next two to three months? And then if you can see, two to three years in advance?
Trevor 00:22:19.756 if we say on the topic of feature flags and experiments, I think you're going to see the acceleration of things such as AI configurations. configs will become more mainstream. AI configs or templated configs will become more mainstream. AI evals will kind of enter the software delivery pipelines that we, we see that in some of our more advanced customers where, they ship. Software today, they're thinking about how do we ship our new agent? How do we test that agent? How do we make sure the quality is high? How do we make sure hallucinations are low? Business impact is high, and so you're gonna see a lot of the core principles that we've seen in software delivery begin to apply to some of kind of the agent development.
Darin 00:22:57.168 Speaking of the genetic development, I'm assuming insights, or you may, may or may not see this, are you seeing real differentiation between the. AI generated code coming out of the different tools. Are you seeing one that's better than another?
Trevor 00:23:10.596 not necessarily today in what we're seeing, but yes, we aren't necessarily kinda getting to that level. Really where we're spending time on the AI Insights product is thinking about the adoption of the product, the quality of the code being written, and the, the overall impact on your, your velocity in your organization.
Viktor 00:23:27.449 I feel the problem over there, right shift, left shift right, because of AI and so on and so forth is. More related to different levels of adoption among different teams, Because whichever phase you are in, uh, left or right of whatever it is, I feel that AI give or take can equally help and speed things up. The problem with that delivery chain and that bottleneck you use, the bridge two lane bridges, is, is, is an explanation, right? Is that probably the whomever is in charge of that bridge itself is not on board just yet. Rather than, um, than anything else, right? Because I, I could argue, for example, that AI is just as helpful or even even more helpful to me, when reviewing than as when writing code, for example,
Trevor 00:24:26.018 Or, or they haven't upgraded their systems or rebuilt the bridge, in a way that, they can get that code through to production quicker and safer.
Viktor 00:24:34.109 Now we are coming to the, to the meat of, of what I believe is, is a real problem over there in those bottlenecks. And that's simply that you are trying to put this new shiny thingy into your old ugly box. And uh, yeah, our processes are going to be the same and we are going to ship, uh, do something five times faster. Well, that's a problem.
Trevor 00:24:56.427 I think that's spot on, right? I mean, I think if your systems can only move so fast, or the tools you're using today can only do so much, of course if the through, if the volume is going up dramatically, then the assembly line, unless the assembly line is updated, you're only gonna have the same amount of output. the whole shift left shift right. You know, we use this, uh, phrase internally, our security team does, and I'm not a security expert, but they talk about shifting left to shield. Right. Right. And I think that's where you're starting to see some of those elements to your point of like. Some things are shifting to the left, but if you look at kind of the, core delivery today, there's a hard requirement to rethink the pipes, rethink the infrastructure for kind of the new delivery mechanisms and ecosystem that's needed.
Viktor 00:25:35.822 This whole situation reminds me a lot. This was long time ago, right? When Agile became a thing and then. We became, my team became agile team and team, and we were delivering something every second week. Right. Every sprint. And then the testing still took half a year. Right. But we were delivering every second week. Exactly. Kind of. No, we never failed.
Trevor 00:26:00.755 And you had it behind a feature flag probably. But you were delivering
Viktor 00:26:04.310 No, no, no. Look, okay, let, let's talk real here for a second. Why do you need feature flag if you have testers working half a year on it?
Trevor 00:26:12.896 that's fair. But if you're, if you're sure if you're gonna test, if you think you can test every single case and you can test every permutation, even at different loads in different scales, the reality is some features perform differently under load and you can't test that in a simulated environment. So I think there are cases where you do want to bring that safety switch or that guardrail, to protect yourself.
Viktor 00:26:34.936 Oh, now, now you're going to convince me that we should break the silos as well, and that I should speak to testers.
Trevor 00:26:39.808 Yes, yes, yes.
Viktor 00:26:41.604 Come on. Are we going to include now ops as well? Kind of, uh, you, you, we are dismantling the whole system now.
Trevor 00:26:49.753 when we started split, one of the personas that actually ended up being one of our bigger personas, our two primary personas for feature flags, our engineers are engineering managers. the persona that actually governed the switch is the QA team in many teams, the QA team was the team that was enabling that feature for themselves, testing it, running their tests, et cetera. So it's, it is this, you're, you're right on, right in that there is this persona, there is this army of testers. But I think those testers are consumers of feature flags as well. They're not necessarily, you don't need them. If you have tons of testers, I don't necessarily buy into that. Can't test everything.
Darin 00:27:24.822 Oh wait. You can't test everything. Don't let a big company hear you say that.
Trevor 00:27:30.016 Well, I, I think I actually had a t-shirt that said Test everything, results matter. I think that was one of my slogans for split, but that was more in the concept of put it behind a feature flag. My point was more, you can't have, like testers aren't gonna catch every case, right? If you're not gonna be able to write down every single case, every simulate, every single load, you can't replicate production perfectly.
Darin 00:27:48.577 T-shirt slogans. I saw an old t-shirt slogan recently that's about 10 years old, and then there's the one you just said. Are we beginning to see that a lot of these t-shirt slogans are now coming true with AI? Like we were thinking, let's get rid of all the business people, keep the coders, all these kinds of things. Now it's like the coders are gone because AI is taking over.
Trevor 00:28:09.770 Is that a T-shirt slogan?
Darin 00:28:11.790 No, no, no. Because we've seen these really stupid t-shirt slogans over time that were good at the time. But now, if you think about them in the context of AI generating code, good, bad, or indifferent, it's like we were trying to get rid of everybody else and just keep the coders. Because we can run a business: we only need five people in the front office, give me 10 people in the back office, and everything's gonna be great. Combine that with feature flags and you've got a business.
Trevor 00:28:41.036 Yeah, I mean, I do think businesses are being built with fewer teammates. I think AI has proven that. I have several friends in my founder network who have all started businesses that have gotten to a couple million in revenue with less than five engineers, one salesperson, one marketing person. The stack is fundamentally different, the tools they're using. When we started our company 10 years or so ago, you just needed a much larger team to get to that MVP, a much larger team to get to production-level workloads, and a team to think about the marketing, the strategy, and the first couple customers. Today, those barriers have all come down.
Viktor 00:29:21.627 That brings me to the world before AI for a moment. First, let's say that you are really doing it right. You have feature flags, you're doing canary deployments, you have a really solid observability system, platform, whatever, right? You see what's happening. You can disable and enable things, potentially sometimes automatically, based on metrics, I'm assuming. Does that reduce, I'm not saying remove, but reduce the necessity for extensive testing prior to production? Just to be a hundred percent clear, I'm not saying remove.
Trevor 00:30:07.018 What we've seen with feature flags, to go back to t-shirt slogans, is the slogan "testing in production." And I think that has become a much more normalized thing over the last 10 years. The reality is it's more efficient. You don't have to stand up a pre-production environment. One of the ROIs, one of the reasons people invest in feature flags, is they can test in production, they can test on internal teams, they can get real production workloads. They don't have to have a mimicked production environment. There's, of course, gonna be some staging testing, but we're seeing people get to production a lot quicker.
Darin 00:30:36.939 The not-so-secret thing in everything you just said there is: we're always testing in production, whether you realize it or not.
Trevor 00:30:42.087 Yes. I think we all see that. I mean, you open up your Netflix or your McDonald's app, or you name it, and say, wow, this looks different today. There is always a new capability or a new feature being tested or rolled out.
Darin 00:30:55.373 But you're actually going a little bit further than what I was talking about. We're just testing in production because we didn't test well enough below production, so we should just consider production final testing.
Viktor 00:31:07.510 The way I see it, it's not because we didn't. I do agree, and I think, Trevor, you are going in that direction: production is the only real test that matters. Everything else is giving you more confidence to reach that last stage of testing, because we are just fooling ourselves that we can actually test the real production workload. We can discard obvious things before it reaches production, that I do agree with.
Trevor 00:31:39.097 Exactly. I mean, I'll give you a real-world example where we were releasing a new feature. The team tested it in our QA environment, felt good about it, did not test it in production. By the time I got my hands on it, it didn't work. There was a fundamental issue with it. And that to me just screams we need to be testing in production. Pre-production environments are never gonna fully mimic your production environment, and so testing in prod is the path forward.
Darin 00:32:06.181 So Split was acquired by Harness in 2024, is that right? Okay. I'm gonna ask one question, then we'll get into that, 'cause I have some interesting, or interesting to me, questions today. Do you still use Split, probably renamed, to drive everything within the Harness platform?
Trevor 00:32:25.662 We do. Split, the artist formerly known as Split, is now Feature Management and Experimentation. We do use it. And to your point, Viktor, I have hundreds of feature flags that need to be removed from my code, so at some point here soon we will be automatically submitting pull requests to remove those feature flags from the code. But yes, we do use it, and more teams within Harness are using it now too. I think you know this, but Harness had a feature flagging product as well, and so in acquiring Split, they merged the two products together, and that's what Feature Management and Experimentation is.
Viktor 00:32:56.548 How would the removal of a feature flag even work? I mean, I understand how you can remove the old block, or the block behind the disabled feature, that I understand. But there is some refactoring involved, right? I'm going to declare a variable outside of the flag so that I can assign one value inside one block and another one inside another block, and so on and so forth. It's not only about removing the block, because the code was written differently for feature flags, right?
Trevor 00:33:31.800 Yes. When we took on this project, there were a few things. Part one of the problem is identifying your stale flags, right? How do we identify the stale flags, or the fossilized flags, as we've been calling them internally? The flags that are ready to be removed from your code. Making sure they're not receiving traffic, or they're receiving traffic but haven't been edited in 60 days, or whatever your criteria might be. Part two is, to your point, it's not as easy as finding the 10 lines that make up this new feature and submitting a pull request to remove them. That's where it gets a little bit more complex, but that's where, working together with the coding agents, we're seeing the ability to go one level deeper.
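Viktor's point that flag cleanup is refactoring rather than block deletion can be illustrated with a hypothetical before/after. The function names, flag name, and tax logic here are invented for the example:

```python
# Hypothetical before/after of removing a stale feature flag. The losing branch
# is deleted and the winning branch is inlined; this is refactoring, not just
# removing a block. Names and logic are illustrative only.

# Before: behavior branches on the flag, with a variable assigned in each arm.
def checkout_total_before(cart, flags):
    if flags.is_enabled("new-tax-engine"):
        tax = sum(item["price"] for item in cart) * 0.08  # new percentage-based path
    else:
        tax = len(cart) * 1.50  # legacy flat fee per item
    return sum(item["price"] for item in cart) + tax


# After: the flag check and the legacy branch are gone, the variable that
# existed only to bridge the two branches is folded into straight-line code.
def checkout_total_after(cart):
    subtotal = sum(item["price"] for item in cart)
    return subtotal + subtotal * 0.08
```

For the fully-rolled-out flag, the two functions return the same totals; the point is that producing `checkout_total_after` requires understanding the data flow around the flag, which is why simple line deletion is not enough and coding agents are being pointed at the problem.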
Darin 00:34:09.976 I have questions about being acquired now. Can I ask those questions
Trevor 00:34:13.641 Sure.
Darin 00:34:14.041 we Okay, because this, this is always interesting to me. Because I still have never, let's put it this way. In my whole lifetime of career I have for companies, except for my last one, which still exist, every of the other companies that I had options in went belly. It seems like you had a pretty good track record of the exact opposite. You were Relate IQ that was acquired by Salesforce there, you were an employee I guess, or somewhere up up the food chain A little bit Employee.
Trevor 00:34:49.677 that company around the 20th employee.
Darin 00:34:51.867 Okay. So you weren't founder level, but then you went and did Split, and it got acquired with you as a co-founder. And you've stuck around at Harness for roughly two years now post-acquisition, and you seem to still like it there. So if anybody from Harness is listening, please yell at me; I'm just trying to ask some questions here. That's fairly, I'm not gonna call it odd, but it doesn't happen most of the time. What makes you wanna stick it out and keep going?
Trevor 00:35:25.191 I'm not just saying this because my boss might listen at some point. No, Harness has a unique operating model. There are actually a lot of founders at Harness. There are several founder-led, acquired businesses that have been rolled into Harness, and there are teammates who are former founders that have come to Harness. One of the primary reasons I think founders like myself like operating within Harness is we run what's called a startup-within-a-startup model. We have 17-plus products or modules within our product. Each one of them has a product leader; each one of them has a GM leading it. And we get this opportunity to continue to build businesses within Harness, right? So as a founder, I'm really still kind of running Split, but now FME, as if it was Split. I just have the benefits of a 500-person sales team, a marketing team that's building the Harness brand, a finance and HR team. But I still get the ability to build the product, deliver the vision, and drive execution. That operating model, where we enable founders to run these businesses within Harness, is candidly one of our superpowers. It is the reason that Harness continues to innovate and continues to bring new products to market. And the second part is, I joked about my boss earlier, but Jyoti, the founder and CEO of Harness, is a phenomenal operator. His vision pushes the GMs like myself to think about how our products will evolve. We came to talk about feature flags today. Feature flags is a 20-year-old topic, right? Ten years ago it was a thing. Today it's a table-stakes piece of software that everybody's using, but it is fundamentally changing over the next few years, and he pushes us as product leaders to rethink an established product in an established market, and rethink how it's going to evolve.
And so, to me, that's the best part: you still get to operate like a founder in all the different facets of it.
Darin 00:37:09.289 And of course, everybody sits around the campfire and sings kumbaya. There's never any problems
Trevor 00:37:14.309 Oh.
Darin 00:37:14.929 platform or
Trevor 00:37:15.859 Oh, no, there are definitely problems. It's probably one of the things that I, myself, and the other GMs think a lot about: how do we make sure that our products continue to be deeply integrated? One of my products, the Software Engineering Insights product, should be more deeply integrated with all of Harness's products. It doesn't integrate with Feature Management and Experimentation today, but it should, right? You can imagine that teams who use feature flags move faster. Teams that use feature flags, going back to our whole testing conversation, should have fewer issues. So you should start to see that correlation, and that's where I think the opportunity continues to exist, to continue to string these products together.
Darin 00:37:54.105 For the people listening at Harness, don't take this personally, but if the shoe fits. And this isn't just Harness, this is a lot of other companies that have done acquisitions: these companies get acquired that have great names, easy names to remember, and then Split gets turned into, what was it again? Exactly, I can't remember it. This is a problem in our industry. Come on, there's gotta be something better. I liked how you had it as "formerly known as." Will that ever be fixed?
Trevor 00:38:24.982 It's interesting you bring that up. I've been thinking a lot about feature flags, experimentation, our module name, and how the category is going to evolve. It's called Feature Management and Experimentation because that's really how the category has evolved over the arc of time, if you look at the players in my space. Do I think we keep that name? Probably not. I think over time we lean into a simplified terminology that is more about feature delivery, more about what you're trying to accomplish, and that aligns more to the other products of Harness: CI, CD, et cetera.
Darin 00:38:57.970 What's the hardest thing to let go of after the close of the deal?
Trevor 00:39:01.591 No, that's a deeply personal question. What was the hardest thing to let go of? I think anytime you get acquired and you start working within a new business, the operating model and structure change: how decisions get made, how you're thinking about the problem. Just adapting to the new norm is probably the hardest thing in any acquisition. And laid into that answer is what's the hardest thing to let go of, or what's the challenge in an acquisition. For me it was going from running my own business and working with my co-founders to operating within Harness. Now, I say that with the lens that what Harness brings in the startup-within-a-startup model made that transition pretty simple and straightforward. But for any individual going through an acquisition, the world changes. Our job as founders is to hopefully make that change as small as possible and make sure that we preserve the team, the culture, the energy, the product, our customers. But the world does change, and so we have to figure out how to bring ourselves, our team, and our customers into that change. And I think we've done a good job at that so far.
Darin 00:40:10.486 What was the easiest thing to let go of? Boy, that's a long pause. You got to the hardest one pretty quick.
Trevor 00:40:18.967 yeah,
Darin 00:40:19.612 don't have to deal with payroll anymore.
Trevor 00:40:21.337 I was gonna say, I think when you're a founder, the unspoken stress is probably the biggest thing. That's the existential-crisis question that you're asking yourself every night and every morning: where do we need to go? How does this business survive? There were countless times where we came close to missing payroll. Fundraising during COVID was quite an extreme scenario. The Silicon Valley Bank crisis was an extreme scenario. So I think all of those moments are the things that were easier to let go of. But still, the journey was incredible. I would do it again in a heartbeat.
Darin 00:40:56.957 You've already sort of answered this, but I'm gonna try to clarify so maybe we can put a good bow on it. What advice are you gonna give to a founder that's six months in post-acquisition? That's, to me, the sweet spot, that six-to-nine-month mark. Okay, I'm still in, I've been operating, but now I'm either getting rolled in, or maybe it's like Harness, where it's a startup of startups. It could be one or the other. What are you telling them? Because, as you sort of alluded to, they've sort of lost who they were, their identity, right? You're now this thing within this new organism.
Trevor 00:41:32.662 Yeah, it's interesting you say that. I don't think you ever lose your identity as a founder or a co-founder, so I think you always hold onto that. My co-founders at Split and I always talk about how the only title that ever really mattered was, hey, we were the co-founders, we started it; everything else didn't matter. That is part of your identity, and that founder DNA is always there. If I had to give some advice, I would say probably the biggest thing is just figure out what your why is, and identify that. Once you've identified your why, in terms of why you're sticking around, why you're excited, what you're looking forward to over the next few years, it becomes a lot easier. For me, and you didn't ask this, but my why is, if I look at what we're building at Harness, Harness is on an incredible trajectory. We just raised our last round, we're at several hundred million of revenue, and there's an opportunity to really build a great business and continue to, as we talked about earlier, solve some of the fundamental problems of software delivery that are becoming bigger and bigger. Those are some of my whys. And I think everybody on the team has to find their why, and every founder has to find their why in that new world.
Darin 00:42:38.560 What's one thing you would tell engineering leaders, something you wish they really understood, about shipping software securely?
Trevor 00:42:46.006 I get into these conversations with teams a lot, and it's funny we talked about quality and testing during this conversation: speed and quality are not a perfect trade-off, and I think we posit them as if they are against each other. If I was able to talk to every engineering leader, I'd share a phrase one of our engineering leaders here at Harness uses: there are two-way doors and one-way doors. Two-way doors: let's move quick, let's get through the door, and if we make a mistake, we can pivot, we can change. One-way doors are harder to come back from, right? And so I think it's about how we move faster and try to break that trade-off of speed and quality.
Darin 00:43:21.970 If people are trying to figure out "we just can't ship," what do you tell them? Let's go back to Viktor's earlier example: engineering shipping every two weeks, but there are bottlenecks downstream, QA, whatever, to actually getting it out and being used. What can we tell people? Is there hope?
Trevor 00:43:40.076 I see this all the time where, for teams, the problem is too big. And this gets into shipping every two weeks, right? We talked a little bit about feature flags during this podcast, and feature flags allow you to break down the problem. Everything we ship now, at least on my teams, is milestone-based. What you get from me in that first release is a sliver of what we're building towards, but we're slowly showing you where it's going, and we're getting it out to you quickly. And I think when you break the problem down into micro-milestones, you're able to break down those silos and move a lot faster.
Darin 00:44:11.452 So you've now created yet another new thing: MMDD, micro-milestone-driven development. Or is it delivery? Like test-driven development. Whatever it'd be.
Trevor 00:44:21.597 It's funny you say that. All of my team's product specs do have a series of milestones at the front, in terms of what those customer-facing milestones are, and they should be delivered within weeks. I don't like when projects take months or quarters.
Darin 00:44:34.691 that's not a project, is it That's
Trevor 00:44:37.226 shouldn't be.
Darin 00:44:38.561 exactly, well, if, if it's that big, then it's not a project. It's something that's bigger than a project.
Trevor 00:44:43.766 I agree.
Darin 00:44:44.861 So all of Trevor's information is gonna be down in the episode description. Harness is at harness.io, and of course, if you wanna check out Feature Management and Experimentation... look, Harness, and every other company in the world: please come up with better naming. Please. You can find it, again, at harness.io, and if anybody's upset with me, you can email me or call me. Everything's fine. Trevor, thanks for coming on today.
Trevor 00:45:12.036 Of course. I appreciate you having me.