Pete 00:00:00.000 My data is late. My data doesn't look right, by some definition of doesn't look right. And I'm getting weird errors and I don't know why. There are Gartner terms that are equivalent to those things, like data quality, observability, data downtime. But broadly, it's kind of those three things.
Darin 00:01:23.785 Viktor. With the advent of ai. How often have you been actually writing front end things by hand?
Viktor 00:01:31.000 Front end or not just.
Darin 00:01:34.870 Yes.
Viktor 00:01:37.420 I'm spending all my time now in TypeScript, in Node.js. Um, mainly because that's, you know, the equivalent of what Python is for machine learning. Node.js these days is that for MCPs.
Darin 00:01:51.600 But you're still trying to stay far away from front end stuff, right.
Viktor 00:01:55.540 No, I don't care. I mean, AI is writing. I'm just kind of like, I'm, I'm, I'm a reviewer now.
Darin 00:02:00.664 On today's show we have Pete Hunt on. He's the CEO at Dagster. Pete, how you doing?
Pete 00:02:06.019 I'm doing great. How's it going?
Darin 00:02:07.684 Good. Now, Dagster, we'll talk about Dagster, weave it in throughout, but Pete was on the original team that brought React to life, boo. So we can blame you is what I'm hearing.
Pete 00:02:22.999 Uh, is that a, a JS boo, or is that a, a something else boo?
Darin 00:02:26.764 That's just JavaScript in general, boo.
Pete 00:02:28.934 Ah, okay. We, we can chat programming languages if you want.
Darin 00:02:32.284 Oh, we can go, we'll go down all the rabbit holes. That
Viktor 00:02:36.304 Uh, we'll end up in rust somehow.
Darin 00:02:41.554 So you were at Facebook, I guess at the time, before it became meta when all this came to life? Right.
Pete 00:02:49.324 I still call it Facebook. Yeah,
Darin 00:02:50.884 Okay. You still call it Facebook? Okay. Good. okay, so whose genius idea at Facebook was it to come up with a framework that has been unleashed on the world?
Pete 00:03:00.741 My good friend Jordan Walke. You can blame him. But, you know, it was one of those projects that kind of came out of necessity. He was working on the ads create flow at Facebook, which was this, at the time, very complex UI, and still probably is very complex. I haven't bought ads on Facebook in a while, but it was a very complex UI and a lot of money flowed through it, like all the self-serve dollars. And they had this big tangled mess of JavaScript spaghetti, and there were probably three different teams trying to solve that problem. And, you know, a bunch of really talented people working on it. And React was the technology that people ended up liking. I ended up liking it. I was one of the first big users of the framework, and then before I knew what was happening, I was kind of contributing code back to it and becoming a member of the team. And so it was one of these things that kind of very organically came together. But it was a lot of fun to work on.
Darin 00:03:59.693 Okay. I don't hear the word fun and JavaScript usually in the same sentence.
Pete 00:04:05.357 Well, I would argue that JavaScript is not so fun, but modern TypeScript is pretty fun.
Viktor 00:04:16.217 I can actually confirm that. You know, it, it.
Pete 00:04:19.517 There we go.
Viktor 00:04:20.717 No, no. I mean, TypeScript, it might or might not be my favorite language, but, uh, it definitely feels much better than working, you know, straight with Node.js, right, with JavaScript. It's almost as if it's a wrapper, and I'm going to be mean here as well, that converts the madness into something that actually looks like a real language.
Pete 00:04:50.278 You know, I agree. Over the years, the core JavaScript language has gotten a lot better. And I think that when you actually stack up TypeScript running on Node.js versus Python with Pyright or whatever type checker you want to use, I think TypeScript and Node.js actually compares pretty favorably.
Viktor 00:05:11.362 Oh, I choose TypeScript and Node.js over Python any time of the day. Even the old version of Node.js, the one that isn't improved, if you ask me.
Darin 00:05:20.942 Well, the reason why TypeScript is good is one reason, and it's in the name: type. I dislike loosely typed or non-typed languages, but again, I come from a Java background, so there's a whole different bias there. So we're talking languages. So you were at Facebook, you worked on React, you left Facebook. Did you leave Facebook before it became Meta?
Pete 00:05:48.344 Oh yeah. Yeah. I left a long time ago.
Darin 00:05:50.615 And you went off and did something that then got you acquired by
Pete 00:05:55.784 Twitter. The reason why I got involved with React in the first place, I was a generalist at Facebook. I was working on, you know, everything from the MySQL infrastructure for photo tagging, to the endpoints that support the mobile app, to front-end code that runs in the desktop web browser. So it was kind of a wide variety of stuff there. Uh, we acquired Instagram. I was one of the first people to go over to the Instagram team to kind of bridge their engineers with our engineers, and that was a lot of fun. One of the first problems that we had to solve there was around trust and safety. So specifically, think about fake accounts. Back then Instagram was already very popular. It had gotten acquired for a billion dollars. They had a lot of users, and this giant list of regular expressions that was like, if your email address ends in a Z and it's at hotmail.com.au, then reject the signup, you know, stuff like that. Because clearly in the middle of the night, one night, there was some spam attack, and that was the magic regular expression that stopped the spam attack that day. And, you know, the first thing we did was kind of plug them into Facebook's really cutting-edge site integrity infrastructure, which is large-scale, streaming, semi-real-time infrastructure that tries to find spam, basically. I don't know how it works now, but at the time it was a lot of real-time counters and heuristics based on counters and regular expressions and stuff like that. And we were like, man, you shouldn't have to sell your company to Facebook to get access to this technology. And so we ended up starting a company to try to bring that type of technology to everybody else who, you know, wasn't Facebook, right? That's like the world's most common startup playbook back then: Facebook or Google or Microsoft built some internal tool or infrastructure. Let's go bring that to everybody else.
And we had a good, you know, three-and-a-half-year run doing that. We served a lot of customers. One of 'em was Musical.ly, which later became a little app called TikTok. So we were kind of one of the first trust and safety tools there. Um, anyway, you know, we wound that down, but that's really, I think, where I cut my teeth on data and infrastructure specifically. Like, I had done a little bit of infrastructure work at Facebook, but even back then it was such a big machine at that point, and there were lots of abstractions over the hardware and over the databases, that, you know, it was pretty sugar-coated. But man, when you're thrown into a startup environment, all you have is, uh, a go-right-to-the-metal, you know, virtualized cloud environment. Like, really hardcore. And I'm just like, we're very spoiled today with our modern cloud infrastructure, whatever, but still, there's no netting between you and downtime. And so I learned a lot about how to architect large-scale systems, how to know when to take on technical debt and when not to take on technical debt. And most importantly, we did a lot of real-time stream processing, so I learned a lot about how to do that, and developed kind of an appreciation for sketching algorithms, which are these approximate counting and approximate statistical data structures, which was pretty fun at the time.
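The sketching algorithms Pete mentions trade exactness for tiny, fixed memory. One textbook example is the count-min sketch, an approximate counter whose estimates can only overshoot. This is a minimal illustrative sketch of the structure, not the implementation his startup used; the width/depth values are arbitrary:

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counter: fixed memory, never undercounts."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One independent hash per row, derived by salting BLAKE2b.
        for row in range(self.depth):
            h = hashlib.blake2b(item.encode(), salt=row.to_bytes(8, "little")).digest()
            yield row, int.from_bytes(h[:8], "little") % self.width

    def add(self, item):
        for row, col in self._buckets(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # True count <= estimate; hash collisions can only inflate buckets.
        return min(self.table[row][col] for row, col in self._buckets(item))

sketch = CountMinSketch()
for _ in range(1000):
    sketch.add("spam-trigram")
sketch.add("rare-trigram")
print(sketch.estimate("spam-trigram"))  # at least 1000, typically exactly 1000
```

The appeal in a streaming anti-spam setting is that memory stays constant no matter how many distinct items flow through, at the cost of a bounded overestimate.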
Viktor 00:09:07.292 I'm still. Stuck with the image of you trying to understand very, very long, regular expressions and rewrite them with something else. I would quit my job, man, before I.
Pete 00:09:21.747 It's so funny you say that, because number one, I think for every company or every piece of infrastructure, a rite of passage is to have downtime due to a regular expression. Like, somebody writes a regular expression somewhere and it has n-squared performance, or n-squared complexity, and it just takes down your service. That happened to us, and we learned our lesson and used RE2 instead of the built-in, uh, regular expression engine we were using. And that made things a lot better. But actually, uh, you know, one of our most used products was a regular expression editor, to help people not have to write these convoluted expressions and instead use a simplified syntax to do it. So it's kind of a funny story.
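The blowup Pete describes is catastrophic backtracking: nested quantifiers give a backtracking engine exponentially many ways to split the input before it can conclude there is no match. RE2 avoids this by construction (it guarantees linear time). A hedged sketch of the failure mode using Python's backtracking `re` engine, with a classically pathological pattern chosen for illustration:

```python
import re
import time

# Nested quantifiers: the engine tries every way to partition the run
# of 'a's between the inner and outer '+' before the '$' fails.
evil = re.compile(r"(a+)+$")

def time_match(n):
    text = "a" * n + "b"   # trailing 'b' guarantees the match fails
    start = time.perf_counter()
    evil.match(text)
    return time.perf_counter() - start

# Runtime roughly doubles per extra character, so 22 chars dwarfs 16.
print(time_match(22) > time_match(16))
```

Six extra characters here cost roughly a 64x slowdown, which is why an unlucky input string can take down a service that an SLA-sized input never stressed.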
Viktor 00:10:07.526 I'm more interested in, I assume that that tool would do it kind of like, I don't have such a big problem writing regular expressions. I have a huge problem understanding somebody else's regular expressions. It's similar to the experience I had with Perl, right? Kind of, yeah, I could write Perl back in the day. Give me somebody else's Perl code and I have no bloody idea what's happening.
Pete 00:10:30.295 That's right. Uh, you have to be in the exact same room you were in, listening to the same music you were listening to, getting into your same mind space, before you can finally divine what the regex actually does. Right. Yeah. That's why we actually, uh, we had test cases for our regexes. And that was really important, because every time we would go in and make tweaks to them, if you missed one character, it could match the whole string, or match the set of all strings. And if you overmatch in an anti-spam environment, you'll just block everybody's posts, right? So that's not good.
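The test-cases-for-regexes practice is easy to reproduce: pin down what a rule must and must not match before anyone tweaks it. A minimal sketch; the `hotmail.com.au` rule here is invented to echo the earlier anecdote, not an actual rule from Instagram or Twitter:

```python
import re

# Hypothetical signup-rejection rule from the anecdote: flag addresses
# whose local part ends in 'z' at hotmail.com.au.
SPAM_SIGNUP = re.compile(r"^[\w.]+z@hotmail\.com\.au$", re.IGNORECASE)

MUST_MATCH = ["botz@hotmail.com.au", "xyz@HOTMAIL.COM.AU"]
MUST_NOT_MATCH = ["alice@hotmail.com.au", "botz@hotmail.com", "z@gmail.com"]

# Overmatching in anti-spam blocks real users, so both lists matter.
for addr in MUST_MATCH:
    assert SPAM_SIGNUP.match(addr), f"should match: {addr}"
for addr in MUST_NOT_MATCH:
    assert not SPAM_SIGNUP.match(addr), f"should not match: {addr}"
print("all regex cases pass")
```

Running both directions on every tweak is exactly the guardrail against the "miss one character and match the set of all strings" failure Pete describes.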
Darin 00:11:02.193 I wanna step out of the story for just a minute. I wanna go back to the acquisition of Instagram by Facebook. While you were talking, I went and looked it up. That was April of 2012. Here we are, almost in April. It's actually January of 2026. Almost. No, 2026. Last year was
Viktor 00:11:24.936 20. You see?
Darin 00:11:28.066 Math, and not only math, date math. If anybody ever tells you date math is easy, they're lying to you. But anyway, we're almost 14 years after that acquisition for roughly a billion dollars. It was never truly disclosed, but the rumor mill said it was about a billion dollars. Back at the end of last year, in September-ish, Atlassian acquired DX for the magic number of $1 billion. Now, this is your CEO hat on right now, Pete, so just keep this in mind. It seems like after 14 years, just the rate of inflation would've taken that $1 billion up to something else. What do you think?
Pete 00:12:09.408 You know, unicorns aren't what they used to be, right? At least that's what all the VCs say on Twitter. You know, I follow all those guys, and yeah, I just saw somebody the other day talking about how you're not supposed to triple, triple, double, double anymore. You're supposed to, I don't know, five-x in your first year or something, or ten.
Viktor 00:12:26.449 Yeah. Triple, triple, double, double. That's dead. That's gone.
Pete 00:12:30.049 yeah, yeah. What are you even doing,
Viktor 00:12:32.149 Yeah. How do you, how, how dare you triple the ARR this year, Carol?
Pete 00:12:39.049 I know, I know. It's ridiculous. So, um, I don't know, I've been around for a while. I mean, we're talking about stories from 2012 at this point. And, um, I do feel like every year they always talk about the Series A crunch, or how hard it is to raise money, or how high the expectations are. And yeah, I think there's some truth to that every year, but the fact is that a lot of deals get done, a lot of growth multiples, a lot of valuations, and you gotta take everything you read on Twitter with a grain of salt.
Darin 00:13:08.967 Wait, I thought that was gospel on Twitter. Or X, whatever the case is. So I noticed you're still using Facebook and Twitter, so you're, you know, not letting go of the names. Uh, let's jump back into the story. So you did this little startup, you spun yourself out effectively from Facebook, and then you get acquired by Twitter.
Pete 00:13:27.887 That's right.
Darin 00:13:29.807 and for the reason of this whole trust and safety type thing, correct.
Pete 00:13:35.117 Yeah. The timing was very, you know, interesting. We started the company in 2014 and we sold in 2018, and that was right around the time that, uh, Twitter started to really prioritize trust and safety and started to take a different approach to it. We worked hard and built a really good product, but with all these things, you know, there's a combination of working hard and luck. And so our timing was, frankly, very lucky too, which in hindsight is very clear. But yeah, you know, Twitter was prioritizing trust and safety, they were our second largest customer, and we ended up, you know, selling to them.
Darin 00:14:11.693 So what was it like working on the infrastructure of Twitter? I'm assuming you got to go and hug the servers that were serving Justin Bieber's tweets, right?
Pete 00:14:21.408 I don't know if I saw the specific server for Justin Bieber, but they did let us go on a data center tour. One of the things you do on the data center tour is you put the hard drives into the shredder, and that was super fun. I don't know, have you ever shredded a hard drive?
Darin 00:14:35.613 I haven't, but I know what it's like.
Pete 00:14:38.478 It is extremely satisfying.
Darin 00:14:42.088 It, it, you can't compare it to paper shredding, let's put it that way.
Pete 00:14:45.558 that's right, that's right. I mean, these industrial shredders are, pretty serious pieces of machinery. Uh, but the data center tour was definitely one of the highlights.
Darin 00:14:53.463 So how long were you at Twitter? So you got acquired in 17 ish, right? Or 18.
Pete 00:14:58.878 It was '18, so I was there from '18 to '22.
Darin 00:15:01.503 Okay, so during the upheaval of everything. So, what was it like being there during COVID? I shouldn't have said that out loud, but let's ignore the whole acquisition and everything else. Like, just magic-hand-wave that away for a moment. I know that's part of the story. What was it like? Because it seems like it was just a mess, to put it mildly.
Pete 00:15:20.943 So, I mean, there were a number of things going on. First of all, when I got into trust and safety, uh, it was not a political thing or a politicized thing. I think that since then it's become one. People's policies with regard to trust and safety, who's allowed on a platform, who's not allowed on a platform, what constitutes hate speech, what constitutes a violation of terms of service, it's a very hot-button issue right now. When I got into it, it was really about, you know, stopping child predators and obvious Viagra scams and stuff like that. So, um, you know, over that period that I was there, it got progressively more and more politicized. Which is kind of one of the reasons why it's just not a particularly fun category to be in anymore, at least for me. So I ended up moving to data. But when we were there, there were a number of challenges that we faced, right? So, first of all, there was an election during that period, and every election there was kind of an election project to try to defend against any sort of violations of the policies there. Um, we also had COVID-19, and there were a bunch of health policies that rolled out, and enforcement of those policies, and doing that at scale. Uh, one of the big challenges that is not talked about a lot, though, is that there was, at least at the time, a large manual review team based, uh, in a couple of different countries. But Manila was one of the big cities, and Manila locked down really hard when COVID started. We basically relied on manual reviewers in office to do the work. And we had to do this fire drill where we're like, okay, we just lost all of our manual review bandwidth, what are we gonna do?
And so there was this really fast-moving project there to figure out, you know, a combination of how do we do more with automation and less with human review, and how do we get the human reviewers back online? And so there's this shipping-laptops project. Oh, you know, the web apps don't load on residential internet because it's, you know, too slow, so we have to optimize those. There's also a big security concern, right, when you go from a secure office facility to a working-from-home environment. And so we did a bunch of work on security and access control. So there was just a ton of, uh, craziness over there back when I was there. I learned a lot, for sure. Like I said, I don't really wanna work in trust and safety anymore, but I did learn a lot during that period.
Darin 00:17:50.932 Yeah, I'm trying to figure out how you would write a regex for a hot-button topic. I don't know how that would match.
Pete 00:17:55.633 Yeah. Well, I mean, I could tell you how that kind of thing does work, if that's interesting. It's, like, technically very interesting, actually.
Darin 00:18:07.060 Sure. Now you're not under any NDAs that would get you in trouble at this point, right? You can keep it high level.
Pete 00:18:12.805 I mean, these techniques are kind of well known. Um, so I'm not describing any secret sauce or proprietary technology. But there were a number of different ways, back then at least, that you would detect spam. With the rise of LLMs, a lot of this I think is changing, though a lot of the existing techniques are still used because of the scale, and LLMs are, you know, a lot more expensive than some of the heuristic-based techniques. So there's a machine learning side of the house that would train on labeled data and try to predict, you know, whether a post or tweet or whatever would eventually get flagged and taken down for violating a policy. Now, you would think that you just engineer the right features, throw it into a statistical model, and have it learn the right coefficients. But the problem is that there's an adversary, and there's latency between, you know, when the post happens and when you get the labels and when you retrain, right? So if the adversary is actively trying to avoid what you're doing, the labels are gonna be out of date by the time you get them. So the thing with these kind of machine-learning-based techniques is they are very scalable and they do solve a large part of the problem. But my side of the house was mostly focused on emerging threats. Like, hey, we've got something that started five minutes ago and we have to do something about it now, before it takes over the whole site. And so in those situations we do a number of interesting things. I'll give you one example. We would break posts up into bigrams and trigrams. Well, actually, it would be trigrams and skip-grams. So the idea here is, you run a bunch of text normalization, and you have groups of three words or three tokens. And then you look for massive shifts in those over time. So there's a certain baseline of how often a trigram is gonna be used on the site.
And if you see a really big increase in the use of a particular trigram, that might be Justin Bieber dropping a new single, back then, or it could be a spam attack. But the thing is, you can surface your top trigrams, maybe filtered by certain criteria, to a small number of human reviewers. And they can very quickly flag, oh, this looks suspicious, this looks suspicious. And then you can really quickly put together a regex rule, or some other sort of heuristic, that stems the bleeding a little bit. Now, an adversary can adjust over time, but what ends up happening is that this heuristic starts to feed into those models and starts to produce labels for those models. So the heuristics and the models kind of work together. It's a very interesting problem when you have this adversary trying to actively avoid your systems. So you end up with these techniques that seem kind of hacky, but end up actually working really well at scale.
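The trigram-spike idea can be sketched in a few lines: keep a baseline count per trigram, then flag the ones whose current count jumps far above it. The threshold, smoothing, and example posts below are invented placeholders, not Twitter's actual parameters:

```python
from collections import Counter

def trigrams(text):
    # Normalize, then emit overlapping 3-token windows.
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

def spikes(baseline, current, ratio=5.0, min_count=3):
    """Trigrams whose current count far exceeds the (smoothed) baseline."""
    return sorted(
        t for t, c in current.items()
        if c >= min_count and c / (baseline[t] + 1) >= ratio
    )

baseline = Counter()
for post in ["check out my new single", "what a great day today"]:
    baseline.update(trigrams(post))

current = Counter()
for post in ["free viagra click here now"] * 5 + ["what a great day today"]:
    current.update(trigrams(post))

print(spikes(baseline, current))
# ['click here now', 'free viagra click', 'viagra click here']
```

The spiking trigrams are exactly what would go in front of human reviewers, who then decide whether it's a new single dropping or an attack worth a quick heuristic rule.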
Darin 00:20:49.425 So again, what I'm hearing is if we had no users, our life would be easy.
Pete 00:20:53.358 Wait long enough. You know, we're all gonna be AI pretty soon, right? So will any of this matter?
Darin 00:20:58.444 That's a good question. So you left Twitter in '22 and you went over to Dagster, right? There was not much of a break there, but you just didn't, you know, parachute in as the CEO.
Pete 00:21:10.789 Hmm.
Darin 00:21:11.194 What did you do?
Pete 00:21:13.249 Well, I had been at Twitter for a while, and like we talked about, it was a lot of work, and I met some great people there and I learned a ton. Uh, but the work was definitely intense. And my good friend Nick Schrock had called me up while I was there, and he was like, hey man, I'm trying to recruit a head of engineering. Can you help me out? And so I referred him a couple of people, and I was kind of helping him figure out the strategy and stuff like that. And I eventually realized, you know, I might be ready for a change myself. It was the end of 2021, early 2022. The pandemic had been going on for a while in San Francisco. I had just had a kid. And if you're gonna change one thing, why not change everything at the same time, right? So I ended up, you know, having a kid, moving, and changing jobs all at the same time. And that's kinda how I ended up at Dagster.
Darin 00:22:09.954 you ended up in engineering because I assume that's still your first love, or do you enjoy spreadsheets more today?
Pete 00:22:17.879 I'd say I like engineering. I like selling.
Darin 00:22:20.934 You like sell, you like selling.
Pete 00:22:22.589 I like selling. Yeah.
Darin 00:22:24.384 Tell me more.
Pete 00:22:26.099 Well, in its purest form. And, you know, I think that it is fair to be cynical of sales and sales process and marketing, right? But in its ideal form, you're working at a company with a good product, and it solves a problem for somebody, and the price that they pay for the product is less than the value they get from it. I think that's true for most tech infrastructure products, except for maybe Datadog, which, you know, bankrupts everybody. I'm just kidding. I'm just kidding.
Viktor 00:22:59.601 No. Yeah, I don't think you were kidding.
Pete 00:23:02.481 Only sort of. But in many ways sales is about clarity and communication, right? And it's really about understanding what that person needs, understanding what you've got, forming a point of view as to whether your thing helps them or not, and then figuring out how to communicate that in a way that that person understands. I learn a lot about the world doing that type of stuff. You learn a lot about your product. You learn a lot about yourself. You learn a lot about how you communicate. And you also learn a lot about what other people do, which I think is really fun. So for example, when we have sales conversations with Dagster users, right, we have people that launch rockets with Dagster. We have people that ship porta-potties with Dagster. And we have life sciences companies and financial services, all these different industries that are all building data pipelines, right? You get to go in there and learn a little bit about where your mortgage comes from, or how prescriptions are tracked, and stuff like that. And I just think you get a little taste of all these different industries, 'cause you ask these data people, what's your problem? They tell you their problems. And in many ways, everybody's got similar infrastructure and data problems, but they're working on such different things. It's kind of cool. It's a cool way to experience the world, I think, just going in and talking to lots of people about what they do.
Viktor 00:24:23.395 I'm going to send this clip to my salespeople, 'cause you just said understanding what they need and understanding what we've got. My experience is understanding what they need and then making the organization make it happen. That's my experience with sales. They just ask for this: build it.
Pete 00:24:42.895 Yeah. You know, it's not quite as simple as that, right? Like, as you know, as a CEO, you get to do the fun parts of selling, and then there's navigating the Byzantine procurement process of an enterprise, and negotiating the data retention terms with the security team, and going back and forth on payment terms and stuff like that. That is a little less fun, a little less creative. But, you know, I think there are parts of selling that are really fun and creative and interesting and exciting.
Darin 00:25:12.623 One thing you said there was, even though you have lots of different industries, there's a lot of similar problems, and a lot of similar solutions to those problems, even though it's completely different data sets. What are some of those problems that you're seeing?
Pete 00:25:28.228 My data is late. My data doesn't look right, by some definition of doesn't look right. And I'm getting weird errors and I don't know why. There are Gartner terms that are equivalent to those things, like data quality, observability, data downtime. Right? But broadly, it's kind of those three things. Everybody's got an SLA for when the data has to arrive, and what its characteristics need to be, and what happens when there's a failure upstream, and how do you handle that? Do you move forward with partial data? Do you not move forward and page somebody? Do you have some sort of automated recovery process for that pipeline? There are a lot of interesting problems in there. But fundamentally, again, whether you're testing rocket engines or you're delivering porta-potties, it's the same thing, right? Or it's the same classes of problems, but very different data sets and sometimes different infrastructure.
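"My data is late" usually reduces to a freshness SLA check: compare each dataset's last successful update against its allowed staleness and decide whether to page. A toy sketch; the dataset names and thresholds are made up for illustration, and this is not Dagster's actual API:

```python
from datetime import datetime, timedelta, timezone

SLAS = {  # dataset -> maximum tolerated staleness
    "orders": timedelta(hours=1),
    "invoices": timedelta(hours=24),
}

def late_datasets(last_updated, now):
    """Datasets whose last successful run is older than their SLA allows."""
    return sorted(
        name for name, sla in SLAS.items()
        if now - last_updated[name] > sla
    )

now = datetime(2026, 1, 15, 12, 0, tzinfo=timezone.utc)
last_updated = {
    "orders": now - timedelta(hours=3),    # breached: 3h staleness > 1h SLA
    "invoices": now - timedelta(hours=2),  # fine: 2h staleness < 24h SLA
}
print(late_datasets(last_updated, now))  # ['orders']
```

The interesting policy questions Pete lists, partial data versus paging versus automated recovery, are all about what you do with that `['orders']` list once it's non-empty.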
Darin 00:26:23.109 What if you're delivering a porta-potty to a rocket engine, I, you'd be in trouble. Right?
Pete 00:26:27.733 I think it depends on if you put it in the front of the rocket engine or the back, right?
Viktor 00:26:34.697 I'm, I'm not sure how much companies. Already came to that realization, but now that everybody's scrambling with AI and failing big time in enterprises, not on individual levels, just to be clear, I, I feel that more often than not, those failures are tightly related to data. Kind of like you just cannot extract the information that system needs.
Pete 00:27:01.228 There's a lot to unpack in that line of questioning. You know, it's like, why do all these data projects fail? I mean, a really common thing that we hear from our stakeholders, um, and customers and users, and even internally for a while, is like, the data team gets this urgent request from some stakeholder team. Oh my God, we have to know thing X about the business, or thing Y is down, it's very urgent. And the data team scrambles and they build a dashboard, and then nobody uses it, or they look at it once and say, okay, looks right, thank you. It's a very, very common thing that data teams have to deal with. We built this product called Dagster Compass that we've been working on to tackle that specific problem. But I think a lot of the problems in this space are actually pretty similar to what you see in consumer apps. Like we were talking about, I used to work on Instagram and Facebook and Twitter, these consumer apps, and getting people to engage on these things. There are armies of PhDs and software engineers and financial analysts deployed to try to get people to become daily active users of these social products. Now, is that healthy for teenagers? Probably not. And, uh, that's, I think, a totally separate conversation. But the fact is, you know, it's a well-recognized problem that getting people to engage with your product is challenging, right? And building that habit is challenging. And I think that data teams have problems bringing their data sets to market, right? And their dashboards to market. And there's a business leader out there. How do you get that person to stop paging the data team and asking them questions, and start getting them to self-serve on their own data? And I think that that is an area where a lot of these AI companies have not focused yet. I think there's a lot of focus on multi-agent orchestration, for example, where you have
all sorts of cool agents calling other agents. And, you know, you have evals, and you have multiple different tools, and you're starting to put together this really, really technically interesting thing. But at the end of the day, if nobody uses it, what value does it really produce? So you actually have to, I think, focus on bread-and-butter usability and awareness, and almost internal marketing for the data and data products that you're producing. I dunno, does that make any sense at all?
Viktor 00:29:23.766 Yeah, I mean, that's very similar to other areas where I'm much more involved. Like, I'm heavily into developer platforms, for example, which feels like the same problem. Kind of, yeah, we are building this thing that will help developers do something, and then nobody's ever touching it, because it was really a one-off thing, or we don't really know what they need. We just assume that they need this. And, uh, at least in that area, I feel that the majority of companies are just going in circles and wondering, why is this not working?
Pete 00:30:02.377 Yeah. We need to get some of that ai. Can someone go buy some of that AI for us? You know that that's the approach some of these companies take.
Viktor 00:30:10.067 Yeah. I mean, I feel that the problem with AI is the problem that we continuously have with every wave, and that's that most companies are aware that their systems are not as they should be, and then they choose to put a new layer on top that will somehow automatically fix everything. And it's, oh, this application is not working how it should, right? And so on and so forth. Oh, why don't we put it in a container image, right? That will solve the problem, whatever it was. Oh no, that doesn't work either. Oh, let's put it in Kubernetes. Oh no, let's put it in cloud. Oh no, now let's put AI in front of it. And nobody's really addressing the issues, right? Everybody's just building layers on top of things that don't work. This is my very pessimistic view of everything, just to be clear.
Pete 00:31:02.534 You know what that reminds me of is the microservices industrial complex of, kind of, maybe 2010 to 2015. And I'm like a microservices hater. And, um, I don't know what your guys' position is on this stuff, but back when I was running infrastructure teams at Twitter, microservices were very in vogue. What I realized was happening was, in order to get promoted at a big tech company, like a Netflix, a Google, you know, whatever, or Twitter, you have to architect a large-scale distributed system. It's literally written in the promo guidelines. Like, you have to demonstrate system architecture competence. So what do you do? You build a bunch of services. Do they solve the problem for the business? Yes. But do they solve it in the simplest and most maintainable way? Probably not. And then you learn a lot from that process. A venture capitalist shows up, says, hey, you built this complex distributed system, do you want to go make that as a company? Sure, let's do that. Distributed tracing is a great example, right? So now you have these vendors that need to induce demand for their distributed tracing or observability product. And so they get everybody to adopt microservices, so that then they need distributed tracing. And then, you know, uh, the cycle continues, right? And so it was like that for a while. I think it's calmed down a little bit since the peak in, like, 2016 or something. But company tech blogs were just content marketing at the end of the day, and not good technical advice.
Viktor 00:32:34.411 I think it's actually worse than what you described, right? I'm parking the microservices-versus-monolith discussion for some later date, but in those scenarios you're describing, my variation is that we transition from monolith to microservices, but those don't end up being microservices. They end up being some kind of distributed monolith, where you still need to deploy everything at once, and so on. You don't get the benefit of what you had before, and you don't get the benefit of what you were supposed to be doing. You get no benefit from either where you were or where you were going, and then you start crying. Because most companies that claim they're doing microservices are not doing microservices, independently of whether that's a good idea or not. They're just not doing it.
Pete 00:33:28.081 Yeah. Yeah. I mean, I think the platonic ideal of microservices is good, right? In a world where you're unlikely to have cross-cutting changes between multiple services, great, deploy each service independently. I just think in practice it tends to look more like macroservices. You'll get the billing service that does all the billing stuff in, like, a billing monolith, and then you'll have the web tier that does, I don't know, whatever the web tier does, right?
Viktor 00:33:50.851 Or, you know, people who never learned about, for example, backwards compatibility of APIs, and then you change something somewhere and you say, now the whole system needs to be rewritten, because I did this thing over there. Come on.
Pete 00:34:03.886 Yeah, it's like, I want to change a primary key, or add a new type of primary key. We actually had this at Twitter. It was when we launched a voice notes product, or something like that. Suddenly, instead of reviewing tweets, you had to review voice notes, which had a different primary key space. So then we had to push out this massive refactor: all these dozens of services that expected tweet IDs now had to take these enums, branching on, is this a tweet or is this a voice note? And it was like, man, if this were just one monolith, we could deploy this all today and be done. But instead we had to do this multiphase deployment and negotiate with all the different teams about when we were gonna do it. It was a whole thing, kind of silly.
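The refactor Pete describes can be sketched as a tagged union that every downstream service has to branch on. This is a hedged reconstruction for illustration only; the names (`TweetId`, `VoiceNoteId`, `review_queue_key`) are hypothetical, not Twitter's actual types.

```python
from dataclasses import dataclass
from typing import Union

# Services that once accepted a bare tweet ID now accept a tagged union
# and must branch on which kind of ID they were handed.

@dataclass(frozen=True)
class TweetId:
    value: int

@dataclass(frozen=True)
class VoiceNoteId:
    value: int

ReviewItemId = Union[TweetId, VoiceNoteId]

def review_queue_key(item: ReviewItemId) -> str:
    # Every service in the review path needs a branch like this one,
    # which is what turned one change into a multiphase rollout.
    if isinstance(item, TweetId):
        return f"tweet:{item.value}"
    if isinstance(item, VoiceNoteId):
        return f"voice_note:{item.value}"
    raise TypeError(f"unknown id type: {item!r}")
```

In a monolith, this branch changes in one deploy; spread across dozens of services, each one must be updated and released in a compatible order.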
Darin 00:34:52.705 Going back to what you were saying earlier about how we're getting all these multi-agent things: the, uh, inverted-curve meme kicked into my head, where we've got multi-agent right at the very top, and on either end, all you need is Excel. That's what it feels like.
Viktor 00:35:09.799 I have a theory that most of the ideas we get, like multi-agent now and many, many others before it, are actually good ideas that make perfect sense. The real problem is companies being incapable of understanding what it's really all about and doing the right thing, and so on. Multi-agent makes a lot of sense.
Pete 00:35:31.699 Oh yeah.
Viktor 00:35:32.314 But nobody needs it.
Pete 00:35:33.589 That's right, that's right. And when they do it, it's usually encapsulated and looks a lot more like normal software engineering, as opposed to some brand-new thing, right? I think it's very similar to microservices. What people got wrong with microservices was that the day you introduce a new service should be a very sad day, but for most engineers at large tech companies, it's a very happy day. And similarly, if you're gonna move to multi-agent orchestration, it should be, man, we're only doing this because we have to, there's no other way to solve the problem, and we're willing to take on the complexity cost of this thing.
Viktor 00:36:12.905 You know, you reach the point where you say, this does not work anymore, we need more.
Pete 00:36:19.286 Someone needs to figure out how to compensate engineers and provide career paths for them so that they don't feel this urge, or this need, to build complex systems. Because the unfortunate reality is that at a lot of companies, you hit a certain ceiling in your career until you do that. And I think it's really a bummer that that's happening.
Viktor 00:36:41.415 That's definitely true. But there's also a different aspect, and I'm guilty of it: some of us, not to say everybody, are in this industry because we simply like playing with toys. I'm guilty of it. I like playing with new toys. That's kind of, uh,
Pete 00:36:59.710 Yeah.
Viktor 00:37:00.220 what I like doing.
Pete 00:37:01.420 And I think as long as you're explicit about it, right? You want to give space for engineers to do creative work, to take risks, and to do innovative stuff. React wouldn't have happened if Facebook had just said, hey, use jQuery, this React thing is overkill for this app. There are definitely gray areas to this, and there's an art to figuring out, hey, we want to be experimental and risk-on and do some fun and exciting research in this area, but for that area we just gotta hit the ship date and do the dumbest thing possible, because we're resource-constrained or whatever. You know what I mean? It's definitely nuanced.
Viktor 00:37:42.394 I feel that very often the real problem there is whether engineers have time to play. I'm going to use multi-agent as an example, but it applies to anything, right? Hey, I wanna see what this is, does this make sense, I wanna play with it for a week or two or whatever the period is, and then we'll figure out whether this should be used or not, and so on. But I feel that's not how things work. I feel that more often than not, it's a mandate from above: starting today, we are doing multi-agent,
Pete 00:38:18.693 Right.
Viktor 00:38:19.458 and then that command trickles down until it reaches the level where nobody even knows anymore why that was decided, and then we all do it.
Pete 00:38:30.268 Yeah. I got in trouble for that one, by the way. For something like that.
Viktor 00:38:35.788 Which direction were you coming from?
Pete 00:38:37.349 I was the pointy-haired boss going down, actually, earlier this year. I sent out this memo in April, and the point of it was: guys, Claude Code and Sonnet 3.7 came out in February, and they're really good; you should check them out if you haven't tried these AI coding tools. I don't care if you use them, I just think you should try them, because you want to have all the information, and we're not going to mandate what tools people use. But it was interpreted as, oh my God, somebody asked me if the amount of AI use was going to be on the performance review, and I was like, no, why would I possibly do that? But companies have actually done that, so it was reasonable feedback, in hindsight. I've heard horror stories of CEOs being very dictatorial, like, hey, we gotta take advantage and do technology X. I think there's a difference between "we must use it" and "we should explore it." You know what I mean?
Viktor 00:39:36.964 Exactly.
Darin 00:39:38.033 Let me ask the most important question associated with that whole line of thinking. Pete, do you play golf?
Pete 00:39:43.973 I don't.
Darin 00:39:44.783 Well, that's a good thing, because your people will probably be safe then. You're not gonna be making deals out on the course,
Pete 00:39:51.028 Oh yeah,
Darin 00:39:52.078 which is when usually those kinds of things happen.
Viktor 00:39:55.183 Oh, come on, Darin. There are other activities that work for sales. Maybe our guest is into sailing.
Pete 00:40:03.298 my wife's into sailing, but I'm, I'm not into sailing
Darin 00:40:06.543 So I'm glad you called yourself out, Pete, because I was gonna ask that question: okay, what pointy-haired-boss thing have you done that really went sideways? Having somebody come back with, are we gonna be measured on this? That had to have almost shook you to the core, I would think.
Pete 00:40:22.083 People were pissed, yeah. And I ended up having to clarify a lot of it. It was one of those things where nobody really wants to be told what to do or how to do it. My point was basically to say, hey guys, we're gonna pay for your AI tools, please take advantage of this and try it. But I sent it out as an email on a Sunday night, and to me it doesn't matter when an email is sent, but I heard some feedback from the team like, you can't send a memo like that, it makes it seem super serious, like you're gonna fire people that don't use AI tools. And I was like, why would I do that? I think you just measure output; people can get to their output however they want to. We don't dictate VS Code versus Cursor versus Emacs or anything like that. My kind, the Emacs kind, are, I know, a rare species at this point.
Viktor 00:41:17.319 I never understood that need in companies, and I was in a few of those, kind of like, here, we all use Eclipse, or whatever.
Pete 00:41:24.879 Yeah.
Darin 00:41:26.079 So, two things I want to call out. Anytime I see an email from a CEO on a Sunday night, that automatically rings bad, unless you always send something on a Sunday night. Do you?
Pete 00:41:39.189 I don't, I don't,
Darin 00:41:39.969 Right. So that's
Pete 00:41:41.094 I, right. And it's a red flag to a lot of other people. And I did not know that. So, uh, lesson learned on that one.
Darin 00:41:49.899 Okay. Secondly, we had an episode recently that, at the time of this recording, actually hasn't even come out, but Viktor sort of brought this point home: look, we don't care if you use AI, but the bar has moved, and people are expected to be X amount more efficient now. Some people may or may not be. Yes, you've said no, we're not gonna be doing performance reviews on it, and I'm not trying to put words in your mouth. Please, anybody from Dagster that listens to this, don't read anything into this; I am backing Pete into a corner, hopefully politely. But high level, let's take Dagster out of the middle of it, and you are just a general person: if people are using some sort of tooling within a company, and there are other people that are not using that tooling, and the people that use the tooling are two x, whatever percentage, better than the people that don't, does that matter to you as a CEO, or as leadership within a company? Again, we're not talking Dagster, just high level.
Pete 00:42:54.793 Well, man, this is a good question. So, people have certain levels, right? There's a difference between what's expected of a new grad, what's expected of a senior SWE, and what's expected of staff-plus. And it's not just a straight-line productivity measurement. There's a lot of stuff, like, the new grad is gonna crank out a lot of code, but a lot of it might spend some time in review because they have to learn the ropes a little bit. The senior SWE is writing some of the more complex parts of the code base, and the staff SWE is more like leading others, as kind of a player-coach. So first of all, productivity is not everything.
Viktor 00:43:39.514 I mean, how do you even measure the productivity of a developer? Now, if you say lines of code or anything in that direction, I'm out.
Pete 00:43:49.135 No, no, no. Well, in my opinion, the one kind of universal way to measure an employee's performance is forecasting ability, you know, expectation management. If you are mid-career or late-career and you're working in an area that is fairly well understood, you should be able to say, hey, this thing's gonna be done by this date, and most of the time we should be able to forecast that. Or you should be able to say, listen, we don't know when this thing is gonna be ready; in two weeks we'll be able to give you a sense for the range in which it'll be completed. And that's great. I've been an engineer for a long time; I know there's a ton of unknowns sometimes when you're diving into a legacy system. My management approach is not to go in and say, listen, we need this thing done by this date. It's to start a discussion with the team: listen, when do you think would be practical for us to ship this capability? You usually get a range, and they say, listen, in two or three weeks we'll dial that in, and then we commit to the date a couple weeks later, right? Then at some point there's an external milestone, like a launch event or a big customer that's expecting something, and then it's like, okay, we need that forecast to be accurate. If the forecasts keep being inaccurate, it becomes very difficult to run the business. So at that point it becomes a question of performance, I think. But that's really where I start. And the expectations change: we expect new grads to be wildly wrong in their estimates and staff software engineers to be pretty spot-on.
Viktor 00:45:37.069 How do you then avoid the result of all that being that everybody always overestimates? Because you cannot be wrong if you overestimate.
Pete 00:45:46.782 Well, you know, what is it, Goodhart's law, where every measure becomes a target? You don't tie that directly to compensation; that's hopefully an obvious thing. But at a certain point, at least in tech startups, everybody's got equity, and at the end of the day, hopefully everybody's got the maturity to understand that we want accurate forecasts, but we also wanna be moving as quickly as possible, because there is a market out there that's gonna move on without us if we don't keep up. So ideally the employees can share in the upside of hitting that deadline, right? You hit the deadline, you close the big customer, that big customer leads to an increased valuation at the next funding round, and that's more money in the employees' pockets. I think it's a lot harder to feel that at larger tech companies, but in smaller ones, most people do feel it.
Darin 00:46:45.320 I can't believe we've talked close to an hour already and we haven't really started talking about Dagster yet. I wanna be respectful of your time. We've talked about Dagster in and around things, but what is Dagster? What kind of people need it? I know it's porta-potties and rocket ships, but what if I'm doing something else?
Pete 00:47:04.696 well, a lot of companies need data,
Viktor 00:47:06.903 A lot, isn't it? Is there a company that doesn't need data, doesn't have data? My description of software engineering is basically: it's all data in, data out. That's all we do.
Pete 00:47:19.548 Yeah, you know, and a lot of businesses are like that too, right? Data is very important to sales processes, very important to marketing, and to understanding the health of your distributed system. These are all data problems, and they're all backed by data pipelines. So I think any company that's greater than, like, five or ten people probably has data pipelines. Dagster is an open source technology to build and operate data pipelines, and we have a commercial product on top of that.
Darin 00:47:47.891 why would I want to move from the open source project to the product?
Pete 00:47:52.841 We try to make a really good data orchestration tool. It's similar to how, with Kubernetes, you write a bunch of YAML and it makes sure your service is up and running; it'll move pods around and run a reconciliation loop to make sure the state of the system is what you expect. We do a similar thing, but specialized to the data domain: we make sure the data is hitting its SLAs and flowing through the pipes. The Dagster orchestrator does all of that in open source. For our commercial product, there are kind of three reasons why people buy it. The first is that it's hosted and we manage the infrastructure, and that is valuable. The second is that we provide a lot of the multiplayer and enterprise features you would expect, like the Okta integration. You could put open source behind an identity-aware proxy if you wanted to, but we provide a lot of the role-based access control and audit trails and all that stuff. And the third reason is that there are a lot of adjacencies: once you've onboarded onto Dagster open source, your next question is, okay, how do I monitor the health of my pipeline overall? Even if my data's hitting its SLAs, how can I optimize my system beyond that? Normally you bring in a separate observability tool, a separate data quality tool, a separate data discovery tool; we bundle a lot of that within our commercial product. So if you want to choose a best-of-breed technology for that, great, we integrate with all of those, but if you wanna defer that purchase for a year or two, we provide a lot of it out of the box with Dagster Plus.
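The reconciliation idea Pete describes, compare a desired state (data no older than its SLA) against observed state and schedule work to close the gap, can be sketched in a few lines. This is an illustrative analogy only; the names (`Asset`, `sla_seconds`, `reconcile`) are made up and are not Dagster's actual API.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    sla_seconds: float        # desired: data should be at most this old
    last_materialized: float  # observed: unix timestamp of last refresh

def reconcile(assets, now):
    """Return the assets whose data is older than its SLA and needs a run.

    An orchestrator runs a loop like this continuously, the same way a
    Kubernetes controller diffs desired vs. actual pod state.
    """
    return [a for a in assets if now - a.last_materialized > a.sla_seconds]

# Example: "orders" was refreshed 100s ago (fresh), "revenue" 2 hours ago (stale).
now = 1_000_000.0
assets = [
    Asset("orders", sla_seconds=3600, last_materialized=now - 100),
    Asset("revenue", sla_seconds=3600, last_materialized=now - 7200),
]
stale = reconcile(assets, now)
```

Here `stale` contains only the `revenue` asset, which the orchestrator would then schedule for materialization.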
Darin 00:49:25.096 Dagster Plus. Well, Dagster can be found at dagster.io. That's D-A-G-S-T-E-R dot io. How did that name come about? Because I hear DAG and I'm automatically thinking DAG, right, whatever DAG means. Directed acy... what is it?
Pete 00:49:42.726 directed a cyclic graph.
Darin 00:49:44.116 accl. Thank you. is that part of this?
Pete 00:49:47.551 That is the core data structure. Underlying it is we have a directed as cyclic graph of data assets, that resemble, basically of complex pipeline.
Darin 00:49:56.506 So it's not dragster, it's dragster.
Pete 00:49:59.851 you are correct, but many people type dragster.
Darin 00:50:03.886 I don't wanna take a look@dragster.io. I don't even want to know what's there. It may be safe, it may not be. I'll leave that up to everybody else. All of Pete's information will be down in the episode description. Pete, thanks for being on with us.
Pete 00:50:14.911 Thanks guys.