Mark 00:00:00.000 I think if you haven't built a system from scratch all the way through before and then had to worry about the security, the resiliency, the observability, and dealing with the failures and all of that pain that it entails, you think it's an easy task when it's not in any way.
Darin 00:01:17.022 Viktor. How many years have you dealt with microservices do you think?
Viktor 00:01:22.972 I, I dunno. I'm too old to count anymore. I dunno. 20, 50, I dunno. A hundred. I dunno, man.
Darin 00:01:31.602 You're getting to my numbers now. No, it's not a hundred. At least I hope not. Could you argue that the first mainframe was a microservice? I think you could.
Viktor 00:01:39.417 I mean, the mainframe, I haven't worked much with mainframes, but from my perspective, my personal experience, the first microservices was Linux. I know that nobody thinks of it as microservices, but I do, kind of, yeah. Small applications, piped to each other, combined to accomplish something. Yeah.
Darin 00:01:59.307 On today's show, we have Mark Fussell on with us from Diagrid, one of the co-founders of Dapr. Mark, how are you doing today?
Mark 00:02:07.897 I am doing fantastic. It's, uh, incredible to be here today.
Darin 00:02:11.008 What do you think? Do you think the original mainframe was just a big microservice?
Mark 00:02:15.373 Well, it had networking. So yeah, I mean, if you network things together and join them together, then they have microservices between them. I mean, microservices stemmed from a business need. It was a desire for businesses to ship things a little faster than before. So rather than you waiting for this great big compile thing to work together, you split things apart, so team A could do things independent of team B. So, yeah, if you split 'em apart, maybe a mainframe is a microservice split apart on different terminals.
Darin 00:02:44.398 So what you're telling me is that, uh, Dapr was greatly influenced by the mainframe. That's, that's what I'm hearing.
Mark 00:02:50.968 Yes, it was. Yes, there we were on a terminal one day thinking, wow, how do we turn this thing into a microservice? Well, yes and no. I mean, in many ways it was influenced by just the industry as a whole shifting towards distributed applications and, you know, hosting them on multiple machines as well. So, you know, the trend inside the hosting environments, along with the need to ship fast, was the driving factor for Dapr.
Darin 00:03:15.652 So for anybody that's not familiar, familiar, easy for me to say, with Dapr, and that's D-A-P-R. What is Dapr? Why did it come into existence? Why did it need to exist?
Mark 00:03:28.931 What was happening is that we saw all these teams, and particularly not only in Microsoft, which is where we had created the thing and started the initial incubation of the project, but also outside of it, when we dealt with hundreds of enterprises, building the same old technology again and again. And that really was: how do I discover other services and talk between them? How do I send messages between them for these event-driven applications? How is it that I coordinate across them all? And particularly, how do I plug in different types of, say, storage or backing secret stores and things like this? So it came from this need of looking at distributed systems patterns, common patterns that you see, that people build time and time again, and saying, hey, stop, stop reinventing the pattern. That's a bad idea. Why don't we take those patterns and codify them for you and give you a great set of APIs that you can put into your applications and build, and then deploy it onto any one of these platforms. Of course, in the early days there used to be lots of distributed systems platforms. You know, there was Docker Compose and Mesosphere. Of course, mostly it's Kubernetes now. So it was born of the idea of: let's help you stop reinventing the wheel, reinventing the pattern, put those into a set of APIs, and give it to you developers so you can build applications faster on these distributed systems.
Darin 00:04:45.138 You answered that with reinventing the wheel a few times. You, you kept saying that, but didn't you just reinvent the wheel by doing this?
Mark 00:04:52.383 Well, uh, we, we've reinvented the wheel, but it's stopping everyone from reinventing their own wheel. So rather than everyone having their own wheel, and I'm building my own wheel from scratch, you go buy a wheel. So this is buy a wheel, not build a wheel.
Darin 00:05:05.963 Uh, okay.
Mark 00:05:08.028 Yes. You know? And so that's what it came down to. And so what we saw is, like, I mean, let's just take a common example. Because you've got a distributed application and you've got team A who's building, let's just say, an order processing service and another team B that's doing their payment service, and they're working together and you wanna build and deploy these into different environments, you gotta be able to find and discover those services. And it gets very painful when you're a developer there saying, oh, I have to open HTTP ports, I have to open up ports and find networking and configure all that stuff. Why can't I just say there's a thing over there called application B and I can just call a method on it? So all of that abstraction of discovery, calling another service, retries, security, and all of that was basically taken care of for you, for one of its APIs, as an example. And so that's what we call service invocation and service discovery.
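A minimal sketch of what that looks like from the application's side, calling another service by its app ID through the local Dapr sidecar's HTTP API; the app ID "payments", the method name "charge", and the payload are invented here for illustration:

```python
# Minimal sketch: call another Dapr-enabled service by app ID via the sidecar,
# instead of hard-coding hosts and ports. "payments" and "charge" are made up
# for illustration; /v1.0/invoke is Dapr's documented HTTP invocation endpoint.
import requests

DAPR_PORT = 3500  # default sidecar HTTP port

resp = requests.post(
    f"http://localhost:{DAPR_PORT}/v1.0/invoke/payments/method/charge",
    json={"orderId": "1234", "amount": 42.50},
)
resp.raise_for_status()
print(resp.json())
```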
Darin 00:06:01.000 I think one thing that we have to come up with, at least in tech, we have to come up with another example than order and payments. That's just,
Mark 00:06:09.835 Boring. Yeah.
Darin 00:06:10.690 else. I don't know what it is.
Viktor 00:06:12.865 Shopping
Darin 00:06:13.300 Shopping.
Mark 00:06:14.710 Yeah.
Darin 00:06:15.790 Okay, so you reinvented the wheel, you agree with that? Which is fine because, okay, we're joking, not joking. You did reinvent it, but so that people didn't have to build it from scratch, because usually in places they already had a message bus. They already had other orchestration tools, meshes, whatever. But they were having to wire it up as a bespoke solution for themselves.
Mark 00:06:40.773 It was that, yeah. But, but Dapr took this thing to another level as well, because it wasn't just providing the APIs for how you do that communication, but it also provided a connection and an abstraction over the underlying infrastructure that they talk to for that API. So let's take a really good example. Say you are using the Dapr pub/sub API, where you want to publish messages and receive messages. Super common scenario. So first off, that very common scenario of I just want to simply have these applications receive messages on some event that happens, and I want these other applications to publish them. You typically go and find some sort of message broker. So you go out there and you say, oh, we're gonna use Kafka, Kafka's the thing we're gonna use, and someone asks, well, why Kafka? And that's just because I heard about it last week. And so they start building against it and start using it all. And before you know it, you've sort of tightly bound... well, first you've had to build the abstraction called pub/sub on top of Kafka 'cause it doesn't exist, so you have to build all of that, and then you've tightly bound Kafka into all your code. And then six months down the line, you find that the platform you wanna move your code to or your choice of design changes. And so now you have to rip out all this code around Kafka and put in your other message broker. So Dapr came up with this idea of a component model where you can simply say, keep your API, don't change your code, but if you wanna swap the underlying, um, message broker from Kafka to, say, AWS SNS or another cloud provider's message broker, or RabbitMQ or MQTT, you can just switch that around and your code doesn't change. So this gives you cloud portability and, importantly, a local development experience that I can then take to a cloud. So that combination of the two, don't reinvent the same API and we'll connect you to your underlying infrastructure without you changing your API, became very powerful.
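A rough sketch of that pub/sub API from the publisher's side; the component name "orderpubsub" and topic "orders" are invented, and which broker actually backs that component is decided by its component definition rather than by this code:

```python
# Sketch: publish an event through Dapr's pub/sub HTTP API. The component name
# "orderpubsub" and topic "orders" are examples. Whether that component is
# backed by Redis, Kafka, RabbitMQ, or a cloud broker is set in the component
# definition deployed next to the app; this code stays the same either way.
import requests

resp = requests.post(
    "http://localhost:3500/v1.0/publish/orderpubsub/orders",
    json={"orderId": "1234", "status": "created"},
)
resp.raise_for_status()
```

Swapping Kafka for, say, RabbitMQ then means changing the component definition's type (roughly, pubsub.kafka to pubsub.rabbitmq) and its connection metadata; the publish call above does not change.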
Viktor 00:08:34.261 How often do companies even think about those things? You know, kind of, oh, Kafka. You know, in my head it's usually kind of, you realize that once you decide to switch from Kafka, not before you've started implementing Kafka, right?
Mark 00:08:51.166 You'd be surprised how often it happens. So let me give you an example. We worked with FICO, and they started using Kafka, and they got to some scale limit and they decided they didn't like it. And so they started to re-engineer everything that they were building on top of Pulsar, and in doing so, they wanted to provide a choice between the two anyway. So we saw that happen with FICO, who's, you know, a big payment provider. We often see people as well who might build against, say, something like Kafka, and then they're told that they're gonna move to AWS and deploy everything there. And though, uh, AWS has a managed Kafka, they often start to use SNS as an alternative around these things. Or, actually, more often it's the other way around: they start using SNS on AWS and then they want to switch to Kafka, the managed Kafka service, you know, MSK. And so you get those switches, so it happens a lot. And also when they want to kinda switch between platforms, so we see it quite regularly actually. And the code you have to rip all out, you know, is a huge effort for the team. So it falls into a platform engineering play more than anything. Dapr becomes, or is, I should say, a very clean contract between the application platform team and the developers above it all, either with an API or because you can plug in these different components underneath it all.
Darin 00:10:04.010 So I'm thinking about Dapr. You said it came out of Microsoft. How did they let it out?
Mark 00:10:12.960 Well, well, let's, let's see. In many ways we let it out. I mean, we were, uh...
Viktor 00:10:18.575 Was it kind of like you put it on a USB stick and then you went out to resign from Microsoft? No?
Mark 00:10:24.845 No, no, no. From the very, very beginning, we were lucky to be in a team called the Incubations team, uh, which was part of Azure. It was actually in Mark Russinovich's org, who's, uh, the CTO of Azure. And, uh, we said right from the very beginning, we wanted to release it as an open source project. In fact, it was very conscious right from the beginning that it wasn't even in a Microsoft repo, it was in its own repo. So the project had very clear goals to begin with: implement common APIs, make sure you have great abstractions of infrastructure, make sure it's an open source project. And so after it got released, within a year we actually donated it into the Cloud Native Computing Foundation. They accepted it straight in as an incubation project because it took off so well, and it's lived there ever since, and it's been a great home and it's matured very, very well inside the CNCF. And the CNCF has helped it grow into a huge community today with thousands of developers and thousands of companies using it. So, you know, we track over 30 to 40,000 companies who are actively engaged with using Dapr today, and we have a big community of 9,000 developers on Discord that actively engage and build and contribute to the project. So, you know, it's done well inside the CNCF.
Darin 00:11:36.787 Where's it landed in its cycle? I mean, it came in, obviously, as incubating. Where's it at right now?
Mark 00:11:43.507 Well, last year in October, it became a graduated project in the CNCF. And we actually were one of the first projects, I should say, to go through the new process that the CNCF had instigated, which was quite rigorous, and they interviewed a lot of end users and a lot of community members and made sure that the project is very healthy. So, yeah, last October it became a graduated project, which is sort of the highest level you can get to inside the CNCF. I mean, there's about 25 other projects there today.
Darin 00:12:10.508 I think that's the key. You graduated. Yeah. So what's the postdoc for that? I mean, now that you've graduated? It's, uh.
Mark 00:12:19.793 Well, that's to continue to grow the community and make sure that we add features and build things out. One of the biggest directions we've taken in the last, um, year or so is to implement workflow, a workflow engine, in Dapr, which is sort of a huge ask and it's been a big endeavor, but it's finally there. And then building on top of workflow, we're building a lot of agentic applications on top of that, and agent frameworks and that. I mean, the way I look at agentic applications is that they're just distributed systems with smarts and LLMs. And so now we have a very, a code-first workflow engine. And in growing the community, you know, we're seeing a big rise in Dapr being used for building agentic systems as well.
Viktor 00:13:04.806 So this is, this is interesting, because until this very moment I haven't been asking questions, because I know that I dunno what to ask. And now, now you're saying, uh, agentic and workflows. That's completely new to me. Kind of, what are the workflows?
Mark 00:13:20.211 Oh, so Dapr has a code-first workflow engine. So if you're familiar with a workflow engine, you know, it allows you to run durable state and durable execution. So, uh, yes, durable execution. You know, I run and write a piece of code, I write, you know, business logic. Take a state machine. It's a state machine with five steps, 10 steps, a hundred steps, and I wanna make sure that if it fails at any time, it recovers its state and continues where it left off. You know, I don't want to be going back to my order processing system and re-clicking my order again. So workflow engines today play a crucial role in business logic all the time. The key thing about Dapr's one is that it's developer friendly. So it's in, you know, five languages. Uh, you can write it in code, and that allows you to write, effectively, durable execution. And based upon that, you can build, of course, agentic systems.
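A very rough sketch of what a durable workflow looks like with Dapr's Python workflow extension (the dapr-ext-workflow package); the activity names and order payload are invented, and the exact decorator and method names may differ between SDK versions, so treat it as illustrative rather than definitive:

```python
# Illustrative sketch of durable execution with Dapr's Python workflow
# extension (dapr-ext-workflow). Activities and payload are invented, and
# exact API names may vary by SDK version. The shape to notice: the workflow
# is a generator, and each yield is a checkpoint the engine can resume from
# after a crash instead of re-running completed steps.
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity
def reserve_inventory(ctx, order):
    # Call an inventory service here; return whether the reservation succeeded.
    return True

@wfr.activity
def charge_payment(ctx, order):
    # Call a payment service here.
    return {"charged": True, "orderId": order["id"]}

@wfr.workflow
def order_workflow(ctx: wf.DaprWorkflowContext, order):
    reserved = yield ctx.call_activity(reserve_inventory, input=order)
    if not reserved:
        return "failed"
    receipt = yield ctx.call_activity(charge_payment, input=order)
    return receipt
```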
Darin 00:14:13.185 I want to go back to the Microsoft days. I'm sorry, I keep bringing this up.
Mark 00:14:16.955 Okay.
Darin 00:14:18.080 How critical was Dapr to Azure? You said it was under Mark's organization. It just seems like, again, I guess, why let it out? Of course, the same thing could be said of Kubernetes, of letting it out of Google, but
Mark 00:14:33.635 Well, I mean, it was very much about, you know, if you let these things out and they have a life of their own, they become broader than just inside any organization. So I think, you know, any organization, if it wants to hold onto its technology, it can, but it never becomes as widely adopted as if it becomes community adopted. And Kubernetes is the perfect example. Yes, Kubernetes would never have taken off if Google had held onto it. By releasing it, you know, it became a growth thing inside the, uh, you know, the community as a whole. So that was the entire philosophy with Dapr. You know, Microsoft wanted to release a runtime that enabled you to host and run code across any systems, and they didn't need to hold onto it themselves.
Darin 00:15:15.914 But wasn't that what .NET was supposed to be before? Sorry.
Mark 00:15:19.889 Well, yes, but then if you look at .NET, it's only .NET. You know, it's kind of, it's only a .NET language. Uh, Dapr spans multiple languages. So we have Java, .NET, uh, Go, JavaScript, Python, and so it became a multi-language runtime, and that's another thing that made it key. Today we see lots of different organizations that have many different languages when they build things. In fact, increasingly, the amount of Python that comes into the code that goes alongside your Java code, we see that all the time. Take my Java application, stick Python in there, and Dapr helps bridge that as well.
Viktor 00:15:53.279 So how does that work? I'm curious, when you have SDKs in different languages, is it that you're writing everything in one language and then you have some kind of engine that, when you push it to the main branch, transforms that to all the other SDKs, or are you maintaining each SDK separately, in a way?
Mark 00:16:14.859 We're maintaining each SDK, but that's where the APIs come in. Underneath the covers, you know, Dapr exposes HTTP and gRPC as its core APIs. So if we go back to pub/sub, you know, it has an API for pub/sub. There's an API for the service discovery and service calling. There's an API to get secrets, and that's just an HTTP API or a gRPC API. So you can do that, and in languages that are very HTTP friendly, like JavaScript, you know, that layer of the JavaScript SDK is very, very thin. But when you get to something like Java as a language, which is, you know, very type heavy and things like this, it's about giving a great experience. And so, you know, in the Java world, Dapr integrates really well with Spring Boot and the Spring framework, and it takes hold of that, and there's a Java SDK that translates down to these underlying APIs. And we did the same with .NET. You know, inside .NET there's ASP.NET Core, the great framework and runtime for hosting it. It integrates well with the .NET SDK, and that in turn fits in with the Dapr APIs. And so, yes, they're a little handcrafted, but underneath the covers it's just HTTP that you're using.
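To make the "thin layer over HTTP" point concrete, here's a small sketch of the same state-store write done against the raw sidecar API and through the Python SDK; "statestore" is the default component name from a local install, and the DaprClient call is shown as I recall it, so it's an assumption worth checking against the SDK docs:

```python
# Sketch: one state-store write, two ways. "statestore" is the default
# component name a local `dapr init` sets up; the DaprClient method shown is
# as I recall it from the Python SDK, so verify against the SDK docs.
import json
import requests
from dapr.clients import DaprClient

# 1) Raw HTTP against the sidecar, which is what every SDK boils down to.
requests.post(
    "http://localhost:3500/v1.0/state/statestore",
    json=[{"key": "order-1234", "value": {"status": "created"}}],
).raise_for_status()

# 2) The same write through the SDK, which adds types and ergonomics on top.
with DaprClient() as client:
    client.save_state(store_name="statestore",
                      key="order-1234",
                      value=json.dumps({"status": "created"}))
```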
Darin 00:17:23.768 Well, it's always a TCP call of some sort or another,
Mark 00:17:27.308 Yeah. It works. A TCP call.
Darin 00:17:28.778 It's all TCP. It all boils down, and gRPC and HTTP still end up being TCP.
Mark 00:17:33.158 Yes. Yeah. Yeah. Okay. Ultimately, yeah.
Darin 00:17:35.288 I, I won't poke on the Microsoft thing anymore. At least I'll try not to. It was just, but you gotta give it to me, right? It's like, okay, coming outta Microsoft, it seems like let's .NET all the things, and then all of a sudden we have something different, which is cool and great. I don't have a problem with it. In fact, I'm not a .NET developer. I am proud to say I have never programmed with a Microsoft language in my career.
Viktor 00:17:59.783 I did, uh, I worked with .NET, and then in one of my jobs I was forced to work in Java, and that was suffering
Mark 00:18:09.103 Yeah.
Viktor 00:18:10.073 back in the day. Back in the day. Just, just to be clear, back in the day, things changed. Things changed. Uh, I think it's the other way around afterwards, but yeah, it, it was good.
Darin 00:18:22.118 Okay, Mark. Magic hand wave. Dapr is being introduced into an organization. Somebody went to a lunch and learn, somebody went to a meetup, saw Dapr, and was like, hey, that would be a great use case for X within our company. We're getting ready to refactor this thing, or we're getting ready to greenfield this thing. What are those things? What's a good first, hello world type thing that you see Dapr used for a lot?
Mark 00:18:44.244 Yeah. So the first thing that we see, there's a great case study on this actually. If you go to the CNCF Dapr case studies, there's a great case study there from DataGalaxy. And if we take them as an example, they had a large monolith application. It was all bound together by compiled code, and they really wanted to be able to separate out a separate process, a separate thing, that did a whole bunch of translation of documentation that they had. And so what they did is they wanted to effectively modernize their monolith, as it were, by building a separate piece of code rather than compiling it all into the same thing again. They had a separate team who could build this translation app, which received messages. So all the actual documents that got received were put into a state store, and then a trigger message was sent to this other translation process. It loaded up the documents from the state store, did all the translation, wrote them all back, and then sent a message back to the original, let's say, monolithic application and triggered it all and said it's all done. And so now they can separate this whole new piece of business logic of translation and basically analyzing these documents, and sort of augment their existing application, using messaging as a way of coordinating when it was done. Yes. So they basically integrated and started to build an event-driven application that was attached to their existing application. So we see pub/sub a lot become sort of the additional add-on or the way you start to break your application apart, whether you're just adding on, like DataGalaxy did to theirs, or whether you are actually taking the original application and breaking it apart. So that's kind of one very clear case. Another one that we run into a lot is, um, I think workflow, I would say, going back to that. And that is, a lot of people have some handcrafted, ugly piece of code with lots and lots of if-else statements and all sorts of retry logic and things that they've sort of tried to get together over the years, and it's just so hard for them to understand what this code is doing. And so we've seen a lot of people refactor their original code into the very clean Dapr workflow engine that gives you this reliable, durable execution of state machines. And so they can segregate it into these different activities and it becomes very clean, reduced code, very clear to understand, and also very reliable code. So those are kind of two very clear scenarios: using workflow to improve the reliability, make it more readable, and reduce the complexity of your code, make it more manageable; and pub/sub for communication and, yeah, modernizing existing applications or adding new business processes to things. So hopefully that gives you some ideas around this. But there are many like this. We can go into secrets as well. We see the secrets API get used a lot, where people have had to go and pull some secrets from some secret store and manage it all, and they have a little process that does that and they share this process around, and instead they could use a secrets API to, say, pull a secret from HashiCorp Vault or, yeah, Azure Key Vault or one of these ones. So that's another one as well.
Viktor 00:21:59.823 How much does that kind of overlap with some Kubernetes features? Like, for example, you just talked about secrets, right? If I did not know about Dapr, I would say, hey, that's why we have External Secrets Operator, right? Uh, you write seven lines of YAML and your secret is there, whatever it is, it's mounted. Or, like, communication between applications, right? We have Kubernetes Secrets, uh, sorry, Kubernetes Services. So how much is there a conflict of sorts? Not a conflict, but overlap.
Mark 00:22:33.468 Well, Kubernetes is an infrastructure play. It's not an application play. So, on that secrets management, yes, the secrets API in Dapr can store and load secrets from a secret store, and in terms of Kubernetes, it does, but most organizations don't wanna keep their secrets inside a Kubernetes secret store. They wanna keep them inside a cloud service like Azure Key Vault or AWS Secrets Manager or GCP's. So they keep all their secrets there. That's what they want to do, 'cause they can centrally maintain them and manage them all. And that's the beauty of Dapr. If you use the secrets API, one moment you could point to that Kubernetes secret store and pull from there, but if now you've changed and said, no, no, all my secrets have to be stored inside a secure vault that's a cloud service, it doesn't change your code. You just change that component, point it at another one, and off you go. So the answer is it takes advantage of those where possible.
Viktor 00:23:26.036 Okay, now I get it. So I misunderstood initially. Basically, if I have my secrets somewhere in Vault or whatever that is, unlike External Secrets Operator that pulls it into a Kubernetes Secret, you're pulling it directly into the code.
Mark 00:23:41.602 Yes. Into the application. So, I mean, yeah, that's the example. Yeah, yeah,
Viktor 00:23:45.127 You're basically skipping Kubernetes Secrets as an intermediary.
Mark 00:23:49.342 Well, yes. Well, I mean, you can use a Kubernetes Secret as a store, but you can use other stores as well. And the scenario here is I'm using a secret because, say, I wanna pull a secret to talk to a database. Yes. And I have to get that secret from somewhere. And so the secrets API in Dapr allows you to retrieve that in a consistent way, wherever you choose to keep your secrets.
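A small sketch of that retrieval through the sidecar's secrets API; the secret store component name "vault" and the key "db-password" are invented for illustration, and the exact shape of the JSON response can vary by store:

```python
# Sketch: fetch a database credential through Dapr's secrets API. The component
# name "vault" and the key "db-password" are examples; which backend the
# component points at (HashiCorp Vault, a cloud key vault, a Kubernetes Secret)
# is configuration, not code. Response shape can vary slightly by store.
import requests

resp = requests.get("http://localhost:3500/v1.0/secrets/vault/db-password")
resp.raise_for_status()
db_password = resp.json().get("db-password")
```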
Viktor 00:24:10.336 Yeah, as opposed to ESO pulling it as a Kubernetes Secret from Vault and then mounting it. So you're skipping the mount.
Mark 00:24:18.871 Yes, exactly. So that's, uh, you know, that's one of the benefits: adaptability of code. And you see a lot of this, you know, I can adapt my code, I can make my code cloud agnostic. And it's all about the application developer, about, you know, what it does. And, you know, going back to your example as well of how is it I call between two applications on Kubernetes. Well, Dapr has this amazing concept of identity. So when you spin up a piece of code in a process, it gives it identity. It gives it, you know, here's code for application one, and here's code for application two. And application one can call application two and its methods on it, because it knows how to identify them. And so that name, discovery, that identity gives you very powerful security at the application level, not only in terms of discovery and the ability to do mTLS calls between them all, but you can say application two can only be talked to by application one, and no one else can talk to me. So you get these bounded security contexts that make it really powerful at the application level. So, application-level identity.
Viktor 00:25:20.446 It's essentially a replacement for, uh, Istio, for example, right?
Mark 00:25:25.554 Yes, but Istio doesn't do anything at the application level. Yeah. So yes, you're right, Istio does everything at the network level for routing, but Istio has no concept of saying application A and application B. It just knows about network traffic. That's it, and how
Viktor 00:25:38.214 Yeah. I mean, it knows, um, it doesn't know application, it knows container,
Mark 00:25:43.339 Yes. It just knows container. Well, not really, because application identity is the key. You know, the actual piece of code has a named identity, and that identity is given a SPIFFE identity. And so you can actually say, it's a bit like, you know, you are Viktor and I'm Mark. It's not just that there's some person somewhere. I actually know how to call you and find you and send something to you: here is Viktor, here's my message. Not just, there's some container somewhere that I have to send something to. And so the process identity with a SPIFFE ID, and how I know how to discover and call something, is a very powerful concept that you can't get at the service mesh level.
Darin 00:26:24.870 Let's keep riding that one out for just a second, because if Viktor is offline for whatever reason and you send him a message, what happens? Do you just keep retrying until whenever, until you've decided that you're not gonna retry anymore, that he's unfriended you? I guess that's what I'm looking for.
Mark 00:26:42.405 Dapr does have, Dapr does have, and this is one of those great things, it does have these things called resiliency policies. So, you know, it has the ability on any of its APIs to say retry X number of times before you give up. Or it has things where, if you call something and it isn't there, you can do a circuit breaker, and you can just say, that's now failing, let's do a circuit breaker, let's stop, let me retry again after a certain period of time and sort of bring that back in. So the concepts of timeouts, retries, and circuit breakers are fundamental to every Dapr API. And this is the thing that every developer has to then start to build themselves. And so Dapr gives you these cross-cutting concerns. Not only does it give you the fundamentals of reliability, but it gives you the fundamentals of traceability. I can trace my calls all the way through my application and how it runs, whether it's a pub/sub message or a call to another service for service invocation or a call to a secret. You get end-to-end traceability, all sent out as OpenTelemetry messages that can be sent to any of your telemetry stores. So you can now see this beautiful graph of the application calls, you have resiliency built in, and, going back to this identity thing, you actually have security boundaries. So this is the thing that application developers always run into. It's easy to get something going, but it's hard to deal with failures. It's hard to deal with security. And invariably, the observability and telemetry are always left as an afterthought, and without that you are actually flying blind in production. So when you see this at the application level and get this for free, that's the power of Dapr.
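For contrast, a rough sketch of the kind of hand-rolled retry-and-backoff code that otherwise gets copied around every remote call when the platform doesn't supply resiliency; the URL, payload, and thresholds are invented, and the point is only that a Dapr resiliency policy declares the equivalent behavior once in configuration rather than in every service:

```python
# Sketch of the hand-rolled resiliency every service otherwise accumulates:
# a timeout, a retry loop with exponential backoff, and a crude give-up
# threshold. A Dapr resiliency policy declares the equivalent behavior once,
# outside the application code.
import time
import requests

def call_with_retries(url, payload, attempts=5, base_delay=0.5):
    for attempt in range(attempts):
        try:
            resp = requests.post(url, json=payload, timeout=2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == attempts - 1:
                raise  # retries exhausted; the caller decides what happens next
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```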
Darin 00:28:16.616 So you're all in on OpenTelemetry.
Mark 00:28:18.671 Oh, totally love it. Love it, love it. OpenTelemetry is fantastic, because now all the events happening in your application, like save this secret or send this message or run this workflow, all send out OpenTelemetry messages. I mean, we adopted OpenTelemetry way, way back, right at the very beginning. We also adopted CloudEvents as another standard. So we were all about adopting standards. CloudEvents was a very early adoption for how you shape the messages. OpenTelemetry we adopted even before it, you know, became a v1, and baked that deep into Dapr. SPIFFE for identity management for applications, particularly also making sure you integrate with other systems like cert-manager and other standards around this. We also adopted the standard trace headers, the W3C trace context, and how tracing happens across headers. So, very much about adopting standards and going with where application developers expect, you know, the standards to become important to them.
Darin 00:29:20.247 This sounds almost too good to be true.
Mark 00:29:23.572 Well, it is, it is true. It is. That's why it's such a highly used project. I mean, uh, we have a huge, great community around these things, and, you know, the continued adoption of Dapr... I think what we find is that when people discover Dapr, the challenge you get is that people like to write these things themselves. Yes, they're like, well, I can build that sort of thing, like Dapr. And the answer is, you can, but the breadth of it will take you a long time. By adopting Dapr, you actually not only reduce your code, but you actually reduce your time to getting into production. And we've shown this through numerous case studies: if you use Dapr, you could probably get your application into production, with a very mature piece of code, about 30 to 50% faster than you would do normally. And it's very adoptable from a brownfield perspective. It's not just about greenfield. Going back to that case study I talked about with DataGalaxy, you know, adopting on existing systems as well as new ones. So yes, it is too good to be true. Um, and that's why the project is, uh, an amazing one to become part of.
Darin 00:30:28.310 We are against CV-driven development around here. So that's, that's fine. We don't want people recreating everything. So it's interesting, you're saying if I use Dapr, and I'm going to misuse your words probably against you, uh, I can speed up something 30 to 50%. To actually get running on metal
Mark 00:30:45.950 Or to getting it into production. It's just that, yeah, going back to the, you know, don't reinvent the pattern again, don't reinvent the wheel. You just don't have to spend hours understanding a Kafka library. I mean, let's just take an example. Someone adopts Kafka. Oh, I'm gonna use Kafka for an event-driven system. So first off, going back to my earlier example, Kafka doesn't have the concept of pub/sub built into it all. So there you go, you've got, what, two months of development time just to build a pub/sub abstraction on top of that and to think about how it all works, on top of your learning of how Kafka actually works. Instead, Dapr has a component that not only has all the best practices for using the Dapr SDK, refined now from thousands of companies using it all, but also has built in the right abstractions and semantics for pub/sub messaging. And you can use all of that and run it at scale, and it's used by many, many large organizations. So you get all the benefits of Kafka, but the ease of development and less code to maintain, and you get to production, and say you don't like Kafka all of a sudden, you could switch it out without having to rip out all that Kafka code and change it to Azure Service Bus or GCP Pub/Sub, and there you go. That saves you maintenance time, less code, and you build on the best practices of other developers who've contributed to the code base.
Viktor 00:32:08.618 If we ignored Dapr for a second, why would somebody choose Kafka for pub/sub if pub/sub does not exist in Kafka?
Mark 00:32:17.207 Well, it's more of a case of building the publish and subscribe semantics inside there. Yes. So in other words, I just wanna be able to subscribe to topics. It's not that Kafka can't send messages. It does,
Viktor 00:32:27.796 I mean, kind of like, there are plenty of other solutions. Like, if you use NATS, that doesn't matter. Now, whether it's better or not, it's baked in. Why? Why the heck did you choose Kafka as a company, kind of?
Mark 00:32:41.131 Yeah. Well, I mean, that's just, I mean, exactly. Well, that's probably because someone chose that. All I'm saying is, some person might have just used Kafka before, and then they started wanting to build these things on top of it all. So we often just see people make decisions because they heard about it before and things like this.
Darin 00:32:58.072 Well, okay, let me ask this question, because we've thrown out Kafka, RabbitMQ, all the other things, you know, all these underlying things that can be tied into, but I don't wanna have to stand that up to actually do my development. What? Is there something baked into Dapr that I can use, that might even be good for a POC, maybe even for QA? I just don't need the big, heavy lift of everything else.
Mark 00:33:22.957 Yeah. Yeah, so that's exactly where this component swapping comes in. So when you install Dapr on your local machine, you download the Dapr CLI, that's a simple, you know, little download on your machine. You run the dapr init command and it sets up a local development environment on your machine. And on your local machine it deploys Redis as a container, and it uses Redis for pub/sub and for that messaging. So you now, as a developer, automatically get Redis for all pub/sub messaging, using those APIs out of the box, ready to go, and you develop against it all. And so for your local development, you're using Redis. And then when you decide that you want to deploy into production, you just swap out that Redis component for Kafka and off you go.
Darin 00:34:11.426 Which you wouldn't use Kafka, but that's fine.
Mark 00:34:13.676 Yeah. When you, you, you'd use, yeah, exactly. You'd use, uh,
Darin 00:34:18.506 Well, well you could, well you could use Redis, right? You could use a hosted Redis for if you wanted to keep it as
Mark 00:34:23.836 Well, you can. Yes. And in fact, we see Redis, a hosted Redis solution, used all the time for messaging. So let's just say, yeah, Kafka was maybe a bad example for pub/sub, people using it was a very bad example. Use RabbitMQ. So let's just say, you know, I'm using local Redis, my local container, for all pub/sub messaging, and then I go off and use RabbitMQ and that's it. I'm done. And all I have to do is, in my deployment pipeline, my code does not change. All that changes is that little component definition, and off you go.
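And on the receiving side, a sketch of what a local subscriber can look like against the default setup that dapr init creates (a Redis-backed pub/sub component conventionally named "pubsub"); the topic, route, and port here are invented, and the app would typically be launched via the dapr run CLI so the sidecar knows about it:

```python
# Sketch of a local subscriber. Dapr discovers subscriptions by calling
# GET /dapr/subscribe on the app, then delivers matching messages to the
# declared route. "pubsub" is the default Redis-backed component from a local
# `dapr init`; the topic "orders", route, and port are invented examples.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/dapr/subscribe", methods=["GET"])
def subscribe():
    # Tell the sidecar which topics this app wants and where to deliver them.
    return jsonify([{"pubsubname": "pubsub", "topic": "orders", "route": "/orders"}])

@app.route("/orders", methods=["POST"])
def on_order():
    event = request.get_json()  # arrives wrapped in a CloudEvents envelope by default
    print("received order event:", event.get("data"))
    return jsonify({"status": "SUCCESS"})

if __name__ == "__main__":
    # Typically launched with something like:
    #   dapr run --app-id orders --app-port 6001 -- python app.py
    app.run(port=6001)
```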
Darin 00:34:55.681 So how do you crack the knuckles of the developer that wants to code to Redis instead of using the Dapr APIs?
Mark 00:35:01.501 Well, they can, but, uh, if they do, then they're sort of tying themselves into Redis, as it were, as an API, and they're locking themselves into that, and they're not getting the benefits of portability of their code, and they're having to write a lot of those abstractions themselves again. You know, a pub/sub API on Redis is still going to take you some abstraction around all that. So yeah, they can do it, nothing preventing them doing that, but Dapr gives you all those other benefits I talked about: resiliency, security, telemetry. You get all those cross-cutting concerns on top of using that, and otherwise you'd have to build those all yourself.
Darin 00:35:36.274 So as a tech lead, as an engineering manager, I would tell my developers, look, do not write to the actual implementations. Stay at the API layer.
Mark 00:35:46.204 Correct.
Darin 00:35:46.954 Or we're gonna have a conversation.
Mark 00:35:48.334 Yes. And this is why we've seen Dapr used enormously inside platform engineering teams, because now you've got this beautiful contract between what the platform engineering team has and what the application team wants. And we've seen this time and time again, where actually platform engineering as a whole slightly ignores the application developer. The platform engineering team kind of goes, oh, we are giving you these message brokers and that's what you have to use, and then the application team bakes, say, RabbitMQ into their code, and they're forced to tightly couple those two together. In the case of Dapr, it provides a very clean API between the platform engineering team, which can control what they want to put in their platform, and the application team, which doesn't have to change their code if decisions are made that, you know, a different message broker is gonna be used and things like this. Nowadays, more than anything, we spend a lot of time with the teams that have built platforms using Dapr as that abstraction between the application teams and the platform engineering teams.
Darin 00:36:48.871 So if I can ship to production 30 to 50% faster by using Dapr, how much faster can I ship if I have AI writing my Dapr code?
Mark 00:36:58.811 Well, that depends on your AI and what it's doing. I have no clue. I'm not in any game to say to you what AI should be doing for you and things like this, and it probably is writing something that you should put in the code anyway, so, you know, let's just be careful about what it's doing. But what Dapr has done is it's provided an API that abstracts away how you talk to different language models. So we introduced this API called a conversation API, and again, with this component model, you can plug in either OpenAI models or the Anthropic models or DeepSeek or Ollama or whatever else you want. And so you provide a prompt, and then it calls it down and hands it back to you, layering some additional features on top of it, like PII obfuscation and, uh, prompt caching and things like this, so you can swap out your underlying models without being tied to them and have that flexibility. So we did that for sort of prompt calling as an API, and then started to combine that, of course, into the Dapr Agents framework to help you build agents that could also use these language models as well.
Darin 00:37:59.372 So what does that look like, building a Dapr agent, or an agent driven by Dapr?
Mark 00:38:04.477 Well, going back to my earlier comment, you know, I firmly believe that all agentic systems are simply distributed applications with smarts, those being language models. So what's actually happened is that we've started to bring together the APIs that Dapr has, like messaging and service invocation, and particularly workflow, and put that into an easy-to-describe framework for you to build agents that are durable. And so you can go and create a new durable agent, and you can give it a few properties, such as here's its state store to save its state into, here's its goals and its role, and here's the language model you wanna talk to, in a few lines of code. And you're off and running with, uh, an agent that effectively can use language models and tools to make decisions depending on what task you've set it. So a very easy-to-use, clean API. That's currently in the Python SDK, and that's where we're experimenting with it, and we're trying to get the Python SDK with Dapr Agents to a 1.0 sometime later this year. Um, so, you know, the agent side of these things builds on all the experience we've had building distributed systems, like messaging between agents, for example, or durability to save their state, going back to my workflow example. The fact is, if you think about an agent, it's actually a state machine. It's doing 20 different steps, and you don't want it at step 15 to die and then go, well, I just can't remember what I just did. What was my memory? What was my state? What was I doing? Because the machine crashed. You want it to carry on. The vast majority of agentic frameworks today don't have that durability, and so that's where Dapr Agents shines, in having these long-running, durable agents that allow you to take advantage of the Dapr APIs for things like messaging as well.
Darin 00:39:54.346 What sort of systems, what sort of business problems, is Dapr not suitable for?
Mark 00:40:01.759 Ooh, good question. Well, in terms of any type of vertical industry, I would say that if you're building on top of Kubernetes, by far, you should use Dapr. No question about this. Dapr is used by every industry there is. So whether it's a healthcare or flight ticketing system or a trading application, I think Dapr is applicable to all of those. I think some of the places where Dapr still has work to do is in its storage APIs. You know, it has the ability to store state today in key-value stores, but making sure it can store state inside blob storage and NoSQL storage is something we're working on. So, you know, I still think you generally have to combine Dapr with, let's just say, a database API still. If you're just retrieving and doing queries against it all, Dapr doesn't really attempt to abstract databases in any way too much, other than sort of key-value storage around these things. So I think that's one place it will evolve into. But to a large degree, for applications that you're running on Kubernetes, Dapr should be a de facto solution for everything.
Darin 00:41:11.948 What if you're not running on Kubernetes?
Mark 00:41:14.233 Well, you can still use Dapr. Dapr actually is platform agnostic. We actually have people who deploy it onto a set of VMs. Uh, you just got to do a little bit more work yourself in terms of how you manage the deployment across them all, because, you know, in the Kubernetes world, clearly there are commands for you to deploy it, and it takes care of the distribution across multiple machines. Uh, you have plenty of people who just take Dapr and deploy it on a VM and manage it all themselves, but of course that just means more work in terms of getting it there.
Darin 00:41:45.117 So with it being just a runtime, it can be done. It's just not gonna be as fun, I guess, to deal with
Mark 00:41:52.670 Yes, exactly. It's not as integrated with the hosting environment. And, you know, you'd have to use some pipeline deployment, you'd have to use some bash scripts or some scripts to deploy to every machine and manage it all yourself. Whereas, you know, Dapr has, in the Kubernetes world, a control plane. Well, it has a control plane anyway, but it has a control plane that's integrated, that gets deployed into the Kubernetes environment. And because Kubernetes has the ability to do multi-machine management, it has an easier integration with the Kubernetes operator and things like this.
Darin 00:42:24.352 I'm gonna give you a magic wand.
Mark 00:42:26.282 Yes,
Darin 00:42:26.662 questions, right?
Mark 00:42:27.662 yes.
Darin 00:42:28.345 If there was one thing about how organizations approach modernizing their stacks, what would be that one thing, or maybe a couple of things? What do you think people are getting wrong? Especially in light of Dapr existing and being graduated. It's not like it's new, put out by a couple of guys that were living in their grandmother's basement, right? This had money behind it to even come to light.
Mark 00:42:56.123 Yes. Well, I would say that, oh, I mean, that's a tricky one, because so much of this depends upon what's inside the organization itself. I think that to a large degree there is a lot of, let's just say, stasis, where people get stuck in the system that they have and there's a fear of change around these things. So more than anything there is, yeah, how do I go about modernizing what this application looks like? There is a bit of a language dependency thing here as well. You know, in the Java world you get a lot of people who get stuck into Spring Boot, and I think Spring Boot is a very powerful thing there that influences that. And I would say also, in the Microsoft world, .NET is a strong influencer in terms of how you should just use .NET around these things. And so I think a lot of people look at the language world only and say, look, I'm a Java person, so I can only use Spring Boot, and I'm a .NET person, so I can only use a .NET framework, and that sort of thing. And I think that's one thing you have to get past a little bit, because Dapr is multi-language and crosses those boundaries. I also think there's the, I wanna write this myself and enjoy the technical challenge around those things. I think that happens a lot. And it's also making sure that you spend a bit of time with Dapr. Like any new technology, you're gonna spend probably a good, let's just say, couple of days with it, playing with it and understanding its power. But once you have that aha moment, you realize that it can just do so much for you and take away so much of the pain that you don't yet know about. For example, I think a lot of developers still underestimate the need for observability built into their frameworks, and they underestimate the resiliency that they have to build. And so I think there's a combination of the developers who are more experienced realizing the problems they're gonna run into, and I think if you haven't built a system from scratch all the way through before and then had to worry about the security, the resiliency, the observability, and dealing with the failures and all of that pain that it entails, you think it's an easy task when it's not in any way. And that, I think, is the difference. So experience probably matters a lot as well in terms of what you've been through. And I think once you've played with Dapr for two days and crossed that aha moment, then you're not turning back. And that's what we see happen all the time. It tends to be the more experienced developers in an organization who pick up Dapr, push it through, and then you have a wave of other developers behind them.
Darin 00:45:37.072 It's because the experienced developers have built it all in the past, and it was not even half baked.
Mark 00:45:42.517 Yes,
Darin 00:45:43.132 don't wanna do this again.
Mark 00:45:44.677 Exactly. Yes. I don't wanna do this again. Why should I do this again? It's like saying to someone, why don't you build your own Kubernetes environment? Like, no one would think about doing that now. You know why? Why would I build my own distributed hosting system? Well, I wouldn't do that. That would be a crazy thing to do. The way I like to describe it is, the same thing that Kubernetes has done for hosting, Dapr is doing for application development. It took Kubernetes, what, seven or eight years or nine years, they said? Seven years to kind of cross the chasm, as it were, to go from, you know, early adopters to mainstream. I think Dapr, with its graduation and things like that, is at this point now of crossing that chasm and going across to become much more mainstream in its usage. And I'd like to think in the next year, two years' time, it'll have crossed the chasm and it'll become, well, obviously you should do that, especially if we're building on top of Kubernetes. That's the sort of level we're getting to. But I mean, if it took Kubernetes seven to eight years, I think it might take Dapr, it's been in existence for six years already, I think it's gonna take another couple of years as well to do that.
Darin 00:46:45.406 Two things I take from that. If you want to have CV-driven development, don't ever look at Dapr, because once you see it, you'll never go back.
Mark 00:46:53.094 Yeah.
Darin 00:46:53.804 And a variation of that is, if you're stuck with the language, it's sort of like the Eagles' Hotel California: you can never leave. Once you're into a framework, that's what it is, and that's that. And I think that's gonna be the good thing. Like you were talking about Spring Boot and the .NET framework. Once you're in there, you think you can't get out. Okay, you can, but it's gonna be really hard. I think once you're in Dapr though, I don't think you will want to leave.
Mark 00:47:18.299 No, that's true. Yes. And also, we've worked very hard in the last couple of years to make sure we bring Dapr to the Spring Boot community, so, you know, meeting people where they are, and bring it into the .NET community. And so Dapr now has really good integration with Spring Boot and with the testing frameworks inside there, particularly with Testcontainers. You know, the Java community uses Testcontainers a lot, and Dapr is amazingly integrated with Testcontainers and Java and Spring Boot. Now we've done a lot to get Dapr integrated with Aspire for the .NET world. So .NET Aspire is the thing inside there. I mean, .NET Aspire is a developer tool, Dapr is a runtime. So the combination of .NET Aspire as the developer tool and Dapr as the runtime is a great combination. So we're working on that still. And, you know, in the Python and the other communities, we're bringing it to the runtimes inside those as well, which are a lot easier really, and there's a lot less, uh, let's just say there's a lot less friction.
Darin 00:48:09.090 So Dapr existed. Why did Diagrid even come into existence?
Mark 00:48:13.940 Well, we realized that we wanted to kind of forge ahead with Dapr and actually take it to kind of a new level. And as you may have experienced, it kind of doesn't matter what you can achieve inside some large organizations. Sometimes you wanna forge your own direction and make sure that you can grow it into a very compelling business. And so Yaron Schneider and myself decided to leave Microsoft and start Diagrid, because we saw this desire to build a platform play. Think of it as a way of hosting and running Dapr such that you can have an amazing experience, particularly building applications around workflow, that allows you to build this durable execution workflow platform, run your applications there, and have hosted Dapr endpoints. We call this product Catalyst. Uh, think of it as like a serverless product, but you deploy Catalyst into your hosted environment. It has all the Dapr APIs available to you, and you can now go along and build your application against those APIs and run your code anywhere, so you're not just bound to Kubernetes. You can run it on a VM, you can run it on a function, you can run it on a container app, and yet use all the Dapr APIs as part of all that. And in particular, workflow is becoming key for running and building those applications. So that's where we built Dapr as a platform play, and it's getting great adoption and, you know, usage with the organizations we work with, as well as us being contributors back to the project itself. We continue to be now the primary maintainers and contributors to the Dapr project. So we've kept our grass roots inside Dapr itself and our contributions to the community.
Darin 00:49:59.939 So you're the primary contributors, but you're not the only contributors.
Mark 00:50:03.584 Correct. We're not the only contributors, no. There are plenty of other people who contribute from different organizations, you know, whether that's NVIDIA or, you know, Amazon or Microsoft, of course, and many others who do inside all that. Uh, you know, we've decided to invest heavily in making sure that we build a compelling technology and keep the project alive and going and continue to innovate inside it all, particularly with workflow, and that's, you know, the durable execution side of these things, because that is a very powerful technology for building these business applications. And when you combine it with the Dapr communication, like pub/sub and service invocation, yeah, we like to say that no workflow is an island. And so that combination of Dapr's communication APIs with workflow, you know, makes it very compelling for organizations. So that's where we've invested a lot of our engineering time. Other organizations invest in other things that are part of Dapr as well.
Darin 00:50:59.784 So Dapr can be found at dapr.io. That's D-A-P-R dot io, and Diagrid is at diagrid.io. That's D-I-A-G-R-I-D dot io. All of Mark's information is gonna be down in the episode description. Mark, thanks for being with us today.
Mark 00:51:15.864 Fantastic being here. Thank you for having me.