Tom
00:00:00.000
I don't personally subscribe to the analogy of it being just like a junior developer. I think there are things that LLMs do way better, and things they do way worse, than a junior developer. And key in that is that over time you expect the knowledge you're imparting to a junior developer to accrete and accumulate, and eventually they become a senior developer, someone you can rely on to work very autonomously, whereas the LLM junior developer is going to need that first-month-on-the-job level of supervision for its entire existence.
Darin
00:01:30.330
Here we are. It's the beginning of October, and with everything that's happened this year... the guest we're talking to today, we'd actually tried to set this up late last year, but I think waiting until now makes even more sense. We're talking about API testing, and while we were thinking about API testing I was thinking through some videos you've done recently in the era of AI and distributed systems. One of the videos you did recently was: do we even need higher-level abstractions anymore, because the AI can figure it out? You were talking about the Kubernetes API at that point.
Darin
00:02:10.736
Think back nine months ago, we weren't even thinking about this. Now it's like this is normal.
Viktor
00:02:17.041
Nine months ago, or a year ago, at least for software engineers, AI was a gimmick. It was amazingly good at creating a Tetris game from scratch, and that was about it. Now it's a completely different story, now that agents have more or less proven themselves together with the models. Right now we're in the next phase, which is: okay, now let's create specialized solutions that actually really excel at something.
Tom
00:02:46.788
Hey, good, thanks. Thank you for having me on the show. And like you say, it's been a long time coming, so it's great to be here.
Darin
00:02:51.678
What do you think about what I was just saying? Now, WireMock obviously is for mock testing. If you've never heard of it, it's primarily in the Java ecosystem, but I think it can apply anywhere, even if you're not writing Java apps. You could still do mocking, I guess, but we can go from there.
Tom
00:03:09.771
So, yeah, it's probably worth clarifying that WireMock mocks networked APIs. Although its origins are in the Java ecosystem, and the core library is a Java library, it's actually completely agnostic to the tech stack you're using, because the thing you're mocking is a network interface. Anything that speaks SOAP APIs, REST APIs, whatever, will be able to talk to it. WireMock open source has its own network interface for configuring it, so even if you're not a Java developer and you want to use the open source product, you can run it as a process and configure it via files or over the network. And then likewise we have WireMock Cloud, our commercial offering, which is hosted and is API- and UI-driven. It's important to make the distinction between that kind of mocking, networked mocking, versus object mocking, which is very much tied to the language and technology stack you're using; that has to be language specific.
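For readers who haven't seen WireMock before, a minimal sketch of the Java flavour of this (the endpoint and body here are invented for illustration; the same stub could equally be defined as a JSON file or sent over the admin API, which is what non-Java users would do):

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

public class MockExample {
    public static void main(String[] args) {
        // Start a mock HTTP server on a local port
        WireMockServer server = new WireMockServer(wireMockConfig().port(8089));
        server.start();

        // Any client that speaks HTTP can now call this endpoint,
        // regardless of the language it is written in
        server.stubFor(get(urlEqualTo("/users/1"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"id\": 1, \"name\": \"Jane\"}")));
    }
}
```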
Darin
00:04:00.787
The clue is in the name. What do you think, though? Rewind nine months: where do you think what you do, just in general, was then? Whereas now, in October of 2025, where do you think it really is today?
Tom
00:04:15.567
The current phase of the AI wave has affected us in a few ways. For one thing, it's forcing people to rethink the role and use of APIs in how we build distributed systems. AI agents, LLMs, on one hand they're quite good at interacting with APIs and figuring things out as they stand. But on the other hand, there are styles of API design, and ways you can build and run APIs, that are optimized for LLMs to use. So that's causing people to rethink how they build APIs, and that has a knock-on effect on us, because we're involved in, amongst other things, helping people prototype APIs and build simulations of APIs that teams use to develop consuming and producing parts in parallel, that kind of thing. So there's that aspect of it: there's this renewed interest in APIs and API design and fresh thinking coming into it. And I think the other side of it is, obviously, we're a developer tool, and AI, particularly the latest batch of AI coding agents, has really shaken up the way developers are working with code and with their systems. Increasingly the AI agent is becoming this kind of primary front end onto how people work. We've been looking at MCP in particular, the protocol Anthropic released at the back end of last year, which is a sort of universal interface between AI agents and external tools. MCP allows you to plug tools into your AI coding agent that then allow it to do things outside the bubble it operates in. So it can go and look at your GitHub issues, or, in our case, it can interact with your WireMock Cloud account and create mocks in it directly, that kind of thing. This is still somewhat frontier territory. The things people are actually really going to get value from, that are actually useful, that play nicely with the coding tools out there, that's still very much something we're seeking to discover at the moment. There are some interesting possibilities, particularly the ability to use an AI coding tool as this sort of composite orchestrator for making changes to different parts of a system, using MCP to plug into the different parts you need to affect simultaneously. There's potentially a lot of power in that, and we're only just starting to discover how to take advantage of it.
Viktor
00:06:20.773
Testing APIs sounds like almost one of the first cases, or easiest cases, for AI to tackle, right? Just paraphrasing now, but you can just say: hey, here's the schema. And there is no reason on Earth why you wouldn't have a schema today anymore; it literally takes five seconds to create one. So: here's the schema, go wild, just do the variations.
Tom
00:06:48.398
It's important to clarify as well: we don't actually do API testing in the sense of providing tooling for driving tests at an API, providing a test client. We're flipping it around and providing a mock, so that you can test an application which is calling out to an API. Having said that, what you've just said is also largely true in that context, although you'd be amazed how many APIs don't have any kind of... well, a lot of APIs do have a schema, but you'd be alarmed at how many of them are incomplete or inaccurate. The difficulty of getting to one that is really precise and descriptive and maintained and up to date, that's the thing that's still lagging behind in this business, and actually something else AI helps with as well. It's a lot easier to keep these things maintained when you have tools like that than when you're relying entirely on human beings to do it.
Viktor
00:07:31.412
But that's the thing. I'm fully aware how many companies don't have schemas, and many schemas are outdated, but that's such an easy task today for AI. Just say: hey, here's my API, the actual code. Create, update, do whatever needs to be done to get the schema. It's so easy today, right? Because there is no thinking, there is no real interaction with the user. There is no "oh, would you like it like this, or would you like it like that?" It's literally translating from one format to another, because that's what a schema is, right? It's a translation of the same API, written somewhere.
Tom
00:08:07.837
Yeah, there are some stacks in which that works better than others. If you're using languages with very descriptive type systems, that maps fairly well onto schemas. If you're using a dynamically typed language, if you're using ES6 or whatever and writing everything in Node, then getting from that to a reliable description of the API can be a lot harder. So it's variable. And then a lot of the tooling out there is patchy. There are some very well maintained projects, often ones sponsored by vendors with an interest in the space, and they tend to be fairly good. But there are some language stacks where things are maintained just by volunteers who don't have a lot of time. This is one of many reasons why, in practice, it's very patchy: you get some very well documented APIs, and some where the documentation is actively misleading.
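A rough illustration of the type-system point: a model like the one below carries its field names, types, and nesting explicitly, which is most of what a schema generator (springdoc, swagger-core, and similar) needs, whereas the same shape in untyped JavaScript has to be inferred from runtime values. The types are invented for illustration:

```java
import java.time.LocalDate;
import java.util.List;

// A statically typed model: names, types, and structure are all explicit,
// so a JSON/OpenAPI schema can be derived from it mechanically.
public record Invoice(
    String id,
    String customerId,
    LocalDate issuedOn,
    List<LineItem> lines
) {
    public record LineItem(String sku, int quantity, long unitPriceCents) {}
}
```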
Darin
00:08:48.952
If it is misleading, what can those companies or those people do? Or should they just, effectively, put up a robots.txt and say: forget it, go away?
Tom
00:09:01.072
There are plenty of tools at our disposal. This is something I've talked about quite a lot in the past. When you start working with APIs at large scale... there are organizations that have hundreds or even thousands of microservices now, and they have a real, large, systemic problem with making those APIs discoverable and available, documenting them in a way that allows other teams, other users, to take advantage of them. I think you really need to lean heavily into the kind of automation we're much more familiar with in other areas, where we're constantly maintaining a closed feedback loop. In practice, if you want to maintain OpenAPI descriptions of your APIs, you need to be regularly sampling traffic to and from those APIs and validating it against your OpenAPI documents. And then, when there are discrepancies, you need automated systems that alert people: your description and your API don't correspond to each other anymore, you need to go and do something about it. And like I said, this is where AI tools are interesting, because you can often automate a lot of those fixes. We already have some of this tooling built into our products, so you can switch on a validation mode whereby it will pass traffic through the product and validate it against the OpenAPI document you've provided, and there's a certain amount of automation of fixes we can do as well. But I think there's a step further than that in the near future, where you've got AI agents that are able to look at a discrepancy and say: okay, I can see how to fix that, I'm going to open a pull request against your OpenAPI document with a bunch of suggested fixes in it. Then the job of maintaining accurate documentation becomes about reviewing these PRs coming in from agents, rather than some hapless junior developer having to trawl through field by field every time anything goes wrong.
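The feedback loop Tom describes, reduced to a toy sketch. Everything here is hypothetical (the field sets, the sampled response, the alerting step); it only illustrates the shape of the check: diff sampled traffic against what the spec declares and flag drift.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DriftCheck {
    // Hypothetical: field names the OpenAPI document declares for this endpoint
    static final Set<String> SPEC_FIELDS = Set.of("id", "status", "total");

    public static void main(String[] args) {
        // Hypothetical sampled response body, flattened to field -> value
        Map<String, Object> observed =
            Map.of("id", "inv-42", "status", "draft", "currency", "EUR");

        // Fields seen on the wire but missing from the spec
        Set<String> undocumented = new HashSet<>(observed.keySet());
        undocumented.removeAll(SPEC_FIELDS);

        // Fields the spec promises but the API no longer returns
        Set<String> missing = new HashSet<>(SPEC_FIELDS);
        missing.removeAll(observed.keySet());

        if (!undocumented.isEmpty() || !missing.isEmpty()) {
            // In a real pipeline this would alert a human or open a PR
            System.out.println("Spec drift: undocumented=" + undocumented
                + ", missing=" + missing);
        }
    }
}
```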
Darin
00:10:46.710
And that doesn't demand to be paid more, or have more free coffee or whatever, right? Okay. And doesn't sleep.
Tom
00:10:52.950
I feel like they do demand to be paid more at the moment, judging by the price hiking that's been going on in the last few weeks.
Viktor
00:10:58.970
Oh, still. I mean, if you pay a thousand a month, you are using it more than the vast majority of people on the planet right now. Even junior developers ask for that amount of money, or more.
Tom
00:11:12.187
Yeah, it could turn into a philosophical argument very quickly, couldn't it? I don't personally subscribe to the analogy of it being just like a junior developer. I think there are things that LLMs do way better, and things they do way worse, than a junior developer. And key in that is that over time you expect the knowledge you're imparting to a junior developer to accrete and accumulate, and eventually they become a senior developer, someone you can rely on to work very autonomously, whereas the LLM junior developer is going to need that first-month-on-the-job level of supervision for its entire existence. It doesn't seem like an apples-to-apples comparison to me.
Viktor
00:11:47.066
But that's assuming one of those two doesn't improve. Meaning: I take your junior developer, and we meet ten years from now when that junior developer is a senior developer, and then I try to imagine, okay, what will be happening in this space ten years from now? And my imagination fails me immediately. I mean, I cannot predict even what's going to happen a year or two from now.
Tom
00:12:13.006
This is also true. I'm in the same position as you; I can't speculate about that, really. But the trajectories for improvement differ, don't they? The assumption is that the models will improve generally: their ability to produce the thing you really want them to produce will improve over time. The junior developer, though, will learn your domain specifically. Maybe that's the difference: they will fit themselves to your environment and your specific problems in a way that generic improvements in AI won't.
Viktor
00:12:39.091
I'm of the opinion that the only way this area can continue progressing is if we figure out how models can learn. If that does not happen at some point, this is all going to end up being pointless, right? Then we get into that story of: okay, yeah, but that person will eventually learn, while this thing never, ever, ever learns, right? And I would be very, very surprised if that does not happen at some moment.
Tom
00:13:08.141
Yeah, I think it is happening right now, isn't it? There are tools that have areas of memory that sit outside of your project that, I guess, accumulate institutional knowledge of sorts. So I think you're right that these tools are developing those capabilities. Obviously we've had RAG for a while as a predominant technique for bringing institutional knowledge in on top of what the LLMs can do generically. So those things are happening, but it still seems like we're in the foothills of really understanding how to make that stuff work. We've been experimenting with a very basic version of this technique. One of the things we wanted to do with the MCP server was to say: rather than you having to type out long prompts, or having guidelines files that essentially just articulate the same good practices about how to build a mock API, we want to encode that somehow in the MCP server, so that you can type a much shorter prompt as a new user and we'll expand it out. And actually, as it happens, MCP has this prompt concept, which isn't used very much, that is for exactly that, although it seems to be not very well supported by the tools at the moment. So what we've been experimenting with is this idea of having embedded documentation, and just an MCP tool which allows you to go and fetch the documentation. The thing I've been experimenting with recently is the idea of API crawling. So if you have an API, maybe you have some information about it, but it's a bit sketchy: you have the OpenAPI document, which is maybe a bit out of date or a bit imprecise, and you want to say to the AI, go and figure this out for me. Go and make a bunch of requests, try to get to the point where you've made successful requests, then capture the results of that, update my documentation, and update my mock from it. So embedded in our MCP server is essentially this kind of article describing the process of crawling: this is how you do it, this is how many times you try things, these are the success criteria. Now if I type a prompt saying, go and crawl this API for me, here's the base URL, here's how you're going to get the credentials, then it will go: oh, okay, I've got a document about that, I'll go and read it, expand all of that into the context, and then hopefully do a better job of it than if we'd just started with the user prompt and nothing else. There are a bunch of different ways to skin this cat, and I'm not saying we've got it perfectly right yet, but that definitely helps get a better outcome for the user than just taking their prompt and nothing else.
Darin
00:15:21.625
Why would somebody really need to go ahead and use WireMock? I'm used to mock objects, but I'm trying to figure out: why would I need an engine for something like this?
Tom
00:15:37.760
There's a bunch of reasons why. First of all, it's a complementary technique to mock objects. There are places where mock objects are absolutely the right thing to use, and I think there are places where they're not. Starting with when you're in your inner loop, when you're writing tests and test-driving some piece of code: there's a very old heuristic in mock objects, which is that you shouldn't mock types you don't own. The originators of mock objects talk about that as a kind of maxim they stuck to, because they found that whenever you try to mock things you don't own, you get into all kinds of trouble with the mock being unrepresentative of the real thing. And HTTP clients are a particularly good example of this: they tend to be really difficult to mock well with object mocking tools, and you end up writing tests which throw out false positives, are misleading, harder to maintain, all of that kind of thing. Often in modern systems, very distributed systems like microservices, a lot of the complexity is in the integration layers of the code rather than the core domain. By mocking things at the network level rather than with mock objects, what you're allowing yourself to do is write tests that exercise all of the production code you'll be running for real. A really common pattern for using something like WireMock is that you start up your entire microservice in pretty much the form you'd run it in production; it's just that you've configured the external endpoints it calls out to as local web addresses rather than the actual domain names of the APIs it talks to. It's amazing how many additional bugs you can find that way: with configuration, with the way serializers are set up, concurrency bugs, things to do with threading and so on, that when you use mock objects you're eliding from the information you're gaining. So it's a very powerful technique in that regard. It also helps particularly when a new API, or a new API feature, is needed to support some new product feature. Producing and consuming teams might come together and co-design it, but initially there isn't an implementation. The producing team has to go away and build this new API, and the consuming team doesn't really have anything to work with at that point. Granted, they could write some mock objects that are proxies for how the API will be structured. But prototyping the API in a mocking tool instead means that something which is a far more realistic simulation of the real thing will emerge. Client teams can start integrating with it and testing its validity very early, and again, you'd be amazed at the extent to which you catch defects in your API design, ways in which it just isn't fit for purpose for the feature that needs to be built, in a very early, left-shifted way. Because the alternative is that you wait for the API team to go off and do their thing. They spend a few weeks building this API feature, they ship it into a test environment, then your front-end team, your mobile team, whatever, starts coding up a feature and goes: oh, hang on, we completely missed all of these bits of data that we're actually going to need. Can we feed this back into the design? And then they sit on their hands for another few weeks while this cycle occurs again.
It's a great productivity aid, and a great quality aid, in that regard.
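The "whole service against a local mock" pattern, sketched roughly. The configuration property and service entry point are assumptions for illustration; they would be whatever your own service uses for its external endpoints:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

public class ComponentTestSetup {
    public static void main(String[] args) {
        // Stand in for a real downstream dependency, e.g. a payments API
        WireMockServer payments = new WireMockServer(wireMockConfig().dynamicPort());
        payments.start();
        payments.stubFor(post(urlPathEqualTo("/charges"))
            .willReturn(aResponse().withStatus(201)));

        // Hypothetical property name: point the service under test at the mock
        // instead of the real endpoint, then boot it exactly as in production,
        // exercising its real HTTP client, serializers, and threading.
        System.setProperty("payments.base-url", payments.baseUrl());
        // MyService.start();  // placeholder for however your service boots
    }
}
```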
Darin
00:18:39.522
It seems to me that if you're a producer, I'm going to use your word, a producer of a backend, should I be creating my WireMock of that before I actually create the new thing? It seems like I need to create my spec, which then produces the WireMock, which then produces the actual API.
Tom
00:19:02.232
This is the way a load of our customers work: when they have these co-design sessions between producing and consuming teams, they'll build the mock as they're having the discussion. WireMock's unit of work, I suppose, is essentially a stub, which is a request-response mapping. It says: if we see a request that conforms to these criteria, then here's a recipe for producing the response. So they're examples, really, and teams will build up a set of these examples representing the behaviors they think they need for whatever feature they're building. In that workflow, we'll produce the OpenAPI for you off the back of those examples as you're building them. So an initial draft of the spec will emerge as you're creating these examples, which you can then take away and feed into your governance process, if you have one; you can tidy it up and enrich it and refactor it, that kind of thing. This is a really productive way to work for a lot of organizations, particularly where you potentially have multiple teams as stakeholders in an API design. Following a process like this, with the simulation as the produced artifact, reduces a lot of ambiguity about what it is you're producing. The producing team has the spec to work from, the consuming team actually has something they can integrate with and build against, and, like I say, they can provide that early feedback.
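In the Java DSL, one of those examples, a request-matching criterion paired with a response recipe, might look like this. The endpoint and fields are invented for illustration, and the snippet assumes a WireMock server is already running on the default localhost port:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class StubExample {
    public static void main(String[] args) {
        // "If we see a request that conforms to these criteria..."
        stubFor(post(urlPathEqualTo("/invoices"))
            .withHeader("Content-Type", equalTo("application/json"))
            .withRequestBody(matchingJsonPath("$.customerId"))
            // "...here's a recipe for producing the response"
            .willReturn(aResponse()
                .withStatus(201)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"id\": \"inv-1\", \"status\": \"draft\"}")));
    }
}
```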
Darin
00:20:15.614
So I'm gonna take us back to the AI part now. Shouldn't we be using AI to help produce the OpenAPI spec? It's got a lot more knowledge than probably any human will ever have about it.
Tom
00:20:28.779
Yeah, it can work very well for that. AI is very good at producing... well, it's good at producing small amounts of OpenAPI. The thing I stumble across the most is that OpenAPI documents tend to get very large very quickly, exceeding token limits or context window limits very quickly. So it's not a completely solved problem at the moment. Having said that, if you say: I want you to create a new endpoint in the spec for creating a new draft invoice, something like that, and I want you to include these fields and any other metadata that would typically be associated with it, it will generally nail that pretty well. This is one of the things we support through our MCP tooling, so you can just tell it to go and do that directly: build the mock and the OpenAPI directly in your cloud account. So yeah, this is definitely a good way of avoiding a lot of the grunt work involved in authoring these specs. Because in the past, even for teams that followed a design-first process, where authoring the OpenAPI is the first thing you do when producing new API features, it's very labor intensive. Traditionally there's a lot of work getting from that abstract description of what you really want down to a detailed spec format. So yeah, AI is very helpful in that context.
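The kind of well-scoped increment Tom describes, a single new endpoint, might come back as an OpenAPI fragment along these lines. The path and fields are invented for illustration:

```yaml
paths:
  /invoices/drafts:
    post:
      summary: Create a new draft invoice
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [customerId, lines]
              properties:
                customerId: { type: string }
                lines:
                  type: array
                  items:
                    type: object
                    properties:
                      sku: { type: string }
                      quantity: { type: integer }
      responses:
        "201":
          description: Draft invoice created
```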
Darin
00:21:34.419
So it's curious to me that as I've worked on things that are not mock- or WireMock-enabled or whatever, I've received ideas from the AI interactions. I was like: oh yeah, I probably should include that. I wasn't even thinking about that; it wasn't in any discussions we've ever had. Where does that put us now? We thought we were seniors, but yet here's this hapless junior developer that has the knowledge of the world at its fingertips, faster than I can get there.
Tom
00:22:09.859
Yeah, although, as a senior engineer myself, I find the junior engineers in my team still surprise me in that way fairly often. Ideally you want a team that brings together a bunch of different perspectives, a room full of people who will all be saying the things somebody else didn't think of, and I think the AI adds an extra string to that bow. It is very useful in that regard. I've spoken to a lot of people who don't use AI for generating code or artifacts directly, but as this sort of ideation partner. They'll say: I think I want to do this, ask me some questions about it and then give me some ideas about how this could work. Really what they're doing is kicking things around and eliciting the things they haven't thought of, and then they go away and build that. And I guess the other positive effect it has, and this isn't a universally positive thing about AI, is this sort of homogenizing effect it has in a lot of ways. One thing a lot of people have talked about is how the most popular frameworks, the most popular languages, the most popular programming patterns, the ones most represented in the training data of the LLMs, are always going to be the ones that end up being generated or represented by it. But when it comes to things like API design, this is definitely a good thing, because really you want your API designs to use a bunch of familiar idioms. If there is a most popular way of structuring an address object, or doing pagination (there are a number of different ways of doing it, but there are certain popular patterns that get adopted), and you're not intimately familiar with all of those patterns and you're trying to design an API and you ask the AI to do it, you're probably more likely to get something that's going to be familiar to end users, because it's well represented in the training data. Whereas I think one of the problems with API design in the past has been engineers winging it and making up ways of doing things that aren't consistent with other designs, where consumers of those APIs then have to go and learn something non-idiomatic in order to make the API work. So I hope AI will have this positive effect of smoothing that out and homogenizing API design over time.
Viktor
00:24:10.140
Also, part of the problem is precisely that models are trained on public data, and public data is crap, right? If you just ask: what is the most common way to do something, which of the five options occurs most often on the internet? That is definitely not a criterion that provides any guarantee that it's actually the right way to do it. The fact that everybody does it like this does not mean it's a good way to do it, right?
Tom
00:24:44.256
There are certainly cases where that's true. I mean, it's the most upvoted thing on Stack Overflow, isn't it? That was the pre-AI version of this problem, where you'd have some absolutely terrible answer, some awful piece of code, that's been upvoted a thousand times anyway, because it kind of worked for everybody who pasted it into their IDE. This becomes idiomatic even though it has no right to be. I get what you're saying.
Viktor
00:25:04.605
Not only that it worked for a majority of people, but, you know, when I go to Stack Overflow, the first result looks like something I need. I get it: it's most upvoted. And guess which one I upvote myself? That first one.
Viktor
00:25:17.808
Whichever comes to the top is what everybody's going to copy and paste, and they continue upvoting it. That's the game, right?
Tom
00:25:25.065
Yeah, it's true, isn't it? You see that effect all over, that gravitation around the first mover. I think that's the way the internet works in general, isn't it? And AI is just a reflection and an amplification of all of that. Like I say, I think it's a mixed issue. In the case of pagination, and I'm picking on pagination because I've been thinking about it recently, there are several ways of doing it, and they're all kind of okay. It's not like any of them is particularly bad or good, but you kind of wish everybody would just pick a way and stick with it, pick a set of parameter names and stick with them, rather than it being similar but subtly different every time, and you having to write all this bespoke code which is just about solving the same problem over and over again. Like I say, it's about picking something and sticking with it; I don't care what the solution is in that case. So I'm hoping in those instances AI's homogenizing effect is going to improve matters. But yeah, I can see there will be many other cases where the opposite is true.
Viktor
00:26:18.639
But then, in that case, if AI homogenizes things, and if AI pushes for the things that are most commonly used and whatnot, does that at the same time mean that no new players can enter the scene anymore? Because, let's say you just came up with a tool for X, and there are already five players in X. You will never be recommended; you will never be used by anybody, ever, right? You never get the equivalent of that first thumbs-up on Stack Overflow, right?
Tom
00:26:53.392
Yeah, I think there is a danger of that, definitely. And again, this isn't a completely novel risk, because Google ranking and SEO were the equivalent of this in the past. But I think AI definitely concentrates the problem even further and provides a further advantage to entrenched tools. And I'm laughing because I've had more than one conversation with investors where they actually quite like the fact that this is happening, with respect to companies built on open source tools that are already well established in the training data, let's say. But yeah, if I was starting out with a new product, a new developer tool for instance, I would be scratching my head a lot at the moment about how you overcome the incumbent gravity that AI creates and find new ways of promoting tools. I'm sure people are finding ways around this. And the thing is, I suppose, ultimately LLMs, in terms of how they choose to promote things or not, work in a not totally dissimilar way to Google. It's ultimately an SEO problem to get people to try stuff, and I think eventually people will wise up to how you can do SEO for LLMs and get their tools promoted even if they're not the most established ones around.
Viktor
00:28:04.681
But here's the big difference, right? If I go to Google and search for how to do pagination with APIs, the first result will be the same as what we spoke about, but there is still a second and a third and a fourth and a fifth. The vast majority of people will not go to those, but some do. Now, if I do the equivalent with models, "can you do pagination for me?", there is no second, third, fourth, fifth. Simply: oh yeah, of course I can, here we go. Is it working? Yes. Thank you so much.
Tom
00:28:32.960
You're cutting off the long tail of alternatives, aren't you? Yeah, I suppose that's a valid point. I guess it depends how you're using the LLM, doesn't it? If you're asking it, can you give me some alternatives for this, then it will. But if you're saying, just do this for me via some sensible means, I don't care, then yeah, you're right. Someone was saying this the other day: it's like the "I'm feeling lucky" button in Google, isn't it? You're saying: just give me the thing, don't give me a bunch of options, whatever you think the thing is, give it to me. That's the implicit default mode in which LLMs work, particularly when you're asking them to generate code. They're saying: okay, I'm going to take that top Stack Overflow answer and munge it into your code base, unless you ask me to do otherwise.
Viktor
00:29:11.742
I mean, I personally already abandoned a couple of formats that I'm one of the very few people using. Nobody knows what it is, right? Nobody uses it, and I love it. Now I've abandoned it myself, because it's rubbish. I mean, it's rubbish when using it in Claude Code, right? I simply get five times worse results than if I use the alternative. And before that it was the opposite: amazing, right? Yeah, you're the only one who likes it, but now I'm almost forced to abandon things that I hold very dear, things that are obscure, less common.
Tom
00:29:49.115
Yeah. I'm trying to think of specific examples, but I'm sure I've spoken to other people who've mentioned similar things: they can see their favorite tools atrophying because of this consolidating influence of AI. So yeah, I can absolutely believe that.
Darin
00:30:01.975
This homogenization, though, man. It's the senior in me, the almost forty years in this business in me, going: but that's not how you do things, that's not where you get innovation from. But I'm slowly but surely resigning myself to: you know what, do we need innovation anymore? At least when we're writing just normal, quote-unquote, code.
Tom
00:30:26.733
Yeah, I mean, you hear people saying that programming language design features start to matter a lot less, and their amenability to being LLM-generated starts to matter more. It's not so much that the innovation stops, but the locus of innovation moves away from the language design itself, or at least language design in the way we're familiar with it becomes less of a human concern and more of a machine concern. In the same way, there was a point in time when CPUs were designed with instruction sets that were meant to be user friendly; everyone talks about the PDP-11 being the high-water mark of programmer experience at that low level. And then, after the C compiler came along, nobody cared: the compiler did all the messy heavy lifting of turning the human artifact into the machine artifact, so chip designers were free to come up with instruction sets optimized for compilers, for machines to talk to, rather than for human beings. And again, this is the subject of speculation at the moment, I think, but people talk about the programming language being pushed beneath the place where a human being interacts with it, with the LLM's interface now being the human interface. So the innovation is around the tooling for getting from that new human interface down to something the machine can consume.
Darin
00:31:39.391
I mean, it seems like the higher-level languages could in theory go away, and we end up with either bytecode or just raw assembly. At the end of the day, it's all assembly anyway.
Tom
00:31:51.779
Yeah, it's interesting, isn't it? Because the only counter to that, I guess, is that they're still large language models, right? They've still been trained on linguistic constructs that are human created, and if the high-level languages went away, then there isn't a corpus of training data that matches textual descriptions of things up with "this is some code to generate". And I guess also high-level languages aren't just about us being able to type human-friendly text; they're also full of abstractions. Obviously, if you're writing assembly, it's unabstracted, low-level code, which is harder to produce generally for that reason. So it feels like something resembling what we have now is going to have to continue. But whether there are changes in the way languages are designed, or ultimately changes in idiomatic use of them over time, I'm sure that's probably happening already. There are other people much better qualified than me to talk about it, but the way those languages get used is undoubtedly going to change. In the same way as I described with APIs, people are talking about how they're shifting to be more LLM-friendly, doing things like providing much more denormalized, local context in every API interaction, rather than being well normalized and non-repetitive in the way human programmers like. I expect there's probably a whole load of analogs to that and more within programming languages as well.
Darin
00:33:10.858
Well, I think we can agree that APIs are here to stay, probably even more important now than ever.
Tom
00:33:16.698
Yes, I would agree with that. It's hard to see how this universe of AI agents doing everything for us is going to flourish without APIs, in some shape or form, being the foundational infrastructure layer, because AI agents need to do things; they need to interact with the world outside of their own bubble, and APIs are the way to do that. But how this happens is still something that's emerging. The last year or so, the onset of AI agents, has really shown how inadequate a lot of products' APIs are. You have your web interface, which does everything the product does and makes it easy for users to onboard themselves, become a customer, complete a transaction, in a way where they can discover their way through it: they can turn up to the website, know how to interact with it, and complete whatever tasks they want to complete. Trying to do the same thing with the API, by comparison, is really hard. A lot of companies put all these barriers up: you have to go and register yourself, maybe you have to talk to someone in biz dev or sales to even get an API key before you can start using it, and then you find there are certain features that aren't available that way. So in some quarters there's a battle going on, with the AI people saying: we're just going to start scraping the website, because that way we can get access easily, we're not getting rate limited, we've got access to all the features, and it's self-describing in a way the API is not, so the AI can actually interact with it more easily than with the API. And then you've got people putting up bot blocking and all that kind of stuff to counter it. It feels like some equilibrium is going to have to be found eventually, where API design evolves so that the API really does have parity with the human interface into every product, to prevent this kind of arms race between website owners and AI agent makers.
Darin
00:34:54.622
I am glad you brought up scraping. We won't dwell there too long, but scraping will never go away unless we get rid of the UI. I mean, could we see a future where there is only a text prompt, and that's it? All we're producing now are APIs that are well used, or have good contracts, for lack of a better term, so that the AI agents can do their work.
Tom
00:35:21.235
I think there's a place for that, but personally I very much doubt that will be the only thing available. There's still going to be a huge amount of the kind of APIs that we already know and are familiar with, partly because of things like volume of transactions. You don't want the overhead of doing inference in every bit of compute you're doing. Quite often you have APIs doing very high volumes of traffic, doing very deterministic, often quite simple stuff, but at a very high volume. Look at an e-commerce website's constituent APIs on Black Friday: you don't want AI anywhere in the mix when you've got thousands of transactions a second happening. So I think there's going to be that universe of API-mediated interaction, which, if anything, will grow; I think the presence of AI and the new products it will drive will also drive an increase in the traditional style of API usage. But at the same time, there will be this additional, agent-focused new style of API that is still evolving at the moment, and that, ideally, will be designed to stop you wanting to go and scrape the website with an agent. If you're a travel booking website and you decide you don't want tons of agents crawling all over your website in a way you can't really control, instead you'll say: okay, here is the API for agents, specifically optimized for them, so they can show up, they can register themselves, they're not blocked by some human interaction, and we are sanctioning this. You can interact with it this way with our blessing; we won't block you or anything like that. But you have to use this rather than scraping our website. And I think that class of API is going to grow a lot as well.
Darin
00:36:52.533
Well, isn't that where WireMock could come in and help with this, instead of having to build out the full thing? It's like: you want to do this kind of work? Go to this endpoint. It's unfettered, you're not going to be hurting us, and we're not going to be cutting you off.
Tom
00:37:06.152
Yeah, absolutely. One of the things WireMock supports is prototyping APIs that don't exist yet. And particularly when you're building agents, you don't really know how well they're going to get on with your API design. Being able to rapidly prototype, being able to say: okay, we're going to formulate this endpoint in this way, run the agent with it, see if it can actually make sense of it; oh no, okay, it didn't, it tried to call it the wrong way several times, so we're going to reformulate it differently. Being able to support that kind of rapid prototyping process, validating that your agent gets on okay with it, is really important. It's a lot easier than shipping lots of different versions of a production API before you find out which one is going to stick.
Darin
00:37:44.653
So, WireMock. It can be found at wiremock.org; that's the open source side. There's also the commercial side, which is wiremock.io. Now, you sort of talked about it, but give us a little bit more of a pitch on the .io side of things.
Tom
00:38:03.487
So, I've been doing the open source project for a long time. Quite a few years ago, around the time microservices started coming to the fore, I started noticing organizations who had been using it for what I think of as inner-loop-type development, unit and component testing, using code-driven mocks to support the building of code. Teams who were already doing that wanted to start using WireMock in a broader context. They wanted to deploy it into integration environments where maybe you had a mix of mocks and real systems, where maybe you wanted to test collections of closely related services together but isolate them from the rest of your environment, so you didn't have to stand up a 500-microservice environment just to test the three services you cared about in the middle. So there were those kinds of problems; there were those using it for performance testing and other non-functional testing; and there were those using it to facilitate cross-team workflows, where you had producers and consumers in different teams who wanted to collaborate around an API design. We started seeing teams building their own infrastructure, their own supporting configuration management systems, even building custom UIs over the top of it to support all of this. And the commercial product is really an attempt at saying: you don't need to keep building this stuff over and over again; we'll give you the tools you need to use WireMock in an enterprise context, in a microservices context like that. It's hosted in the cloud, so you can just spin up a mock API and it'll be on a public domain name. As I mentioned before, it supports this prototyping workflow and the bi-directional generation of OpenAPI. There's a bunch of advanced features you tend to find you need more when you're in mixed environments. One that's been most popular recently is the ability to do stateful mocking: capturing or generating bits of state in one interaction, and then checking or consuming it down the line. What we found is that the broader the scope of your testing, so if you're in a mixed environment where you're testing a bunch of services, some real and some mocked, then being able to maintain state across multi-step interactions becomes really important. Whereas when you're doing unit-ish testing, you can get away a lot more with stateless mocks, because the scope of your tests tends to be narrower and focused on single state transitions. So yeah, it's really about when you want to scale up your use of mocks into that larger environment, and where, as I also mentioned earlier, there are problems of maintaining your mocks and making sure they remain true to the real thing. When you're only doing a bit of mocking, it's easy to do that, but when you have hundreds or thousands of microservices, and everyone's got their own mocks of them, you have this combinatorial explosion of mocks to maintain, and we also aim to help you with that, when it starts to become a problem at the scale you're using it at.
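Open source WireMock's scenario feature gives a flavour of stateful mocking (the commercial product's version is richer, per Tom's description). A minimal sketch, with invented endpoints, assuming a server on the default localhost port: state captured by one interaction changes what later interactions return.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.stubbing.Scenario.STARTED;

public class StatefulStubExample {
    public static void main(String[] args) {
        // Before anything is added, the cart reads as empty
        stubFor(get(urlEqualTo("/cart"))
            .inScenario("cart")
            .whenScenarioStateIs(STARTED)
            .willReturn(okJson("{\"items\": []}")));

        // Adding an item moves the scenario into a new state
        stubFor(post(urlEqualTo("/cart/items"))
            .inScenario("cart")
            .whenScenarioStateIs(STARTED)
            .willSetStateTo("item-added")
            .willReturn(aResponse().withStatus(201)));

        // Subsequent reads see the state captured by the earlier interaction
        stubFor(get(urlEqualTo("/cart"))
            .inScenario("cart")
            .whenScenarioStateIs("item-added")
            .willReturn(okJson("{\"items\": [{\"sku\": \"ABC-1\"}]}")));
    }
}
```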
Darin
00:40:36.485
So all of Tom's information will be down in the episode description. And again, WireMock can be found at wiremock.org for the open source, or wiremock.io for any commercial needs you might have. Tom, thanks for being with us today.