DOP 109: How to Test Microservices

Posted on Wednesday, Jun 2, 2021

Show Notes

#109: You’ve made the decision that you’re going to drop your monoliths and move to microservices. Have you given any consideration to how you are going to test your microservices? Beyond that, have you thought about how you can make testing easy for the consumers of your microservices?

Hosts

Darin Pope

Darin Pope is a developer advocate for CloudBees.

Viktor Farcic

Viktor Farcic is a member of the Google Developer Experts and Docker Captains groups, and a published author.

His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).

He often speaks at community gatherings and conferences (latest can be found here).

He has published The DevOps Toolkit Series, DevOps Paradox and Test-Driven Java Development.

His random thoughts and tutorials can be found in his blog TechnologyConversations.com.

Rate, Review, & Subscribe on Apple Podcasts

If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!

Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!

Sign up to receive an email when new content is released

Transcript

Viktor: [00:00:00]
If you're selfish, you should make versionless APIs. I think the core of the problem is that for you, as a person managing some application that has an API, it's so much easier not to version that API. Less code. Fewer things to worry about. You're going to make everybody else suffer in the process.

Darin:
This is DevOps Paradox episode number 109: How to Test Microservices.

Darin:
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darin and Viktor, pretend we know what we're talking about. Most of the time, we mask our ignorance by putting the word DevOps everywhere we can, and mix it with random buzzwords like Kubernetes, serverless, CI/CD, team productivity, islands of happiness, and other fancy expressions that make it sound like we know what we're doing. Occasionally, we invite guests who do know something, but we do not do that often, since they might make us look incompetent. The truth is out there, and there is no way we are going to find it. PS: it's Darin reading this text and feeling embarrassed that Viktor made me do it. Here are your hosts, Darin Pope and Viktor Farcic.

Darin: [00:01:21]
On our previous episode, we were asking, why do we want to use microservices? And I think the basic reason boils down to one thing: time to market.

Viktor: [00:01:33]
Yes, it's always time to market.

Darin: [00:01:35]
A monolith is not always time to market.

Viktor: [00:01:38]
I mean, whoever is behind that monolith would like to get to the market faster. I've yet to find the person who says, no, everything else being equal, I want to go slower to the market, and I insist on everything else being equal. I hear people saying we must go slower to the market because we have certain quality requirements or this or that, but everything else being equal, nobody is going to say, I want to go slower rather than faster.

Darin: [00:02:07]
But in order to go either slow or fast, we need to test. So how should we test microservices?

Viktor: [00:02:19]
The confusing answer would be that we should test them the same way we test everything else. The difference is that some of the things that you might have been considering nice to have are becoming more important. Specifically, I think that mocks and stubs, or simulations of the loosely coupled dependencies of your service, are becoming more important. Before, if you had one application, and I will not say one application because it's almost never one, but let's say one application with a frontend and a backend and then the database, then setting up all three of them for testing only the backend is not that big of a deal. But if we split those into 10 different pieces each, we suddenly have 30 different pieces. Now setting up 30 of those for the sake of testing one is suddenly very inefficient. I mean, it was always inefficient, but now it's more inefficient. And if we add to that story the need to be autonomous, as in, hey, I can develop my stuff independently from yours, then mocks are indispensable. Right? I'm pretty sure that you get the same question I do: okay, Docker, containers, this and that, how do I run my whole system on my laptop? And then the

Darin: [00:03:55]
answer is you don't.

Viktor: [00:03:57]
Yeah, you just kind of remember. Okay, so where I am right now, physically, this is before COVID: I'm in a bank and he's asking me the question. They cannot fit their system into a whole data center. One data center is not enough, and he's asking how to run it locally. It's not going to happen, unless we are talking about being a WordPress developer, right? If it's WordPress, yeah, you run WordPress with whatever database it needs in the background, and that's about it. With anything more complex, you cannot run the whole system. At least not for them. At least not while developing, let's put it that way. In staging, yeah, maybe, but while developing, you cannot expect to run everything.

Darin: [00:04:46]
And I've had that question asked of me before. It's like, well, I need to be able to run my full system that has 150 microservices on my local laptop, and I need it to run in Docker Compose, because that's what I like using, but we're actually going to be deploying it to Kubernetes. I went, then you need to be running Kubernetes locally. You don't need to be using Docker Compose. If your target runtime is Kubernetes, you need to be as close to Kubernetes as possible for as long as possible, or as early as possible, I should say. And now you can, right? It used to be that you couldn't.

Viktor: [00:05:18]
I think that people are failing to see the difference with the vast majority of tests, which you should be running locally. When I say locally, it could be some cluster, but in your personal development environment, whether that's your laptop or a namespace in Kubernetes, doesn't matter, so for the sake of argument I will be saying locally, right? While you're developing, you want to run tests as frequently as you can. And you don't want to wait until you push that to a Git repository so that your CI/CD pipelines do whatever they need to do. And when I say frequently, I don't know, you run tests every few minutes. Maybe once every half an hour. Let's be generous here, right? And you need to assume that you have only your application, nothing else. And there is actually one more assumption, and this is very important. You need to assume that there is a clearly defined communication interface with everybody else. So if your application is sending a request to some other application, you know exactly what the structure of the request is and what the structure of the response is. Now, you cannot not know the structure of the request, because otherwise, how are you going to write the code that sends that request? And you need to know the response as well, because your application is getting that response, probably putting it into some data structure, and doing something with it. So it would be impossible for you to write the code of your app without knowing the API of the application you're speaking with. And if you know the API, then is it really so hard to write a mock? Whatever that mock is. It could be a database. It could be HTTP. It doesn't matter. It's so trivially easy to write a mock for requests, and yet hardly anybody does it.
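
To make that concrete, here is a minimal sketch of a hand-written HTTP mock. The pricing endpoint and payload are hypothetical stand-ins; the point is that once the contract is known, the mock is a few lines of code:

```python
# A minimal hand-written mock of a downstream HTTP dependency.
# The /v1/prices endpoint and its payload are hypothetical examples;
# substitute the contract of whatever service your app actually calls.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingServiceMock(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/prices/sku-123":
            body = json.dumps({"sku": "sku-123", "price": 9.99, "currency": "USD"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point your service at http://localhost:8081 during local test runs.
    HTTPServer(("localhost", 8081), PricingServiceMock).serve_forever()
```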

Darin: [00:07:11]
Who should be responsible for writing that mock or that stub? Should it be the application developer that's actually trying to integrate with the other system, or should it be the other system providing a mock that people can just use?

Viktor: [00:07:25]
I would say the other system should provide that mock, and those things can usually be generated automatically now. You know, you can use Swagger to define what you will have in advance, and then you develop that something, and then you create mocks out of it. There are other ways. It's relatively easy and straightforward. Here's the important thing, and people might dislike what I'm about to say: you should test your service never against the upcoming release of some other service, but against production. When I say production, I don't necessarily mean the physical production, but the production version of some other service.
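
That provider-side idea can be sketched in a few lines. The operations and example payloads below are hypothetical; in practice, tools such as Stoplight Prism or WireMock can serve mocks straight from a published API definition:

```python
# A hand-rolled sketch of serving canned responses taken from a provider's
# published API definition. The operations and payloads are hypothetical.
EXAMPLES = {  # (method, path) -> example response lifted from the spec
    ("GET", "/v1/prices/sku-123"): {"sku": "sku-123", "price": 9.99, "currency": "USD"},
    ("GET", "/v1/prices/sku-999"): {"error": "unknown sku"},
}

def mock_response(method: str, path: str) -> dict:
    """Return the provider-published example for this operation, if any."""
    return EXAMPLES.get((method, path), {"error": "not found"})

# Consumers exercise their client code against these canned answers.
print(mock_response("GET", "/v1/prices/sku-123"))
```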

Darin: [00:08:06]
Why do you say that?

Viktor: [00:08:07]
Because that's the only way to establish the autonomy, right? If I need to test my stuff against the stuff that you're working on right now, then we are going to spend more time going back and forth about what you're doing right now than me focusing on my work and you focusing on your work. Eventually you're going to release your work later than expected, because we just spent half of the time trying to figure out what you're going to do and how you're going to do it. Here's an example. Let's say that your application is some hobby application that writes tweets, a way to automate writing tweets, right? And your application needs to communicate with Twitter through the Twitter API. Which version of that API are you going to use? The production one, or the one that Twitter is working on right now, the upcoming one?

Darin: [00:09:05]
I don't have a choice. I can only use what's available to me.

Viktor: [00:09:09]
Exactly. And even if you did have a choice, let's say that Twitter developers were doing it all in the open. Hey, production is version two, and we are working right now on version three with breaking changes. We don't know exactly when it's going to be finished. It could be tomorrow. It could be next year, but we are doing it in the open. You can know everything we have right now. Would you choose that something?

Darin: [00:09:37]
I would not choose that something as my primary, but I would be using it, since it is in the open. Both the 2.0 and the 3.0. I would be coding against both. I would have to, because I have no idea when three is going live.

Viktor: [00:09:54]
Exactly, but then, excluding some special cases, wouldn't it be more efficient for you to wait until that something is released, or at least until it is almost certain not to change?

Darin: [00:10:16]
Yes, I would wait until it's what they're calling either a release candidate or stable or whatever their phrasing is.

Viktor: [00:10:24]
Exactly. Or, to define it in a slightly different way: you would maybe start using that upcoming version three when they say, hey, this is not fully finished, but our API is finished. There will be no more breaking changes to this API. The API is finished. Go. Right?

Darin: [00:10:48]
Yes.

Viktor: [00:10:49]
So from that perspective, the API of that hypothetical version three is actually production, in a way, right? It is stable. It's not going to change. Now, the code behind that API is going to change, but you don't care about the code behind Twitter's API. You care about the API. Is this finished? Yes. Go. Then I can work against it, but I cannot work against it in advance. In the early stages of an API or a new release or feature or whatever, those things change so frequently, and then five different teams and five different applications are going back and forth. Oh, I added this field. Oh no, I removed that field. Oh, I changed this from integer to double. It's just too much. Too much waste of time.

Darin: [00:11:33]
So we talked about mocks and stubs. Is there ever any reason to test a microservice against a real live system?

Viktor: [00:11:42]
Yes. I would say potentially once you create a pull request, which is a signal that, hey, my work is done, most likely. There might still be changes depending on the review and the results of this and that, but more or less I'm finished. I create a pull request, and from there on we're talking more about automation than me doing something frequently all the time, and then I don't mind if you create a production copy of your system, or let's say a replica of the production releases of your system. But again, it's production, right? We're not talking about a staging environment where my new release is mixed with future releases of other applications. We're not talking about that. We are talking about my application, at some moment, being deployed in an environment where it speaks with production releases of the rest of the system, and then we run some integration tests or whatever.
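
As a sketch of what that post-pull-request automation might look like, assuming a hypothetical PROD_LIKE_URL variable that the pipeline points at a replica running production releases of everything else:

```python
# Hypothetical integration test meant to run from CI after a pull request,
# against an environment running production releases of the other services.
import json
import os
import unittest
import urllib.request

PROD_LIKE_URL = os.environ.get("PROD_LIKE_URL")  # set by the pipeline, e.g. a replica cluster

@unittest.skipUnless(PROD_LIKE_URL, "only runs when CI provides a production-like environment")
class OrderServiceIntegrationTest(unittest.TestCase):
    def test_price_lookup_against_real_dependency(self):
        # Exercise the real (production-release) dependency, not a mock.
        with urllib.request.urlopen(f"{PROD_LIKE_URL}/v1/prices/sku-123") as resp:
            self.assertEqual(resp.status, 200)
            body = json.load(resp)
            self.assertIn("price", body)  # the part of the contract we rely on

if __name__ == "__main__":
    unittest.main()
```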

Darin: [00:12:38]
And this goes back to the previous episode about time to market, or speed to market. If you are truly releasing independently, and I think that's the key phrase, you have to only be looking at whatever current production is. No ifs, ands, or buts, because the moment you have a dependency on something that's upcoming, now you're the bottleneck.

Viktor: [00:13:02]
Yeah, your stuff cannot be released until the thing that is upcoming is released as well. You need to wait for others. It makes much more sense for me to release my stuff against the production version of something, and even if a minute later you deploy a new release of something else, I will still be communicating with the version of your API that I chose at the time I released my stuff. So I speak with your version two, and when I deploy to production, it speaks with your version two. You deploy version three of your stuff; I'm still speaking with version two of you. Or I can be speaking with version 2.1 of you, or 2.2 or 2.3, but I'm not going to speak with your API 3.0, no matter what you deploy. But once you release 3.0, then I can potentially change my application to accommodate your breaking changes, whatever they are. But most of the time we don't make breaking changes, right? How often do we make breaking... sorry, that was such a silly thing I just said. How often should we be making breaking changes? And the answer is almost never.
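
A minimal sketch of that consumer-side pinning, with hypothetical service names and endpoints:

```python
# A hypothetical consumer-side client that pins the API major version it was
# tested against. The provider deploying /v3 does not change what this calls.
import json
import urllib.request

class UserServiceClient:
    API_VERSION = "v2"  # chosen when this release was built; bumped deliberately

    def __init__(self, base_url: str):
        self.base_url = f"{base_url}/{self.API_VERSION}"  # e.g. http://users.internal/v2

    def get_user(self, user_id: str) -> dict:
        # Non-breaking 2.x changes (new optional fields) flow through safely;
        # breaking changes only reach us when we opt in by changing API_VERSION.
        with urllib.request.urlopen(f"{self.base_url}/users/{user_id}") as resp:
            return json.load(resp)
```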

Darin: [00:14:09]
There has to be a really good reason to have a breaking change. Instead of a breaking change, you should just have a new feature.

Viktor: [00:14:15]
Yeah, you can have new functionality. That's not a breaking change.

Darin: [00:14:19]
New functionality and then you've deprecated the old until it's finally gone.

Viktor: [00:14:23]
I mean, a completely new feature, a new API, a new API endpoint.

Darin: [00:14:27]
Yeah. A new endpoint that replaces the one that was going to be an in-place breaking change. It's like doing SQL updates, right? I don't do updates. I just do inserts. I'm never doing an update of an existing record.

Viktor: [00:14:44]
Actually, that's funny because, and this is probably completely unrelated to our subject, not long ago I had a conversation, with disagreement, with a few people who said that, hey, apiVersion and kind in Kubernetes manifests are among those things that nobody needs, and I kind of freaked out. What do you mean we don't need apiVersion? Do you really want your stuff to break every time Kubernetes makes something new or changes something? That apiVersion in a Kubernetes manifest is probably the most important part of the manifest.
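
For reference, a minimal manifest showing that field; the names and image are placeholders:

```yaml
# The apiVersion pins this manifest to a specific, stable schema;
# without it, any change to the Deployment API could silently break you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: my-service:1.0.0
```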

Darin: [00:15:21]
I'll use an example that we're dealing with right now. At the time of recording, there is a breaking change in the chaos course, or rather, the chaos course is breaking due to some changes with Istio. It's the versioning of the Istio API that the tooling has a problem with, and we've got PRs in and waiting on approvals at the time of recording. But it's like, man, the API is a wonderful thing, but it can also bite you at the same time. So I understand the people that were arguing with you. I understand where they're coming from, but I would rather have the version, because that version is known and I know exactly what I'm going to get, even if it's stupid.

Viktor: [00:16:03]
Exactly. And that problem you just mentioned, correct me if I'm wrong, because you actually made the PR and probably know more about it than me, but the whole problem was that that specific module of Chaos Toolkit is not passing, in all cases, the version that was specified. Sometimes it's just using the latest, and that creates chaos.

Darin: [00:16:24]
Yes. Sort of, but close enough for the recording. So, testing API versions makes a big difference. Forget about testing: API versions matter, always. So if you have a versionless... okay, let's go off on this point for a second, since we're talking about testing. As an application developer, should you be creating versionless APIs?

Viktor: [00:16:48]
If you're selfish, you should make versionless APIs. I think the core of the problem is that for you, as a person managing some application that has an API, it's so much easier not to version that API. Less code. Fewer things to worry about. You're going to make everybody else suffer in the process. Everybody who is consuming your application, your API, is going to suffer, but it is going to be easier for you, and that's why most APIs are not versioned. It's simply harder to version than not to version.

Darin: [00:17:22]
Unless you're following a standard and the standard just sort of makes it happen.

Viktor: [00:17:26]
Yeah, meaning, you know, in Kubernetes, yeah, there is a standard. Everybody follows it. Nobody's questioning it. But if you look at enterprise apps, how often do you see a versioned API?

Darin: [00:17:35]
A very low percentage, just because, hey, we're just using it internally, we don't have to worry about it. Except other people are consuming your service. Now, if you can guarantee, when you publish your API on day one, that it is never going to change, never, then maybe it's okay?

Viktor: [00:17:57]
Yeah, nobody can guarantee that. You can hope for that.

Darin: [00:18:01]
We can think we can, but in reality, that's not true.

Viktor: [00:18:03]
And that also has some less obvious downsides, because if you're going to guarantee that API forever, that basically means you can only be adding new fields to the API, never changing any of the existing fields. And that means you will just be adding and adding fields, and probably some fields are going to be replacements for others, and then you're going to have logic like, hey, if this person did this, then do this and do that, and stuff like that. It's maybe easier to visualize with database schemas, right? If you always just keep adding and adding fields, never replacing them, not because they're truly new fields but because you're trying not to break what was there before, that ends up a huge mess. Always.
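
A small, hypothetical illustration of that accretion; the field names are invented:

```python
# A "versionless" API response after years of additive-only changes:
# old fields can never be removed, so replacements pile up beside them.
user_response = {
    "name": "Ada",            # original field
    "full_name": "Ada L.",    # added later; "name" kept so nothing breaks
    "address": "10 Main St",  # original free-form field
    "address_v2": {           # structured replacement; the old one still served
        "street": "10 Main St",
        "city": "London",
    },
}
```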

Darin: [00:18:53]
You can either choose to version upfront, or, over the life cycle of that service, you're going to be versioning it anyway, just in a very ugly way.

Viktor: [00:19:01]
It costs almost nothing to say, hey, I have this API, I'm just going to put that single line over there that says v1, and I hope never to have anything but v1. Fair enough. But it cost you almost nothing to put that v1 there at the very beginning of your process.
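
And a sketch of what that single line can look like on the provider side, again with hypothetical paths:

```python
# Provider-side routing sketch: the version prefix is one line today,
# and gives you a non-breaking escape hatch (a /v2 tree) if ever needed.
from http.server import BaseHTTPRequestHandler, HTTPServer

API_PREFIX = "/v1"  # the "single line" that costs almost nothing up front

class VersionedApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(API_PREFIX + "/"):
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"status": "ok"}')
        else:
            # Paths outside the versioned tree were never part of the contract.
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VersionedApiHandler).serve_forever()
```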

Darin: [00:19:19]
So, testing microservices. If you're providing a service for other people to consume, you should provide a mock, whether that's an actual mock or your documentation shows the other consumers how to mock your service, because they need to be able to run unit tests. The other reason I think a mock provided by the publisher of the service is important is that they can make the mock smarter. Here's the endpoint: you give me this data, I'm going to give you back this data. You can send me 17 different versions of data, I'm going to give you back 17 different answers, and here's what those answers are going to be, so you can write a better unit test, whatever the scenario may be. Mocks and stubs are similar but different. I think mocks are better because they represent the actual system. How else should we test? We talked about integration testing: always test against whatever is currently in production, whatever production is.

Viktor: [00:20:13]
Correct. There is actually a third option that I haven't mentioned, I think. Mocks, stubs, and whatnot are not always that easy, and you can run your application locally connected to your production or production-like something. That is an okay alternative, as long as we are talking about production, not whatever you are working on right now.

Darin: [00:20:35]
Some places, though, physically can't get to production because it's gated off.

Viktor: [00:20:41]
In this case, when I say production, I mean a production-like system, right? I mean, if you have a cluster that runs exactly the same versions as production, in this context I call that production, even though it's not really the real production.

Darin: [00:20:56]
It is a production-like environment that's always running. So when you are doing a deployment, quote unquote, to production, you're actually sending it to two places. You're sending it to real production, the one the consumers hit, and I don't mean a person, I mean the people using the services, and there's also another production system living in, let's call it, the development network, that people can test against. Its data is being replicated and massaged from live production, just to keep the data up to date. It may or may not be the true live data; certain parts of the dataset, maybe, but if it's PII-type data, maybe that's getting cleansed before it comes over.

Viktor: [00:21:35]
And that production is running the older version of your application. So it is a full production, including your application, but you spin up your stuff somewhere else and connect it to those services, ignoring the production version of yourself.

Darin: [00:21:53]
Because you wouldn't be calling yourself, I hope. If you're calling yourself, you've done other things very wrong. In order to run my new version, I have to make sure the old version is running within the cluster too. Nothing could go wrong with that, right? Oh boy. Okay. So how do you test your microservices? Jump over to the Slack workspace and join the podcast channel. You will see an entry for episode 109, and you can talk about it there.

Darin:
We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and the link to the Slack workspace are at https://www.devopsparadox.com/contact. If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at https://www.devopsparadox.com/ to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox.