#91: The Docker of 2021 is not the same as the Docker of 2016, especially when it comes to the tools around the Kubernetes ecosystem. Today, we talk about why Docker Compose should not be used to manage Kubernetes and how you should be developing Kubernetes-based applications in 2021.
Tobias has a background as a Java backend developer for many years. He has also dabbled in the whole spectrum from mobile devices and frontend to DevOps.
Viktor Farcic is a member of the Google Developer Experts and Docker Captains groups, and a published author.
His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).
He often speaks at community gatherings and conferences (latest can be found here).
His random thoughts and tutorials can be found in his blog TechnologyConversations.com.
If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!
Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!
Docker as a company, if you look at what they're doing like last half a year or something like that, now I would say that they're reinventing themselves in a way, they're focusing a lot on what you just mentioned. ECS. Azure Container Instances. They're focusing on how can we increase developer productivity in a way and staying away from Kubernetes. More or less. Not that they're completely away from Kubernetes, but that is not their focus anymore.
This is DevOps Paradox episode number 91. It's Past Time to Abandon Docker Compose
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darin and Viktor, pretend we know what we're talking about. Most of the time, we mask our ignorance by putting the word DevOps everywhere we can, and mix it with random buzzwords like Kubernetes, serverless, CI/CD, team productivity, islands of happiness, and other fancy expressions that make it sound like we know what we're doing. Occasionally, we invite guests who do know something, but we do not do that often, since they might make us look incompetent. The truth is out there, and there is no way we are going to find it. PS: it's Darin reading this text and feeling embarrassed that Viktor made me do it. Here are your hosts, Darin Pope and Viktor Farcic.
Recently we had a question in our Slack workspace. You are a member of the Slack workspace, right? If you're not, you can go subscribe. https://devopsparadox.com/slack. But we got a question from Tobias and we thought about, okay, maybe we could answer this in Slack, but Viktor jumped and was like, wait a minute, let's do an episode about this and bring Tobias on. So that's what we're doing today. Tobias, thanks for joining us.
Thank you for having me.
So why don't you go ahead, introduce yourself a little bit better than me just saying Tobias. Talk about what your day-to-day job looks like and why this question is important to you.
Viktor, let me give one short answer and then I'll let you give a longer answer.
and finish the episode. No.
The answer's no. Thank you. Well, okay. You just gave my answer. Yeah, you don't want to mix Docker Compose and Kubernetes.
But it is possible with tools like Kompose to translate your Compose files into something else that runs in Kubernetes.
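As a rough illustration of the kind of input Kompose works from, here is a minimal Compose file (the service name and image are just examples); this is what `kompose convert` translates into Kubernetes manifests:

```yaml
# docker-compose.yml -- minimal example input for `kompose convert`
version: "3"
services:
  web:                 # each service typically becomes a Deployment manifest
    image: nginx:1.21
    ports:
      - "8080:80"      # published ports typically become a Kubernetes Service
```

Running `kompose convert` next to this file should emit Deployment and Service YAML, which you can then apply with `kubectl apply -f .`.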
It definitely is. There are cases when that might be a good idea, but personally, I think it's more about prolonging the inevitable than doing the right thing. For a long time people were saying, hey, I'm going to use Swarm. That's great. Swarm, at the time, was more or less on the same level as Kubernetes. Then people kept saying, I'm going to use Swarm, I know that Kubernetes is going to kill it because it's too simple, and now they're continuing with those stories. What I'm trying to say is that there comes a moment when clinging to a tool is not helpful anymore. That's Docker Compose. It can operate against Kubernetes, but we know it's going away. If you keep the Docker Compose format right now and operate it against Kubernetes, you're keeping something that will not exist. You're just postponing the transition towards Helm, Kustomize or whichever other formats are more likely to stay. On top of that, there is one more thing: Docker Compose covers a tiny fraction of what you can or should do in Kubernetes. You're very limited in what you can do.
If the answer is that we should not use Docker Compose inside Kubernetes, then my first question is: okay, but it's very simple for developers to use on their local machines to spin up simple things. As you said, it's a fraction of what you can do in Kubernetes, but that fraction is perhaps easiest to spin up with Docker Compose when you're just a developer. So is it okay to use Docker Compose on your local machine? Because then you're making your local dev environment not as similar to production as it could be, and that could be a problem. You could instead run Kubernetes locally, perhaps, but that could be resource heavy and also too complex for someone who doesn't want to deal with Kubernetes.
True. Actually partly true, I believe. Let's try to do reverse engineering: figure out what you would like, or should, run in production, and then work out how we can make something production-like for developers that is as simple as possible while being as close as possible to production. That's usually the goal everybody has. Try to strike that balance. We shouldn't make it exactly the same as production, because then everybody would need to have a data center below their table, but we also cannot make it completely different from production for the sake of making it extremely simple. Now, assuming that production is Kubernetes, which it doesn't have to be, but for the moment assuming that it is, and let's say that we are using Helm in production for the sake of argument. Is Helm harder to operate than Docker Compose? Realistically, it's not, because from a developer perspective there is no essential difference, ignoring for now that you're used to something. It's very hard to change habits. That's true. But there is no essential difference between, let's say, docker-compose up and helm upgrade. They're completely different commands. That's true. We need to memorize different commands. But other than that, it's a single command that deploys something and updates something.
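To make the docker-compose up versus helm upgrade parallel concrete: assuming a chart already exists (the chart name and value keys below are hypothetical), the developer-facing surface is a small values file plus one command, roughly:

```yaml
# values.yaml -- hypothetical values for an existing chart
image:
  repository: nginx    # which image to run, much like `image:` in a Compose file
  tag: "1.21"
replicaCount: 1
service:
  port: 80             # the port the in-cluster Service exposes
```

Then `helm upgrade --install myapp ./chart -f values.yaml` both deploys the app the first time and updates it afterwards, much as a single `docker-compose up -d` does.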
Where does the helm deploy it to? To your local Kubernetes cluster?
Yes. I'm assuming that we are talking about macOS or Windows, not Linux, right?
Mac in this case for this company, but otherwise it doesn't really matter. I actually tested out different things to run. You can run Kind and you can run MicroK8s.
Yeah, but I would actually go with Docker Desktop.
Yeah. I had a problem that my fan was running loud, and it was a Docker process that took a lot of resources all the time. I measured, and Docker Desktop is actually one of the best resource-wise to use.
Exactly. It's almost transparent. People don't even need to know that it's running there. The only major difference is that you need to go and change the configuration of Docker Desktop and click that button in Kubernetes.
So it's kind of simple, but it is still something that runs in the background, taking 50% of your CPU.
True, but Docker Compose also requires Docker Desktop. So basically it's saying, Hey, use the same tool.
Nah, Docker Compose is not so resource heavy. It's not the same, but yeah, it requires Docker in the background. Yeah.
It requires Docker in the background, and that Docker in the background is more resource heavy when running Kubernetes, simply because there are additional containers running inside Docker Desktop. So yes, you will need more resources. But I would argue, assuming that everything else is fine and that's the only issue, which it might not be, that to me hardware is always the least problematic thing. If a developer has a laptop with less than 16GB, they need to upgrade their laptop independently of what we are talking about. Less than 16GB is unacceptable. With 16GB, and I used 16GB until not long ago, it's okay. You can give 4GB to Docker. It will work fine.
Okay. Then my second question is about the future of Docker Compose, because you said it's going away, but Docker has created a new open source community to develop the Compose specification, and Docker Compose has moved inside Docker, so that instead of writing docker-compose you can write docker compose, and then you can use your Docker Compose files to deploy to the Azure cloud or Amazon ECS. So, something is happening with Docker Compose, but maybe it will not survive, or maybe it will. Maybe you can reuse the specification as a simple specification for a part of your setup.
I probably didn't express myself well. When I say Docker Compose will not survive, what I really, really, really want to say is that almost everything user-facing and Docker related that is meant to do something in Kubernetes is likely going away: Docker Compose related to Kubernetes, Docker in Docker related to Kubernetes, and all that stuff. Going away. There are good Kubernetes-related tools from the Docker company. Most likely the majority of container engines will be containerd, which is, let's say, a subset of Docker, and that is going to stay. But Docker as a company, if you look at what they've been doing over the last half a year or so, I would say that they're reinventing themselves in a way. They're focusing a lot on what you just mentioned: ECS, Azure Container Instances. They're focusing on how to increase developer productivity while staying away from Kubernetes. More or less. Not that they're completely away from Kubernetes, but that is not their focus anymore. So it really depends on what the production setup is. If that is Kubernetes, I'm very skeptical about Docker. If that is ECS, for example, I think Docker is a potentially amazing solution, and not only for developers, but potentially even to manage production, and then that falls down all the way until it reaches development.
So you're saying it's not clear that Kubernetes is always the right answer for production.
It is by no means clear that Kubernetes is always the right answer. There is no such thing as an always right thing for anything. There is no such thing. Now, I do believe that Kubernetes will have the majority of the market share in the future, and I also believe that in the future, with many of these things, we will not even know it's Kubernetes. You might be using Google Cloud Run, that's my favorite example, and there are many others, and you say, hey, there is no Kubernetes, they're not even talking about Kubernetes in their documentation, but Kubernetes is running behind the scenes. So there is visible Kubernetes and there is invisible Kubernetes.
Yeah. So that's what I was thinking with this. What do we present to the developer as our abstraction layer to shield them from complexity? Could it be the Docker Compose specification, or could it, or should it be, something we develop internally at our company, or is it just Helm commands?
It really depends on what production is. Is it Kubernetes or not? Assuming that as a developer you want to be relatively close to production, then yeah, what production is matters a lot. If it is Kubernetes, then Helm or Kustomize are the likely choices for managing production itself, and then you give help to developers. It really depends. By shielding developers, I'm guessing that you really mean, hey, you don't need to go through hundreds of lines of YAML. You just want to run that thing, let's say, on your laptop.
Yeah. So my question is, could you use Compose to translate into a partly complete Helm structure of files and then do the add-ons behind the scenes so that Compose could be like a subset of what you need?
Okay. Good response, I guess. Yep. I said that I had a third question, but I don't remember it anymore. So I don't know.
That's perfectly fine. Darin always has something to jump in.
and I do right now.
This is a bigger question. With the recent announcement of 1.20 getting rid of dockershim, we're only going to have containerd. Well, that's not true, but go with that for a minute. We only have a runtime. That's the better way of saying it.
We only have Kubernetes API to operate now.
To operate with, right.
Let's say it like that.
So what is the real lifetime of Docker Compose? If you're working directly with Kubernetes, it seems like a mismatched tool now, or from 1.20 on it seems like a mismatched tool. Going back to ECS and ACI: if I'm using Docker Compose, I'm basically deploying a monolith to that service. It doesn't have to be a monolith, but it's monolith-like. It's not microservice-like. So where does Docker Compose fit in the 1.20-and-beyond life?
I think it's rather that, and I haven't really been following the Docker Compose integration with Kubernetes for a while now. I discarded it a long time ago, so Tobias, you might need to correct me. But I think that, at least back then, Docker Compose was more about translating the Docker-ish Compose format into Kubernetes API calls. I don't think it required Docker itself for running.
Yeah, I think it's mostly translating to Kubernetes configs behind the scenes, in those tools I have looked at. There is a tool for Podman called podman-compose. That's an implementation of Docker Compose with a Podman backend. I guess it translates to Kubernetes configs as well.
So it's kind of a wrapper around Kubernetes YAML in a way, right?
But you didn't answer my question.
Oh, what was it? I'm old. I forget things.
The question was, does Docker Compose even make sense? Look ahead now that 1.20 is GA. So fast forward 12 months, or whatever the timeframe is for 1.20. Does Docker Compose even make sense if it's doing nothing but being a translation layer? And then Tobias and his crew wrote a wrapper around Docker Compose, which is another translation layer. This is where it starts to get insane, and you should just use Helm or Kustomize or something that's native-ish if you're really working directly with Kubernetes. What I'm leaning towards here is: if you're working directly with Kubernetes, use the tools that work with Kubernetes. If you're working with a layer that's on top of Kubernetes, use the tool that works with that layer.
Exactly. To me that's similar to, again, Google Cloud Run. Use whatever tools they provide, because even though you can operate it as if it's Kubernetes directly, you're losing some of the benefits. Same thing, I guess: if you use Kubernetes, use Kubernetes. Now, what I do like for simplicity reasons, and this is now going off topic completely: the real problem with Kubernetes for developers is, hey, I have Docker Compose with 20 lines of YAML and now you're giving me Kubernetes, something that has 200 or a thousand lines of YAML. I'm guessing that's the main complexity problem. That's the source of the issue. Now there are, and there will be more, relatively new constructs in Kubernetes that simplify all that. What you can do with Flagger or Knative, let's say, ends up being a 20-ish line YAML definition that is equivalent to 10 times more without them. So it's not necessarily that you have to do a Deployment and a Service and an Ingress and what else is there, virtual services for Istio and gateways and all that madness. You can simplify that both in production and in development by changing what you're deploying to Kubernetes.
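As a sketch of that simplification, a Knative Service is a single short manifest (this one is adapted from the Knative hello-world sample) that stands in for the Deployment, Service, Ingress, and autoscaling objects you would otherwise write by hand:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # Knative sample app
          env:
            - name: TARGET
              value: "world"
```

Knative derives routing, revisions, and scale-to-zero from this one object; the equivalent raw-Kubernetes setup can easily run to ten times the YAML.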
Yeah. I am not so read up on this, but is Knative being replaced by Tekton, or is it not the same? Is it another tool?
No. So Knative is about running something that we can call serverless, let's say, in Kubernetes, and Tekton is about building container images. So, very different. No, no. What am I talking about? Sorry, I need to correct myself. Tekton is about doing CI/CD.
Yeah, but you can also build your image inside with Tekton or inside Tekton.
Think of Tekton as being the Kubernetes way to define CI/CD pipelines. Now, CI/CD pipelines have steps: do this, do that. One of those steps, of course, can be to build images, just as in any other CI/CD tool, but not by Tekton itself. Tekton is more like an orchestrator: use this tool to do this, use this tool to do that, and all those tools are in containers.
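A minimal sketch of that orchestrator idea, with hypothetical Task names: a Tekton Pipeline only sequences Tasks, and each Task runs its tooling inside a container:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: build-image    # a Task that runs a containerized image builder
    - name: deploy
      taskRef:
        name: deploy-app     # a Task that runs kubectl/helm inside a container
      runAfter:
        - build              # ordering: deploy only runs after build succeeds
```

The Pipeline itself builds nothing; it orchestrates containers that do the work.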
Can we talk a little bit about how you should build your Docker image? With what tool now?
Oh, yes. Especially since I published a video yesterday, which is not yesterday when you listen to this, but yesterday when we're recording. Yes, the short answer is not with Docker.
But with BuildKit or not with BuildKit?
BuildKit. I need to take another look, because the last time I used BuildKit was a while ago, but let me frame what needs to happen, and then whether it's BuildKit or something else is a different story. You want to build images, and again, going backwards, reverse engineering: in production, for real releases, you want to build them inside containers running inside Kubernetes as part of your processes, CI/CD, whatever process you have, and the only tool you have at your disposal is actually the Kube API. You tell Kubernetes, do this, do that, and "do this, do that" needs to result in building an image. Now, what you cannot do is assume that there is a specific container engine installed. You cannot assume whether you have Docker or you don't have Docker, and you cannot assume that you have access to host resources. You cannot run in privileged mode. It needs to be a solution that runs in a container without having access to your infrastructure, to your nodes. Not infrastructure actually; it can have access to infrastructure, but it cannot have access to system processes on a node. Docker cannot do that. For BuildKit, I need to double check. I'm not sure, but if it can run in a container without being in privileged mode and without mounting any socket, then yes.
The two big ones are Kaniko and Buildah, spelled b-u-i-l-d-a-h. Those are the two sort of standards.
And then I've found there are more. k3c is, I guess, Rancher's solution to use with K3s.
I haven't tried that one. Ah with K3s, yeah, that might be. I haven't tried it. There will be more of those.
Should we try to avoid writing the Dockerfile, like using Bazel and tools to write it in a completely different way?
I like Dockerfile, and I haven't found a real replacement for it that I like. The important part, and this is similar to an argument I had a while ago about whether or not to use Helm, is that it's not only about having a better thing than Dockerfile. It's about having a better thing than Dockerfile that is widely used, because I like the format of Dockerfile, but I like even more that I can find a Dockerfile for almost anything I could ever need. So Dockerfile, for all I care, can stay. It might be improved in the future, but it can stay. Now, the tools that build images using a Dockerfile do not necessarily need to be only Docker. Kaniko can build from a Dockerfile. Buildah can build from a Dockerfile, even though it also allows a second way, and so on and so forth. So I would say that Dockerfile is a requirement to me right now, because everybody understands Dockerfile, no matter whether we're talking about an operator, a sysadmin or a developer, and there is no compelling reason to change it. Nobody has yet shown me one. There will be sometime in the future, but right now I haven't seen a compelling reason not to use it.
So what is your preferred tool between Kaniko and Buildah and perhaps BuildKit?
Between Kaniko and Buildah, I prefer Kaniko, because it requires absolutely no access to any system-level resources. Buildah forces you in a way, and that might have changed, so I might well be wrong. Things change very rapidly. But the last time I was checking, Buildah required you to mount some volumes and stuff like that. It wasn't as bad as with Docker, but you still had to do it. Kaniko is a pure container. Just a container. Nothing else. No volumes. The only volume, actually not even required, could be the secret, but even that is not mandatory. So I prefer Kaniko over Buildah. There are some things missing that are not covered by Kaniko, like multi-architecture builds. You cannot do the equivalent of docker image build, blah, blah, blah, and have it create, like, five different images, one for ARM, one for x86, and so on and so forth. So let's say that 5% of what docker build does is not supported by Kaniko, but other than that, it's very good. There is always an issue. If you're having a first contact with Kaniko, you're going to want to kill yourself after going through their documentation. It's horrible. It's absolutely horrible. But once you get past the suicidal tendencies, it's actually pretty good.
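A sketch of running Kaniko as "a pure container" inside a cluster. The image and the argument flags are real Kaniko usage, while the Git URL, registry, and secret name are placeholders; the only volume is the optional registry-credentials secret mentioned above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/repo.git    # placeholder repo with a Dockerfile
        - --destination=registry.example.com/app:1.0.0   # placeholder registry/tag to push to
      volumeMounts:
        - name: docker-config        # optional: only needed to push to a private registry
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred          # should contain a config.json with registry credentials
```

Note what is absent: no privileged mode, no host socket mount, no assumption about which container engine the node runs.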
Don't let that scare you. I'm going to correct you on Buildah. I actually like Buildah better personally, because there's a lot of magic that happens inside of Kaniko. The thing I do like about Kaniko is that I can do everything in one line. It'll build, tag, and push all in one line. I love that part of Kaniko. But with Buildah, it's just like running all the standard Docker commands. Instead of doing a docker build, you'd do a buildah build. Actually, that's the one statement that's different from the others, but everything else is the same: buildah tag, buildah push. I never had to mount any volumes. I'm going back to correcting you, Viktor: it wasn't a requirement even a year ago.
You were talking from inside Kubernetes?
Yeah. Inside Kubernetes.
I was wrong. It happens.
Yeah. Yeah. It worked fine. I could translate every Docker command. I basically just replaced docker with buildah except for the actual build line. I can't remember what that one was off the top of my head.
Then Buildah it is. One note I would add: no matter whether it's Buildah or Kaniko for production releases, let's say, or in CI/CD processes, on laptops, Docker. No need to change Docker.
That was actually one question I had: if you should try to make your dev environments as similar as possible, then you should reuse the same tool to build the Docker image as in your pipeline or in production, but maybe it's okay to use Docker because it's simpler for the developer.
If Buildah or Kaniko or whichever other tool you use for, let's say, real releases required you to use something other than Dockerfiles, then my answer would be different, but it's the same definition. I don't care whether you build with Kaniko, Buildah, Docker or something else, as long as we are not duplicating code like crazy, and by code I mean anything that is interpreted by machines. So yeah, if Dockerfile works across the board with multiple tools, and one tool is better in one scenario and the other is better in another, then yeah, use two tools. That's perfectly fine with me.
Because the image they produce should be exactly the same.
Not necessarily exactly the same, but same enough, let's say, because I'm assuming that whoever is running docker build on a laptop is doing it for local testing, not to create production releases. When you start creating production releases, you will go through a cycle of deploying to a staging environment, testing, promoting to production, whatever else you're doing. So you will know whether that image, whichever way it's built, is valid or not. But on a laptop, hey, build any way you want. Which is a very different situation from the previous discussion about Docker Compose, because with Docker Compose it was already a question: hey, if I use Helm in production and Docker Compose on a laptop, I need two different sets of definitions of everything. But if Dockerfile works across the board, hey, go crazy and use Buildah in one scenario, Kaniko in another and Docker in a third. That's all okay.
It's okay to replace the tools. It's not okay to replace the definitions that drive the tools.
Did we answer your question or did we make it worse?
I think the answer to the question is that we should try to really run Kubernetes locally and be more similar to production, and start phasing out Docker Compose.
That is my
The only correction I would make is that you should be using Kubernetes across the board if that's your production. Whether you should run it locally or give everybody a namespace in a remote Kubernetes cluster, that's a separate discussion. It doesn't have to be that you run it locally, but you should have your development environment in Kubernetes. Yes.
In fact, I would probably say don't run it locally. I know some people won't like that. They can, if they need to be completely disconnected. That's understandable, but if they're on a VPN and they have access
So if possible, run it in the cloud, in a namespace.
Oh yeah. Give everybody a namespace. Give them the limits that they can reach and off you go.
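"Give them the limits that they can reach" maps to a ResourceQuota per developer namespace; the namespace name and the numbers below are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-tobias      # one namespace per developer; name is illustrative
spec:
  hard:
    requests.cpu: "2"        # total CPU all pods in the namespace may request
    requests.memory: 4Gi
    limits.cpu: "4"          # hard ceiling across the namespace
    limits.memory: 8Gi
    pods: "20"               # cap on the number of pods
```

A LimitRange in the same namespace can additionally set per-pod defaults so that developers don't have to think about resource requests at all.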
Then if they still only have a 4GB or 8GB laptop, that doesn't matter. In a perfect world, everybody's using a Chromebook and they're not running anything locally. You're using something like GitHub Codespaces or Gitpod for your development environment. Your runtimes, for Kubernetes or whatever, are somewhere else. Whether you've got a Chromebook or a maxed-out iMac, or even just your phone, you could do work.
What Darin described might be the future or not, but using remote Kubernetes for development, that's definitely the present. It's really trivial. It's very easy. The only thing is that you might want to educate people: hey, it's so easy for you to deploy an application in this namespace now, and it's so easy to delete it as well. Don't keep it running forever if you don't need it.
The other thing is, if there is a system that is being managed by somebody else, let's say it's a common service, then anytime there's an update of that common service, they could deploy it to all the namespaces. Let's say it was an authentication service that needed to be dropped into everybody's namespace. That would be mean, but it would probably be the right thing to do. It gives you more options versus, hey, it's not working on my laptop. Well, okay, "it's not working in my namespace" is the new variation of that, but it gives you options.
From developer perspective, there is no difference. You're using kubectl or helm or Kustomize or whichever tool you're using to communicate with the Kubernetes cluster that is somewhere. If I just give you now kubeconfig and you don't look at it, you would not know whether it's local or remote.
The other plus of being inside a cluster is that those namespaces can be set up with the same security and all the other things that are going to be required for production. If you're running Kubernetes locally, or just on your own little Raspberry Pi stack, you'll probably give yourself full permissions, and that's a bad thing.
We actually have that problem: kind of, who is responsible for your security holes? On your developer laptop, you are responsible, but as soon as you hit another environment, it's my team that could be responsible to fix it and get angry emails from the security team asking why is this Mongo port open and why is there no password for Mongo, or something like that. So that's maybe not good, but we try to avoid having that responsibility, and putting it on the developer's laptop is easier for us than giving them a namespace that they can use.
Yeah, but if you use that example of why is this port for Mongo open, or whatever the example was, that's a problem you have to face sooner or later, right? If you don't face it in the development phase, you're going to face it in the staging phase, or whatever the phases are, or you're going to face it in production. So having everybody use their laptops to avoid that problem is not avoiding that problem. You're just postponing it.
Okay. So it sounds like you have a lot of homework to think about.
Yeah. It's a lot of different things to consider when we try to improve or move our stack.
Thanks for joining us today Tobias. If people want to contact you, your contact information will be down in the show notes. Also we'll include a link off to a YouTube video that Viktor did back in December about Kaniko in case you're still one of the ones that's building your Docker images inside a Kubernetes cluster using docker build. That's not going to work for much longer people.
We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and the link to the Slack workspace are at https://www.devopsparadox.com/contact. If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at https://www.devopsparadox.com/ to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox.