DOP 91: It's Past Time to Abandon Docker Compose

Posted on Wednesday, Jan 20, 2021

Show Notes

#91: The Docker of 2021 is not the same as the Docker of 2016, especially when it comes to the tools around the Kubernetes ecosystem. Today, we talk about how Docker Compose should not be used to manage Kubernetes and how you should be developing Kubernetes-based applications in 2021.

Kaniko - Building Container Images In Kubernetes Without Docker

Docker Compose on K8s

Guests

Tobias Ericsson

Tobias has many years of background as a Java backend developer. He has also dabbled across the whole spectrum, from mobile devices and frontend to DevOps.

Hosts

Darin Pope

Darin Pope is a developer advocate for CloudBees.

Viktor Farcic

Viktor Farcic is a member of the Google Developer Experts and Docker Captains groups, and a published author.

His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).

He often speaks at community gatherings and conferences (latest can be found here).

He has published The DevOps Toolkit Series, DevOps Paradox and Test-Driven Java Development.

His random thoughts and tutorials can be found in his blog TechnologyConversations.com.

Rate, Review, & Subscribe on Apple Podcasts

If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!

Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!

Sign up to receive an email when new content is released

Transcript

Viktor: [00:00:00]
Docker as a company, if you look at what they've been doing over the last half a year or so, I would say that they're reinventing themselves in a way. They're focusing a lot on what you just mentioned: ECS, Azure Container Instances. They're focusing on how they can increase developer productivity while staying away from Kubernetes, more or less. Not that they're completely away from Kubernetes, but that is not their focus anymore.

Darin:
This is DevOps Paradox episode number 91: It's Past Time to Abandon Docker Compose.

Darin:
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darin and Viktor, pretend we know what we're talking about. Most of the time, we mask our ignorance by putting the word DevOps everywhere we can, and mix it with random buzzwords like Kubernetes, serverless, CI/CD, team productivity, islands of happiness, and other fancy expressions that make it sound like we know what we're doing. Occasionally, we invite guests who do know something, but we do not do that often, since they might make us look incompetent. The truth is out there, and there is no way we are going to find it. PS: it's Darin reading this text and feeling embarrassed that Viktor made me do it. Here are your hosts, Darin Pope and Viktor Farcic.

Darin: [00:01:32]
Recently we had a question in our Slack workspace. You are a member of the Slack workspace, right? If you're not, you can go subscribe at https://devopsparadox.com/slack. But we got a question from Tobias, and we thought, okay, maybe we could answer this in Slack, but Viktor jumped in and said, wait a minute, let's do an episode about this and bring Tobias on. So that's what we're doing today. Tobias, thanks for joining us.

Tobias: [00:02:04]
Thank you for having me.

Darin: [00:02:05]
So why don't you go ahead and introduce yourself a little bit better than me just saying Tobias. Talk about what your day-to-day job looks like and why this question is important to you.

Tobias: [00:02:18]
I have a background as a Java backend developer for many years, and I have dabbled with different things, from front-end JavaScript development and being an Android developer to being a DevOps developer, helping teams be more productive. Now I work as a contractor at a big company in the developer productivity team. We try to help the developers with best practices, guidelines, and tools around the CI/CD pipeline. Recently we've been moving from our physical Jenkins servers and agents to running inside Kubernetes. The developers have Docker Compose files for making life easy on their developer machines. We also use Docker Compose in integration tests running on Jenkins. Now that we are moving to Kubernetes, I'm tasked with the question of what to do with our Docker Compose files. Should we make them run in Kubernetes, or should we do something else instead?

Darin: [00:03:20]
Viktor, let me give one short answer and then I'll let you give a longer answer.

Viktor: [00:03:24]
and finish the episode. No.

Darin: [00:03:26]
The answer's no. Thank you. Well, okay. You just gave my answer. Yeah, you don't want to mix Docker Compose and Kubernetes.

Tobias: [00:03:36]
But it is possible with tools like Kompose to translate your Compose files into something else that runs in Kubernetes.
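
For reference, a minimal sketch of what that translation looks like with Kompose (assuming a docker-compose.yml in the current directory; the output directory name is arbitrary):

    # Generate Kubernetes manifests from the Compose file
    kompose convert -f docker-compose.yml -o k8s/
    # Apply the generated Deployments and Services
    kubectl apply -f k8s/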

Viktor: [00:03:45]
It definitely is. There are cases when that might be a good idea, but personally, I think it's more about postponing the inevitable than doing the right thing. For a long time people were saying, hey, I'm going to use Swarm. That's great. Swarm, at that time, was more or less on the same level as Kubernetes. Then people kept saying, I'm going to use Swarm, I know that Kubernetes is going to kill it because it's too simple, and now they're continuing with those stories. What I'm trying to say is that there comes a moment when clinging to a tool is not helpful anymore. That's Docker Compose. It can operate against Kubernetes, but we know it's going away. If you keep the Docker Compose format right now and operate it against Kubernetes, you're keeping something that will not exist. You're just postponing the transition towards using Helm, Kustomize, or whichever other formats are more likely to stay. On top of that, there is one more thing: Docker Compose covers a tiny fraction of what you can or should do in Kubernetes. You're very limited in what you can do.

Tobias: [00:05:05]
If the answer is that we should not use Docker Compose inside Kubernetes, then my first question is: okay, but it's very simple for developers to use on their local machines to spin up simple things. As you said, it's a fraction of what you can do in Kubernetes, but that fraction is perhaps the easiest to spin up with Docker Compose when you're just a developer. So is it okay to use Docker Compose on your local machine? Because then you're making your local dev environment less similar to production than it could be, and that could be a problem. You could instead run Kubernetes locally, perhaps, but that could be resource heavy and also too complex for someone who doesn't want to deal with Kubernetes.

Viktor: [00:05:54]
True. Actually, partly true, I believe. Let's try to reverse engineer it. Start from what you would like, or what you should, run in production, and then work out how we can make something production-like for developers that is as simple as possible while being as close as possible to production. That's usually the goal everybody has: try to strike that balance. We shouldn't make it exactly the same as production, because then everybody would need to have a data center below their table, but we also cannot make it completely different from production for the sake of making it extremely simple. Now, assuming that production is Kubernetes, which it doesn't have to be, but for the moment assuming that it is, and let's say that we are using Helm in production for the sake of argument: is Helm harder to operate than Docker Compose? Realistically it's not, because from the developer's perspective there is no essential difference, ignoring for now that you're used to something. It's very hard to change habits. That's true. But there is no essential difference between, let's say, docker-compose up and helm upgrade. They're completely different commands, that's true. We need to memorize different commands. But other than that, it's a single command that deploys something and updates something.
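
To make the comparison concrete, here is a rough sketch of the two commands Viktor means (the chart path, release name, and namespace are hypothetical):

    # Docker Compose: bring the stack up, tear it down
    docker-compose up --detach
    docker-compose down

    # Helm: install or upgrade a release, uninstall it
    helm upgrade --install my-app ./chart --namespace dev
    helm uninstall my-app --namespace dev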

Tobias: [00:07:17]
Where does Helm deploy it to? To your local Kubernetes cluster?

Viktor: [00:07:21]
Yes. I'm assuming that we are talking about macOS or Windows, not Linux, right?

Tobias: [00:07:26]
A Mac in this case, for this company, but otherwise it doesn't really matter. I actually tested out different things to run. You can run Kind and you can run MicroK8s.
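
For reference, a local cluster with Kind is a couple of commands (a sketch; the cluster name is arbitrary):

    # Create a throwaway cluster running inside Docker
    kind create cluster --name dev
    # Kind registers a kubectl context named kind-<cluster name>
    kubectl cluster-info --context kind-dev
    # Tear it down when done
    kind delete cluster --name dev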

Viktor: [00:07:37]
Yeah, but I would actually go with Docker Desktop.

Tobias: [00:07:41]
Yeah. I had a problem where my fan was running loud, and it was a Docker process that took a lot of resources all the time. I measured, and Docker Desktop is actually one of the best options resource-wise.

Viktor: [00:07:55]
Exactly. It's almost transparent. People don't even need to know that it's running there. The only major difference is that you need to go into the Docker Desktop configuration and enable Kubernetes.

Tobias: [00:08:07]
So, it's kind of simple, but it is still something that runs in the background and takes 50% of your CPU.

Viktor: [00:08:14]
True, but Docker Compose also requires Docker Desktop. So basically it's saying, Hey, use the same tool.

Tobias: [00:08:22]
Nah, Docker Compose is not so resource heavy. It's not the same, but yeah, it requires Docker in the background. Yeah.

Viktor: [00:08:31]
It requires Docker in the background, and that Docker in the background is more resource heavy when running Kubernetes, simply because there are additional containers running inside Docker Desktop. So yes, you will need more resources, but I would argue that's acceptable, assuming that everything else is fine and that's the only issue, which it might not be. To me, hardware is always the least problematic thing. If a developer has a laptop with less than 16GB, they need to upgrade it independently of what we are talking about. Less than 16GB is unacceptable. I used 16GB until not long ago. 16GB is okay. You can give 4GB to Docker and it will work fine.

Tobias: [00:09:19]
Okay. Then my second question is about the future of Docker Compose, because you said it's going away, but Docker has created a new open source community to develop the Compose specification, and Docker Compose has moved inside Docker, so that instead of writing docker-compose you can write docker compose, and then you can use your Docker Compose files to deploy to Azure Container Instances or Amazon ECS. So something is happening with Docker Compose. Maybe it will not survive, or maybe it will. Maybe you can reuse the specification as a simple specification for a part of your setup.
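
A sketch of the workflow Tobias is describing, as Docker documented it around the time of this episode (the context name is arbitrary):

    # Authenticate against Azure and create an ACI context
    docker login azure
    docker context create aci myacicontext
    # Deploy the Compose file to Azure Container Instances
    docker context use myacicontext
    docker compose up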

Viktor: [00:10:07]
I probably didn't express myself well. When I say Docker Compose will not survive, what I really, really want to say is that almost everything user-facing and Docker-related that is meant to do something in Kubernetes is likely going away: Docker Compose related to Kubernetes, Docker in Docker related to Kubernetes, and all that stuff. There are good Kubernetes-related tools from the Docker company; most likely the majority of container engines will be containerd, which is a subset, let's say, of Docker, and that is going to stay. But Docker as a company, if you look at what they've been doing over the last half a year or so, I would say that they're reinventing themselves in a way. They're focusing a lot on what you just mentioned: ECS, Azure Container Instances. They're focusing on how they can increase developer productivity while staying away from Kubernetes, more or less. Not that they're completely away from Kubernetes, but that is not their focus anymore. So it really depends on what the production setup is. If that is Kubernetes, I'm very skeptical about Docker. If that is ECS, for example, I think Docker is a potentially amazing solution, and not just for developers: an amazing solution to manage even production, potentially, and then that trickles down all the way until it reaches development.

Tobias: [00:11:49]
So you're saying it's not clear that Kubernetes is always the right answer for production.

Viktor: [00:11:56]
It is by no means clear that Kubernetes is always the right answer. There is no such thing as the always-right thing for anything. There is no such thing. Now, I do believe that Kubernetes will have the majority share in the future, and I also believe that in the future, with many of the things we use, we will not even know it's Kubernetes. You might be using Google Cloud Run; that's my favorite example. There are many others. You say, hey, there is no Kubernetes, they're not even talking about Kubernetes in their documentation, but Kubernetes is running behind the scenes. So there is visible Kubernetes and there is invisible Kubernetes.

Tobias: [00:12:38]
Yeah. So that's what I was thinking with this. What do we present to the developer as our abstraction layer to shield them from complexity? Could it be the Docker Compose specification, or should it perhaps be something we develop internally at our company, or is it just Helm commands?

Viktor: [00:12:59]
It really depends on what production is. Is it Kubernetes or is it not? Assuming that you want to be relatively close to production as a developer, then what production is matters a lot. If it is Kubernetes, then Helm or Kustomize are the likely choices for managing production itself, and then you give help to developers from there. It really depends. By shielding developers, I'm guessing that you really mean: hey, you don't need to go through hundreds of lines of YAML. You just want to run that thing, let's say, on your laptop.

Tobias: [00:13:47]
I'm thinking of a front-end JavaScript developer who wants to focus on being an expert in JavaScript, but doesn't really care about Helm commands or Docker commands or whatever. They just want to get the service up and running in the least painful way.

Viktor: [00:14:06]
Yeah. You can hide Helm. You can create a runme.sh that executes Helm. Let me change my story a bit. It can be Compose or it can be X, Y, Z. It doesn't matter what it is. What does matter is that you don't want to write the same code many times. The same way you're not going to write JavaScript code for two copies of the same application, one for production and one for development. You're going to run the same code in development and production, and you're going to use the same configurations and build scripts and so on and so forth, or as similar as possible. So in that sense, let's say that Helm is used to deploy to production. You don't want to rewrite that Helm into Compose.
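
A minimal sketch of that runme.sh idea (the chart path, release name, and values file are hypothetical):

    #!/usr/bin/env bash
    # Hide Helm behind one command a developer can run
    set -euo pipefail

    # Deploy (or update) the app into a per-developer namespace
    helm upgrade --install my-service ./charts/my-service \
      --namespace "${USER}-dev" \
      --create-namespace \
      --values values-dev.yaml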

Tobias: [00:14:52]
Yeah. So my question is, could you use Compose to translate into a partly complete Helm structure of files and then do the add-ons behind the scenes so that Compose could be like a subset of what you need?

Viktor: [00:15:10]
Correct me, please, if I'm wrong, but I guess that you're describing the scenario in which, when a new project starts, the developer initially defines the application and then somebody might tweak it for production, right? Now, I would rather go the other way around. The job of the people managing production, one way or another, is to provide a service to their customers. Everybody's job is to provide a service to somebody, and the customers of those people are, in a way, developers, so their job is to make things easy for developers. You can use Helmfile, for example, to externalize things. You can use Helm charts that extend Helm charts and say, hey, look, for example, this is how we do Helm definitions for Node.js applications. You can just extend this chart and choose any of those 25 variables to tweak it for your needs. Oh, you have a different port than the default? Override it. If you create the base chart and then let developers tweak it, you're doing something very similar to Compose, because Compose also allows you a very limited amount of tweaking where Kubernetes is concerned anyway. So maybe go the other way around and create some sort of templates. Now, the problematic part is when you say, hey, I want to start working on a new project and I want to write it in Rust, and then everybody's like, hey, we never did Rust. But assuming that most projects are in one of a few languages, maybe JavaScript, maybe Java, then Go, whatever is your flavor, it's relatively easy to create templates.
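
One way to sketch the base-chart idea (the chart names, repository URL, and values are all hypothetical; a real base chart would define its own set of overridable values):

    # Chart.yaml of a team's service, extending a shared base chart
    apiVersion: v2
    name: my-service
    version: 0.1.0
    dependencies:
      - name: nodejs-base              # the hypothetical shared chart
        version: 1.x.x
        repository: https://charts.example.com

    # values.yaml: override only what differs from the defaults
    nodejs-base:
      image: registry.example.com/my-service:1.2.3
      port: 8081                       # the non-default port example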

Tobias: [00:17:08]
Okay. Good response, I guess. Yep. I said that I had a third question, but I don't remember it anymore. So I don't know.

Viktor: [00:17:17]
That's perfectly fine. Darin always has something to jump in.

Darin: [00:17:22]
and I do right now.

Viktor: [00:17:24]
Yeah.

Darin: [00:17:25]
This is a bigger question. With the recent announcement of 1.20 getting rid of dockershim, we're only going to have containerd. Well, that's not true, but go with that for a minute. We'll only have a runtime. That's the better way of saying it.

Viktor: [00:17:41]
We'll only have the Kubernetes API to operate now.

Darin: [00:17:44]
To operate with, right.

Viktor: [00:17:45]
Let's say it like that

Darin: [00:17:47]
So what is the real lifetime of Docker Compose? If you're working directly with Kubernetes, it seems like a mismatched tool now, or from 1.20 on it seems like a mismatched tool. Going back to ECS and ACI: if I'm using Docker Compose, I'm basically deploying a monolith to that service. It doesn't have to be a monolith, but it's monolithic-like. It's not microservice-like. So where does Docker Compose fit in the 1.20-and-beyond world?

Viktor: [00:18:21]
I haven't really been following the Docker Compose integration with Kubernetes for a while now. I discarded it a long time ago, so Tobias, you might need to correct me. But I think that, at least back then, Docker Compose was more about translating the Docker-ish Compose format into Kubernetes API calls. I don't think it required Docker itself for running.

Tobias: [00:18:49]
Yeah, I think it's mostly translating to Kubernetes configs behind the scenes, in those tools I have looked at. There is a tool from Podman called podman-compose. That's an implementation of Docker Compose with a Podman backend. I guess it translates to Kubernetes configs as well.

Viktor: [00:19:09]
So it's kind of a wrapper around Kubernetes YAML in a way, right?

Tobias: [00:19:15]
Yeah.

Darin: [00:19:16]
But you didn't answer my question.

Viktor: [00:19:18]
Oh, what was it? I'm old. I forget things.

Darin: [00:19:22]
The question was: does Docker Compose even make sense? Look ahead now that 1.20 is GA. Fast forward 12 months or whatever the timeframe is for 1.20. Does Docker Compose even make sense if it's doing nothing but being a translation layer? And then Tobias and his crew wrote a wrapper around Docker Compose, which will be another translation layer. This is where it starts to get insane, and you should just use Helm or Kustomize or something that's native-ish if you're really working directly with Kubernetes. What I'm leaning towards here is: if you're working directly with Kubernetes, use the tools that work with Kubernetes. If you're working with a layer that's on top of Kubernetes, use the tool that works with that layer.

Viktor: [00:20:09]
Exactly. To me that's similar to, again, Google Cloud Run. Use whatever tools they provide; even though you can operate it as if it's Kubernetes directly, you're losing some of the benefits. Same thing, I guess: if you use Kubernetes, use Kubernetes. Now, what I do like, for simplicity reasons, and this is going off topic completely: the real problem with Kubernetes for developers is, hey, I have Docker Compose with 20 lines of YAML, and now you're giving me Kubernetes, something that has 200 or a thousand lines of YAML. I'm guessing that's the main complexity problem. That's the source of the issue. Now there are, and this will keep increasing, relatively new constructs in Kubernetes that simplify all that. What you can do with Flagger or Knative, let's say, ends up being a 20-ish-line YAML definition that is equivalent to 10 times more without them. So it's not necessarily that you have to do a deployment and a service and an ingress and what else is there, virtual services for Istio and a gateway and all that madness. You can simplify that both in production and development by changing what you're deploying to Kubernetes.
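
For scale, a complete Knative Service is roughly this (the name and image are hypothetical), and Knative derives the Deployment, autoscaling, and routing that would otherwise take hundreds of lines:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: my-app
    spec:
      template:
        spec:
          containers:
            - image: registry.example.com/my-app:1.0.0
              ports:
                - containerPort: 8080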

Tobias: [00:21:39]
Yeah. I am not so read up on this, but is Knative being replaced by Tekton, or is it not the same? Is it another tool?

Viktor: [00:21:49]
No. Knative is about running something that we can call serverless, let's say, in Kubernetes, and Tekton is about building container images. So, very different. No, no. What am I talking about? Sorry, I need to correct myself. Tekton is about doing CI/CD.

Tobias: [00:22:07]
Yeah, but you can also build your image with Tekton, or inside Tekton.

Viktor: [00:22:14]
Think of Tekton as the Kubernetes way to define CI/CD pipelines. Now, CI/CD pipelines have steps: do this, do that. One of those steps can of course be to build images, just as in any other CI/CD tool, but not by Tekton itself. Tekton is more like an orchestrator: use this tool to do this, use that tool to do that, and all those tools are in containers.

Tobias: [00:22:37]
Can we talk a little bit about how you should build your Docker image? With what tool now?

Viktor: [00:22:43]
Oh, yes. Especially since I published a video yesterday, which is not yesterday when you listen to this, but yesterday when we're recording. Yes, the short answer is not with Docker.

Tobias: [00:22:57]
But with BuildKit or not with BuildKit?

Viktor: [00:23:00]
BuildKit. I need to take another look, because the last time I used BuildKit was a while ago, but let me lay out the framework of what needs to happen, and then whether it's BuildKit or something else is a different story. You want to build images, and again, going backwards, reverse engineering: in production, for real releases, you want to build them inside containers running inside Kubernetes, as part of your processes, CI/CD or whatever process you have, and the only tool you have at your disposal is actually the Kube API. You tell Kubernetes: do this, do that, and it needs to result in a built image. Now, what you cannot do is assume that there is a specific container engine installed. You cannot assume whether you have Docker or you don't have Docker, and you cannot assume that you have access to host resources. You cannot run in privileged mode. It needs to be a solution that runs in a container without having access to your nodes. Actually, it can have access to infrastructure, but it cannot have access to system processes on a node. Docker cannot do that. For BuildKit, I need to double check. I'm not sure, but if it can run in a container without being in privileged mode and without mounting any socket, then yes.

Darin: [00:24:25]
The two big ones are Kaniko and Buildah, B-U-I-L-D-A-H. Those are the two sort-of standards.
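
A minimal sketch of Kaniko doing exactly what Viktor describes, a plain pod with no privileges and no Docker socket (the Git context and destination are hypothetical; the executor image and flags are Kaniko's documented ones):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kaniko-build
    spec:
      restartPolicy: Never
      containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --dockerfile=Dockerfile
            - --context=git://github.com/example/repo.git
            - --destination=registry.example.com/my-app:1.0.0
          # Registry credentials would be the optional secret Viktor
          # mentions, mounted at /kaniko/.docker if the push needs auth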

Tobias: [00:24:34]
And then I've found there are more. k3c is, I guess, Rancher's solution to use with K3s.

Viktor: [00:24:43]
I haven't tried that one. Ah with K3s, yeah, that might be. I haven't tried it. There will be more of those.

Tobias: [00:24:53]
Should we try to avoid writing the Dockerfile, like using Bazel and similar tools to write it in a completely different way?

Viktor: [00:25:00]
I like Dockerfile, and I haven't found a real replacement for it that I like. Now, the important part, and this is similar to an argument I had a while ago about whether to use Helm or not: it's not only about having a better thing than Dockerfile. It's about having a better thing than Dockerfile that is widely used, because I like the format of Dockerfile, but I like even more that I can find a Dockerfile for almost anything I could ever need. So Dockerfile, for all I care, can stay. It might be improved in the future, but it can stay. Now, the tools that build images using a Dockerfile do not necessarily need to be only Docker. Kaniko can build from a Dockerfile. Buildah can build from a Dockerfile, even though it also allows you a second way, and so on and so forth. So I would say that Dockerfile is a requirement to me right now, because everybody understands Dockerfile, no matter whether we're talking about an operator, a sysadmin, or a developer, and there is no compelling reason to change it. Nobody has shown me one yet. There will be sometime in the future, but right now, I haven't seen a compelling reason not to use it.

Tobias: [00:26:13]
So what is your preferred tool between Kaniko and Buildah and perhaps BuildKit?

Viktor: [00:26:20]
Between Kaniko and Buildah, I prefer Kaniko, because it requires absolutely no access to any system-level resources. Buildah forces you, in a way, and that might have changed, so I might easily be wrong; things change very rapidly. But the last time I checked, Buildah required you to mount some volumes and stuff like that. It wasn't as bad as with Docker, but you still had to do it. Kaniko is a pure container. Just a container. Nothing else. No volumes. The only volume, which is actually not even required, would be for a secret, but even that is not mandatory. So I prefer Kaniko over Buildah. There are some things missing that are not covered by Kaniko, like multi-architecture builds. You cannot do the equivalent of docker image build blah, blah, blah where it creates five different images, one for ARM, one for x86, and so on and so forth. So let's say that 5% of what docker build does is not supported by Kaniko, but other than that, it's very good. There is always an issue, though. If you're having a first contact with Kaniko, you're going to want to kill yourself after going through their documentation. It's horrible. It's absolutely horrible. But once you pass the suicidal tendencies, it's actually pretty good.

Darin: [00:27:47]
Don't let that scare you. I'm going to correct you on Buildah. I actually like Buildah better personally, because there's a lot of magic that happens inside of Kaniko. The thing I like about Kaniko is that I can do everything in one line. It'll build, tag, and push all in one line. I love that part of Kaniko. But with Buildah, it's just like running all the standard Docker commands. Instead of doing a docker build, you'd do a buildah build. Actually, that's the one command that is different from the others, but everything else is the same: buildah tag, buildah push. I never had to mount any volumes. I'm going back to correcting you, Viktor. It wasn't a requirement even a year ago.

Viktor: [00:28:26]
You were talking from inside Kubernetes?

Darin: [00:28:30]
Yeah. Inside Kubernetes.

Viktor: [00:28:32]
I was wrong. It happens.

Darin: [00:28:34]
Yeah. Yeah. It worked fine. I could translate every Docker command. I basically just replaced docker with buildah, except for the actual build line. I can't remember what that one was off the top of my head.
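
For the record, the Buildah workflow Darin describes looks roughly like this; the build command he couldn't recall is presumably buildah bud ("build-using-dockerfile", later aliased to buildah build), and the image names are hypothetical:

    # The one command that differs from Docker
    buildah bud -t my-app:1.0.0 .
    # Everything else maps one to one
    buildah tag my-app:1.0.0 registry.example.com/my-app:1.0.0
    buildah push registry.example.com/my-app:1.0.0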

Viktor: [00:28:46]
Then Buildah it is. One note I would add: whether it's Buildah or Kaniko, that's for production releases, let's say, or for CI/CD processes. On laptops, Docker. No need to change Docker.

Tobias: [00:29:02]
That was actually one question I had. If you should try to make your dev environments as similar as possible, then you should reuse the same tool to build the Docker image as in your pipeline or in production, but maybe it's okay to use Docker because it's simpler for the developer.

Viktor: [00:29:22]
If Buildah or Kaniko or whichever other tool you use for, let's say, real releases would require you to use something other than Dockerfiles, then my answer would be different, but it's the same definition. I don't care whether you build with Kaniko, Buildah, Docker, or something else, as long as we are not duplicating code like crazy, and by code I mean anything that is interpreted by machines. So yeah, if Dockerfile works across the board with multiple tools, and one tool is better in one scenario and the other is better in another, then yeah, use two tools. That's perfectly fine for me.

Tobias: [00:30:00]
Because the image they produce should be exactly the same.

Viktor: [00:30:03]
Not necessarily exactly the same, but same enough, let's say, because I'm assuming that whoever is running docker build from a laptop is doing it for local testing, not to create production releases. When you start creating production releases, you will go through a cycle of deploying to a staging environment, testing, promoting to production, whatever else you're doing. So you will know whether that image, however it was built, is valid or not. But on a laptop, hey, build it any way you want. Which is a very different situation from the previous discussion about Docker Compose, because with Docker Compose the question was: hey, if I use Helm in production and Docker Compose on a laptop, I need to have two different sets of definitions of everything. But if Dockerfile works across the board, hey, go crazy and use Buildah in one scenario, Kaniko in another, and Docker in a third. That's all okay.

Darin: [00:30:58]
It's okay to replace the tools. It's not okay to replace the definitions that drive the tools.

Viktor: [00:31:03]
Exactly.

Darin: [00:31:04]
Did we answer your question or did we make it worse?

Tobias: [00:31:09]
I think the answer to the question is that we should try to really run Kubernetes locally and be more similar to production, and start phasing out Docker Compose.

Viktor: [00:31:18]
Yes.

Tobias: [00:31:18]
That is my...

Viktor: [00:31:22]
The only correction I would make is that you should be using Kubernetes across the board if that's your production. Whether you should run it locally or give everybody a namespace in a remote Kubernetes cluster, that's a separate discussion. It doesn't have to be that you run it locally, but you should have your development environment in Kubernetes. Yes.

Darin: [00:31:44]
In fact, I would probably say don't run it locally. I know some people won't like that. They can, if they need to be completely disconnected. That's understandable, but if they're on a VPN and they have access...

Tobias: [00:31:56]
So if possible, run it in the cloud, in a namespace.

Viktor: [00:31:59]
Oh yeah. Give everybody a namespace. Give them the limits that they can reach and off you go.
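
A sketch of that per-developer namespace with limits (the namespace name and the numbers are arbitrary):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: dev-quota
      namespace: tobias-dev        # one namespace per developer
    spec:
      hard:
        requests.cpu: "2"
        requests.memory: 4Gi
        limits.cpu: "4"
        limits.memory: 8Gi
        pods: "20"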

Darin: [00:32:06]
Then even if they still have only a 4GB or 8GB laptop, that doesn't matter. In a perfect world, everybody's using a Chromebook and not running anything locally. You're using something like GitHub Codespaces or Gitpod for your development environment. Your runtimes, for Kubernetes or whatever, are somewhere else. Whether you've got a Chromebook or a maxed-out iMac, or even just your phone, you could do work.

Viktor: [00:32:34]
What Darin described might be the future or not, but using remote Kubernetes for development, that's definitely the present. That's really trivial. It's very easy. The only thing is that you might want to educate people and say, hey, it's so easy for you to deploy an application in this namespace now, and it's so easy to delete it as well. Don't keep it running forever if you don't need it.

Darin: [00:32:59]
The other thing is if there is a system that is being managed by somebody else. Let's say it's a common service; then anytime there's an update of that common service, they could deploy it to all the namespaces. Let's say it was an authentication service that needed to be dropped into everybody's namespace. That would be mean, but it would probably be the right thing to do. It gives you more. "It's not working in my namespace" is the new variation of "hey, it's not working on my laptop," but it gives you options.

Viktor: [00:33:30]
From the developer's perspective, there is no difference. You're using kubectl or Helm or Kustomize or whichever tool to communicate with a Kubernetes cluster that is somewhere. If I just give you a kubeconfig and you don't look at it, you would not know whether it's local or remote.

Darin: [00:33:48]
The other plus of being inside a cluster is that those namespaces can be set up with the same security and all the other things that are going to be required for production. If you're running Kubernetes locally, or just on your own little Raspberry Pi stack, you'll probably give yourself full permissions, and that's a bad thing.

Tobias: [00:34:08]
We actually have that kind of problem: who is responsible for your security holes? On your developer laptop, you are responsible, but as soon as you hit another environment, it's my team that could be responsible for fixing it and getting angry emails from the security team about why this Mongo port is open with no password for Mongo, or something like that. So that's maybe not good, but we try to avoid having that responsibility, and putting it on the developer's laptop is easier for us than giving them a namespace they can use.

Viktor: [00:34:44]
Yeah, but if you use that example of why the port for Mongo is open, or whatever the example was, that's a problem you have to face sooner or later. Right? If you don't face it in the development phase, you're going to face it in the staging phase, or whatever the phases are, or you're going to face it in production. So having everybody use their laptops to avoid that problem is not avoiding that problem. You're just postponing it.

Tobias: [00:35:11]
True.

Darin: [00:35:12]
Okay. So it sounds like you have a lot of homework to think about.

Tobias: [00:35:15]
Yeah. It's a lot of different things to consider when we try to improve or move our stack.

Darin: [00:35:22]
Thanks for joining us today, Tobias. If people want to contact you, your contact information will be down in the show notes. We'll also include a link to a YouTube video that Viktor did back in December about Kaniko, in case you're still one of the ones building your Docker images inside a Kubernetes cluster using docker build. That's not going to work for much longer, people.

Darin:
We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and the link to the Slack workspace are at https://www.devopsparadox.com/contact. If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at https://www.devopsparadox.com/ to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox.