#105: The following quote is attributed to Mark Twain, “History does not repeat itself, but it rhymes.” Does this sound familiar? VMs. LXC. Containers. They are all (roughly) the same thing. So why do we keep recreating things that already exist?
If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!
Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!
Viktor Farcic is the Open-Source Program Manager & Developer Relations (Developer Advocate) at Shipa, a member of the Google Developer Experts and Docker Captains groups, and a published author.
His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).
He often speaks at community gatherings and conferences (latest can be found here).
His random thoughts and tutorials can be found in his blog TechnologyConversations.com.
If I had told you, back in the days when we were running on mainframes, that you would need to have your application spread across the globe, you would probably have said that I'm crazy. If I said, "Hey, if the whole server or the cluster goes down, the system should recover from that without downtime," you would also say that I'm crazy. But we need those things, because the scope of what we're doing today is infinitely bigger than what we were doing 20 or 30 years ago.
This is DevOps Paradox episode number 105. Does History Repeat Itself?
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darin and Viktor, pretend we know what we're talking about. Most of the time, we mask our ignorance by putting the word DevOps everywhere we can, and mix it with random buzzwords like Kubernetes, serverless, CI/CD, team productivity, islands of happiness, and other fancy expressions that make it sound like we know what we're doing. Occasionally, we invite guests who do know something, but we do not do that often, since they might make us look incompetent. The truth is out there, and there is no way we are going to find it. PS: it's Darin reading this text and feeling embarrassed that Viktor made me do it. Here are your hosts, Darin Pope and Viktor Farcic.
A couple of episodes ago, we had Jacques Chester on, the author of Knative in Action. If you haven't bought that yet, go buy it. Use the code podparadox20 at Manning Publications and you'll save 40%. The teaser for this episode was something that Jacques said: "Kubernetes itself is a toolkit. It's a collection of things that can be used to construct something. It is not a construction."
Correct. I think it was Kelsey who said that Kubernetes is a platform for building platforms, which dances around the same idea: Kubernetes itself is just a base that is not very usable on its own, and then there is a bunch of tools around it that you need to pick and choose from. Once you're done with all that, you get something potentially usable.
By the way, in case you don't know who Kelsey is, that would be Kelsey Hightower.
Oh, yeah. I make too many assumptions, I think.
Kelsey is at Google and is basically the end-all, be-all of everything Kubernetes?
Yep. The mother and father of everything Kubernetes.
Kelsey, if you are listening, thank you for listening. What we're trying to say here is that Kubernetes itself is just a platform. It is not a PaaS, a platform as a service.
It's not even a platform. I would say it is a base which you can use to build a platform, because if you look at how people use Kubernetes, and people often do not even make that distinction, most of the things people use are not Kubernetes. Those are things that run on top of Kubernetes. If you choose to replace the default networking with, let's say, Istio or Linkerd, that's not Kubernetes, or at least not core Kubernetes. That's something you put on top of Kubernetes. Kubernetes is exceptionally good at extending itself, but alone it is not really... I was about to say it's not really good or usable. That wouldn't be true, because you can get a lot out of Kubernetes as is, but that's usually only in the beginning, right? You need to assemble stuff on top of Kubernetes, or outside Kubernetes, to get something. Another good example: you're usually not going to talk to the Kubernetes API directly with kubectl. You're going to use Helm, Kustomize, or something like that. You need to extend Kubernetes in order to get the platform that you mentioned before.
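As a concrete taste of the "tooling on top of Kubernetes" idea, here is a minimal Kustomize sketch. This is not from the episode; the file layout, application name, and replica count are all hypothetical:

```yaml
# base/kustomization.yaml
# The shared base manifests every team starts from.
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml
# A team-specific extension: reuse the base, then patch it.
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: my-app        # hypothetical application name
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
```

Rendered with something like `kubectl apply -k overlays/production/`, this is the layering being described: core Kubernetes underneath, with a tool such as Kustomize doing the work of turning base manifests into something teams can actually extend.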
Let me set up a scenario for you. The pointy-haired boss, a Dilbert reference, right? The pointy-haired boss comes in and says, okay, we're going to do Kubernetes now, and tag, you're it. You have to do Kubernetes, it needs to be running, and we need to have all of our applications off of the mainframe and running on Kubernetes by next Friday. Today is Friday.
That's great because that means that today you don't need to do anything. You know that you're going to fail so don't even waste your time trying.
Basically, once you have that conversation, that's when you get your resume or CV ready to send out within five minutes.
Exactly. Exactly. I think that's the problem we've been experiencing throughout the whole history of the software industry: it always looks easier than it is to create, I'm going to call it a platform, or something usable, by stitching together different pieces of something. How many people tried, in its early days, to replace, let's say, Jenkins? "Oh, I don't need Jenkins. I can create a shell script that will run in a cron job." I remember those conversations from the early days. The same thing was happening when VMs emerged and when we started talking about infrastructure as code or configuration management. I'm guilty of that as well. We all have that notion: hey, this is relatively easy. I'll just get this and this and this and that, mix it all together, and it's going to be a platform that does something. Those scenarios usually happen in the early days of something, and I think of Kubernetes as still being in its early days. It's not even ten years old, probably five, six, seven years old, something like that, so relatively early days. Then we somehow think that assembling those different pieces will be a couple of weeks' job. It never really is, because there are a lot of pieces. There are many more pieces than it looks like initially. If I just start listing off the top of my head, and I haven't really gone through this exercise myself, if I wanted a really usable Kubernetes platform, I would, of course, need Kubernetes. I would need to add a service mesh to that. I would probably use Helm or Kustomize, and I would need to create some base charts or base manifests that people could start extending. I would probably go down the route of using the Open Application Model as a way to simplify the definition of what my applications are.
I would probably need to go for Open Policy Agent and define policies for what is and isn't allowed. And I almost forgot: I would need some form of RBAC and access control, and I would need some CI/CD that somehow does this stuff, or pipelines, and then I would realize that, yes, I would probably like some form of GitOps, and so on and so forth, right? This is just off the top of my head. There are so many things that you need to assemble for that to work, or for that to work really efficiently and really well, and that's hard. It's not easy. Initially it looks like, okay, all I need is to create a Helm chart that everybody will extend. That's usually the first step. That's what I need. I'm finished.
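For the RBAC piece mentioned above, a minimal sketch of what "access control" looks like in Kubernetes terms. The namespace, role name, and user are hypothetical examples, not anything from the episode:

```yaml
# role.yaml
# Allow read-only access to Pods within one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# rolebinding.yaml
# Grant that role to a (hypothetical) user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
  - kind: User
    name: jane              # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Even this tiny example shows why "assembling the pieces" takes longer than expected: every item on the list (policies, RBAC, CI/CD, GitOps) comes with its own manifests, conventions, and maintenance burden.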
And you're not. In fact, Jacques brought this up when we had him on episode 103. A variation of this was "Heroku is not that hard. I can reimplement that in a weekend."
What he said was, I can point you at, and this wasn't about Heroku, I think he was talking about Cloud Foundry, I can point you at the thousands and tens of thousands of backlog stories that were never implemented, just because. Sure, you might get a happy path out in a weekend that solves a very specific issue, but it's not going to cover everything. I want to go back to what you were saying, though, about "okay, I need OAM, Open Application Model, and OPA, Open Policy Agent, and, and, and." Why are we recreating the mainframe?
Because we always start from scratch and come to conclusions similar to what we had in the past, but in the process we are improving. It is true that we had many of those things in the mainframe, and in many of the things in between the mainframe and today, but it's also true that we evolved. They do not really work based on the same principles. That is different. You cannot see my face right now, but I picture it not as a circle that simply repeats itself, but more like a spiral that is always moving slightly upward.
So I'm going to challenge you on that. Why do you think we're improving? It seems like we're only making things harder, and the improvements are such infinitesimally small increments that we're just wasting time and money.
We are making things harder. That's absolutely true, but we are making things harder because our needs are very different. If I had told you, back in the days when we were running on mainframes, that you would need to have your application spread across the globe, you would probably have said that I'm crazy. If I said, "Hey, if the whole server or the cluster goes down, the system should recover from that without downtime," you would also say that I'm crazy. But we need those things, because the scope of what we're doing today is infinitely bigger than what we were doing 20 or 30 years ago. That change in requirements and expectations increases the complexity, so it is harder because what we do is more complex. But I would challenge the claim that it's harder overall. If you wanted to do exactly the same things we were doing on the mainframe, using today's processes and tech, I would say that is much easier than what we were doing in the past.
So you say it's easier. Okay. Why do you say that?
It's easier today to accomplish the same goals that we had in the past, but the goals are different, and with different goals, different objectives, there is increased complexity. So it is harder to do what we need to do today than it was to do what we had to do in the past. Now I'm not sure whether I'm confusing myself and everybody or not.
Yeah, you're talking in circles right now. I don't see how it can be more complex and harder to do, and be simpler, at the same time.
What I'm trying to say is: if we wanted to do the same things that we were doing in the past, doing those things is easier today than it was back then. But we are not trying to do the same things that we did in the past. We're trying to do much more complicated things, because our needs are different.
If the needs are different... Yeah, see, you're talking in circles. I'm sorry.
That's cool. That's cool, man. Think about this from a very simplistic point of view. If, 10 years ago, you created a website, your personal website, your hobby, stuff like that, and you got 10 visitors, you would probably be happy. Today, if you build your personal website, you already expect thousands. What we're doing today is much bigger. The same thing goes for businesses. Take the number of bank transactions 15 years ago and compare that with the number of transactions an average bank handles today. It's much higher, so the objectives are different. Also, 10 years ago, hey, if your favorite place to shop was down for an hour, that was okay. No problem. You'd come back in an hour, and if it was still down, you'd come back the next day. That wasn't a big deal. We all expected that. Now, if it's down for a minute, you freak out. You go somewhere else. We could allow ourselves a full day of downtime 15 years ago. We cannot allow a minute today, so the needs are different. The objectives are different. Heck, I remember that for many years, at one of my past companies, we would literally stop our services for the whole weekend. During the whole weekend, none of our customers could do anything, because we were making a new release. I don't know if you experienced something like that, but at least that was my case. We literally shut down the system for days, did the upgrade, and then turned it back on, and if we were lucky, and I repeat, if we were lucky, by Monday morning things would work for everybody. Very often we were not lucky, and then the whole release process would continue for a couple more days, and during all that time none of our users or customers could use our system. And that was normal. Nobody complained about it. Nobody said, "Hey, this is unacceptable." Try to do that today.
Try going to, let's say, Google and saying, "Hey, search will not be available for anywhere between a day and a week." What would happen?
People would go to DuckDuckGo.
Exactly. Maybe Google search is not a good example, because people would eventually come back, but you get the point, right?
A better example would be Gmail. "Hey, we're taking Gmail offline for maintenance for the next six hours. You can't check email."
And with the note that, if you're lucky, it could be a couple of days.
How did we end up here? Oh, we were talking about how Kubernetes is just foundational.
We were talking about how things are more complex today, and I'm trying to explain that what we are trying to accomplish today is not the same as what we tried in the past. It's very different.
Let's get down to the brass tacks of that. Why is that? It's the expectations of consumers.
Expectations of consumers are different. That's true for absolutely every industry. What you expected from a car 30 years ago is not the same as what you expect from a car today. You look at a Tesla and you expect that it will fix itself automatically. I mean, it's madness what we expect today compared to what we expected in the past. The car industry had to adapt to those new expectations, or rather, the car industry was adapting, and through that adaptation we got new expectations. It's a chicken-and-egg problem. But it changed. Now, we have a problem in our industry: the speed of change is faster than in probably any other industry, so it's not even about what was happening 30 years ago, but about what was happening 10 years ago. Heck, when I look at my career and what I was doing only 10 years ago, I feel almost ashamed.
Yeah, I mean, 10 years ago, what all was there? The iPhone had been in existence for four years, 10 years ago being 2011. If you were doing any kind of Java stuff, you were probably still deploying to WebSphere, WebLogic, or Tomcat. I was going to say the AS/400 was big, but in reality the AS/400 is still big today.
A majority of us had a couple of releases a year 10 years ago.
So 10 years ago, yeah, probably a couple of releases a year. At that point... golly. Before that, I was contracting at a company where we were doing monthly releases.
Yeah. And you were probably proud of that.
And that was a death march too. Yeah. Oh yeah, we were really happy with that, but it was a death march. Talk about sprinting: we were sprinting every month, just because, going back to your point, it was "okay, we're going to take things down for a few hours or a few days and cross our fingers that it actually comes back up." But now we have this concept, which we've had all along, of containers. It's just that now containers are different. The packaging is different.
Containers are actually a good example of that change in requirements and needs. If we go back to the early days of Docker, it was revolutionary, and just being able to run my application on a server as a container was amazing. We were all freaking out. Now, if somebody tells me, "Hey, you should run this application on this specific server," I tell them they're insane. Now we have schedulers. Now we suddenly do not even care about servers. We care about clusters, and even with clusters we are thinking about how to spread things across multiple clusters. So if you look at the evolution from when containers lifted off and became highly popular, not even when they started, but when they became highly popular, to where we are today with schedulers, which is maybe a span of five years give or take, the requirements and the expectations have already changed drastically.
For people creating new applications, yes. For people trying to lift and shift, and I'm speaking primarily about the people using StatefulSets, no. Their life is still miserable.
Yes, but they're also lifting and shifting because they're getting some benefits, and they're realizing that the expectations are different than when that application was initially designed. So their expectations are also changing. It's just that, very often, when you lift and shift, the bar is so low that the new expectations are higher than the previous ones, but still very low compared to whatever else we can do with new applications.
Let's go back to the pointy-haired boss example. Hey, you have a week. You have to do the Kubernetes thing, and now you don't have your CV ready, so you reluctantly go in and say, okay, what do I have to do? Well, I've got to start building something. That would obviously be horrible. What options exist today to help someone take Kubernetes as that base and turn it into a platform that people could actually use?
I think that we are still in relatively early stages of having a Kubernetes platform, and just a very quick disclaimer: I just joined a company that makes a Kubernetes platform as a product, so I'm biased. But we are in early stages, so there is not much to choose from. This is potentially the most dangerous phase, I think: when the tech is still not mature enough that you trust the platforms on top of it, so you want to get your hands dirty. You might be tempted to say, hey, I'm going to take one existing platform that will do all those things, or not all those things, maybe 80% of what I need, or I will build it myself, and both options have pros and cons because it's still early. The major problem, I think, is that you're going to wake up two years from now so locked into your own in-house platform that you will not be able to escape from it anymore, and you will be going much slower than the industry goes, because obviously you have two people working on it, while a company has hundreds or thousands of people working on that something. That's similar to what we saw, for example, when people were trying to build their own Cloud Foundry. When VMs emerged, everybody tried to build some kind of orchestration of those VMs, and then VMware came along and said, "Hey, I did it. You can use this thingy." But you couldn't, because you were already so entrenched in your own implementations on top of hypervisors that you couldn't follow what VMware was doing with vSphere, but you couldn't switch either, because you had just built your own. You built something very custom that cannot be replaced.
That's when build versus buy goes wrong.
The most dangerous time for the build-versus-buy question is when you're in the middle: you do not have mature buy options yet, but you are no longer in such early days that mature buy options are 10 years away. I think they are a year away. So it's a very short timeframe: the mature options you could buy are months or a year away, and that's shorter than what it will take you to build your own platform. On the other hand, you might not want to, or be able to, wait that long. That's the dangerous middle ground. Too late to start building your own. Maybe a bit too early to buy something.
This is sort of the conundrum I'm in right now. I want to purchase an M1 Mac mini. I want to, but I also know that WWDC is coming up in about two or three months. Is it so painful right now that I just can't wait? My answer today is: it's painful, but it's not that painful.
Exactly. It really depends on the situation and how much you need it. If you need it today and absolutely have to have it, then you're going to go with a traditional Mac, an Intel Mac, and you're going to hate yourself half a year from now. I'm not sure about the situation with the M1. Maybe it's actually a good option to buy now. I'm just guessing.
Yeah, it would be fine, but if I could wait just a couple of months and see what is either shipping day one or going to be shipping within a month and a half afterwards... I still have a functional Mac that I can use today. It's functional, but I could use some of the features that are available in the M1 Mac mini. Am I willing to wait? We were talking about build versus buy. I want to be clear that buy doesn't always necessarily mean paying somebody money. If there is a valid open source project available, I consider that a buy as well.
Yes, that's what I was thinking of as buy as well. Actually, I would even recommend that everybody buy the first something by adopting open source. Buy it for free, and then, when your needs change and you need more out of it, you start thinking about paying somebody money for maybe an enterprise version of that open source. I don't think that in today's industry it should ever be buy-for-money first. I think we're past that. Maybe a service. If it's a service, then yes, because it's a really low investment.
If you can pay with just a credit card, then it's probably worth experimenting with. Let's tell a cautionary tale, though. Mesos is getting close to going to the attic, the Apache Attic. If you've not taken a look at it, that's attic.apache.org. That's where projects go to die, if you will. Mesos versus Kubernetes: we could have argued at some point, probably three years ago, that Mesos was superior. Maybe four years now. My timelines are a little messed up.
And that is yet another dilemma. If you're a very early adopter, then you're running the risk of adopting something that has no future. I was a Swarm user, and what happened to it? We were both heavy Mesos users, and that made perfect sense. Potentially, that was the best choice when we were making that choice.
You would think so, because it's an Apache project.
And it was more mature. Maybe not three years ago like you said, but five years ago, if you compared what Mesos did with what Swarm and Kubernetes did, man, Mesos was the better choice. But we couldn't predict the future, where Mesos would go and where the rest would go, and it failed. That's a problem with early adoption: you're taking a higher risk by adopting something in its early stages. But then you have another problem. People would say, "Hey, of course I'm not going to be an early adopter. I'm not going to adopt something that emerged a few months ago. I'm going to wait, because it's too expensive." But then you run the risk of, "Hey, now I've waited too long. Now I'm behind." I see that all the time. This Kubernetes thing: should I adopt it now? Man, you're too late. I mean, you're never too late, but your competition has already adopted it, and it's close to impossible to figure out that balance between risky adoption and too-late adoption.
I don't think you're ever too late to adopt something. Well, that's not true. If you were trying to adopt either Mesos or Docker Swarm right now, you are too late.
Yes. I mean, "too late" was probably the wrong phrase from my side. What I would say is that there is a tradeoff between the risk of being a very early adopter and the risk of adopting something later than your competition. It is not too late for anybody to adopt Kubernetes, but maybe you're already left behind for not adopting it.
So we started out trying to talk about platforms, ended up at mainframes, and now we're talking about being too early or too late, and there's never a right or perfect answer. I think this is the key part here: just make a decision. Stick with your decision, and then change your decision when you have to.
Yes. When you have more information, you change the decision, which can be the next day or five years later. I remember, a long time ago, and I cannot find the source for this anymore, I read some report that said that most companies lose money by not making decisions, not by making wrong decisions. Wrong decisions almost always, statistically, result in better outcomes than no decisions.
I would believe that, because even though there may be a short-term revenue hit from a wrong decision, if you're paying attention to the data from that wrong decision and making your next decision based on that data, then unless your numbers people are complete idiots, you're probably going to head in a more correct direction.
Exactly. Even if you look at the early adopters of containers and schedulers, the people who adopted Docker Swarm and Mesos, if we exclude the situations where those companies were unable to recognize mistakes and adapt, if you remove those from the equation, those companies are still better off than those who waited until the winner was announced, because you learned a lot from using Mesos. You learned a lot of valuable things that are very, very useful for running Kubernetes. You learn much more about how to run Kubernetes by using Mesos than by not using any scheduler at all.
Absolutely. You learned about the pains of networking. You learned about the joys of failover that came for free. All of those things Mesos had in common with Kubernetes. The implementation was different, but the concepts were the same.
That's one of the things that I think many people don't understand: it's not that we have one tech and then another tech based on completely different principles, something completely different. All those technologies build on top of each other. If you use Mesos and then switch to Kubernetes, you're switching to a different platform, a different technology, that is built on top of the experience of Mesos, just as Docker was built on the experience of using LXC and the namespaces and related kernel primitives that allowed you to create containers before Docker existed. So if you had experience creating and using containers before Docker emerged, that's an asset. That's not a loss. That wasn't a waste of time, or at least not completely. I'm yet to find a company that says, "Hey, I used containers before Docker emerged and I feel bad about that," or "that was the wrong thing." Nobody says that.
Okay. We meandered all over the place today.
Did you expect anything different? Did we ever stay on the subject?
I would hope sometimes we would but I should know better by now.
Knative was on subject, right? I thought...
No, well... episode 103 was Knative, and I was using Jacques as our starting point today: Kubernetes is a toolkit, it's not a construction. So that's where we started.
Within the subject, right? One of the rare occasions that a guest kept us on the subject. Thank you.
I think that's the reason. Anytime we have a guest, we usually stay reasonably on subject, unless they already know us pretty well. Well, yeah, no, that's probably not true. Okay, if you have any comments or questions, go over to the Slack workspace, to the podcast channel for this episode. You'll see a big post there; comment on it. Let us know what you think. If you're trying to build your own platform today, why or why not?
We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and the link to the Slack workspace are at https://www.devopsparadox.com/contact. If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at https://www.devopsparadox.com/ to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox.