DOP 55: How to Set Up and Operate Multiple Kubernetes Clusters at a Global Scale

Posted on Wednesday, May 13, 2020

Show Notes

#55: What’s it like to actually operate multiple Kubernetes clusters at a global scale? We chat with Carlos Sanchez about his experiences and his love for progressive delivery. You may also be surprised at one of his favorite tools to use.

Guests

Carlos Sanchez

Hosts

Darin Pope

Darin Pope is a developer advocate for CloudBees.

Viktor Farcic

Viktor Farcic is a member of the Google Developer Experts and Docker Captains groups, and a published author.

His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).

He often speaks at community gatherings and conferences (latest can be found here).

He has published The DevOps Toolkit Series, DevOps Paradox and Test-Driven Java Development.

His random thoughts and tutorials can be found in his blog TechnologyConversations.com.

Rate, Review, & Subscribe on Apple Podcasts

If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!

Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!

Sign up to receive an email when new content is released

Transcript

Darin Pope 0:00
This is episode number 55 of DevOps Paradox with Darin Pope and Viktor Farcic. I am Darin.

Viktor Farcic 0:05
And I'm Viktor

Darin Pope 0:06
and today we have a guest with us. Viktor, do your job.

Viktor Farcic 0:13
Okay, I thought you meant I was the guest. Yeah, we have Carlos Sanchez. He's... I don't know what to say. He's wasting time in different companies. And now he's mostly

Darin Pope 0:26
Did you say he's wasting time in different companies?

Viktor Farcic 0:29
Yes. I mean wasting companies' time.

Darin Pope 0:31
okay.

Viktor Farcic 0:32
I don't know why they employ him though. So anyways, I thought it would be interesting to bring him in because he knows Kubernetes. He runs it at scale. He can tell us about...what caused you pain yesterday, Carlos?

Carlos Sanchez 0:51
Well, first, thank you for having me. Such a great intro. I love it. Fantastic. I think it's the best one I've ever had. Almost. Well, what causes me pain every day? Kubernetes. Kubernetes causes pain. It is a love and hate relationship.

Viktor Farcic 1:14
Should we get rid of it then?

Carlos Sanchez 1:21
Probably not until we find something else that gives us less pain.

Viktor Farcic 1:26
Okay. So what the pain is really depends on the point of reference.

Carlos Sanchez 1:31
Yes. Yes. There are other things that will give you more pain than that. I mean, remember DC/OS? Mesos?

Viktor Farcic 1:40
Oh yeah. Why are you doing that? I had to go to therapy for months to forget it, and now you bring it back.

Carlos Sanchez 1:50
Yeah. We tend to forget things that are painful, or remember them as some nice thing that was there. But not Mesos.

Darin Pope 2:03
So what is something that's painful today for you? Just in general?

Carlos Sanchez 2:08
Well, I think a lot of times it's just lack of automation. As you keep running more and more things on Kubernetes, you're going to spend time automating things the right way. The first time you do something new, it takes a toll. Let's say moving things to Kubernetes makes you rethink how you're doing things. You have to change things, implement them in different ways. So it requires some serious work.

Viktor Farcic 2:57
When you say automation, are those things that we had to automate before Kubernetes, or simply things we never thought about automating before Kubernetes?

Carlos Sanchez 3:08
Yeah, we had some automation before. And then you have to think about automating things in Kubernetes, and you have to write new stuff, right? You have to create new Helm charts, create new pipelines that deploy Docker images, and then deploy to Kubernetes, and then deploy to multiple Kubernetes clusters. So it's definitely new stuff that you have to prepare.
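
For readers who want to picture the kind of pipeline Carlos is describing, here is a minimal sketch, assuming a hypothetical registry (registry.example.com), chart path (./chart), and release name (myapp); the real pipeline would of course depend on your CI system and chart layout:

```bash
#!/usr/bin/env bash
# Minimal pipeline sketch: build and push a Docker image, then deploy it to a
# cluster with Helm. Registry, chart path, and release name are placeholders.
set -euo pipefail

GIT_SHA="$(git rev-parse --short HEAD)"
IMAGE="registry.example.com/myapp:${GIT_SHA}"

docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# Deploy the freshly built image; --atomic rolls the release back on failure.
helm upgrade --install myapp ./chart \
  --namespace myapp --create-namespace \
  --set image.repository="registry.example.com/myapp" \
  --set image.tag="${GIT_SHA}" \
  --atomic --timeout 10m
```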

Viktor Farcic 3:39
Can I assume then that you're not just lifting and shifting things to Kubernetes?

Carlos Sanchez 3:43
Not yet. I mean, I think that was the big benefit of Kubernetes and Docker, right? I can take whatever I was running before, just package it in a Docker container and throw it at Kubernetes. And now suddenly I have a highly available legacy application that I don't have to worry about when it dies or something; it gets automatically restarted and moves around in the cluster. I think that's what made it really, really popular.

Viktor Farcic 4:17
Yeah, you know, the problem I have with that is that there are obvious benefits to just putting your stuff, whatever it is, into Kubernetes, and you get HA and all the good stuff, right? But on the other hand, I have the impression it's a good excuse for people to keep the old stuff as is. When you make those small improvements, it often prevents you from actually doing the right thing, whatever the right thing is. Oh, I can lift and shift, and suddenly the problem I had is not a problem anymore.

Carlos Sanchez 4:57
Yeah, it's the long-standing issue of, you know, how much time do I spend and how much value do I get in return? Right.

Viktor Farcic 5:11
Exactly

Carlos Sanchez 5:12
And greenfield versus brownfield. Obviously everything new you do on Kubernetes: multi-tenant, multi-region, and all the cloud-native capabilities that everybody loves to talk about today. But there's a lot of existing stuff that you can't just throw away.

Viktor Farcic 5:39
Yeah, that's true, but then why not keep that stuff as is? That's the doubt I have. So I want to move a mainframe to Kubernetes. Why?

Carlos Sanchez 5:54
Well, for me, I think it's a lot of on-premise customers that want to move to cloud. Well, they actually don't want to move to cloud; they want somebody else to run their stuff for them. So it's a move towards SaaS-type solutions, right? You're a customer, you have something on premise, and you realize it's a lot of pain and a lot of work to maintain, to operate, and to do all these operations that we needed to do in the past. Today, the assumption is to not do that. So you ask for companies and products to be SaaS, and everybody's now kind of bought into the SaaS model. Nobody's as afraid as they were before to run something in the cloud. So I think that's the motivation for people to move on-premise stuff to cloud.

Darin Pope 7:03
Well, what if they're not willing to even move to cloud? What if they want to just stay on prem? Why would they even want to bring Kubernetes in house?

Carlos Sanchez 7:12
Well, as Viktor has probably already said at some point, companies may die. That's fine.

Viktor Farcic 7:20
There's nothing wrong with that.

Carlos Sanchez 7:24
It's gone. Yeah. What was it? Survival is optional, right?

Viktor Farcic 7:29
Exactly.

Carlos Sanchez 7:32
I mean, I think we changed. Luckily, in the last 10 years, we went from having to justify moving something to cloud to having to justify running something on premise.

Darin Pope 7:50
I wish that was true.

Carlos Sanchez 7:53
I mean, there are some things, obviously. A lot of people are really picky about some special software, like your source code or your build tools; you don't want that on somebody else's computers. But everything else, everything that you buy or subcontract, more and more you don't want to run yourself.

Darin Pope 8:18
Yeah, see, I've got two clients right now that want to take everything out of cloud and bring it back on premise. As you're listening to this, it's May 13. The reason they want to bring it back on premise is that they want to quit spending money, and they have excess capacity lying around in their data center.

Carlos Sanchez 8:39
What if they didn't have that capacity to start with?

Darin Pope 8:44
Well, their plan was just to go buy more machines, because more machines will be cheaper than what they're spending in cloud.

Viktor Farcic 8:51
But actually, I do think that's true. Cloud is extremely expensive if you use it in exactly the same way as you used on prem, and if you still keep all the people you had when you were managing it on prem. Of course it's more expensive.

Carlos Sanchez 9:12
Well, I think [unintelligible] was the one who said this before, and I think he's pretty right about it: you don't pay AWS or any other company for what you run. They're making money out of what you forgot to turn off.

Viktor Farcic 9:31
Exactly. I like that one.

Carlos Sanchez 9:34
So there's this infinite capacity thing, which is not infinite; we already saw that happening this last month, with COVID and everything that is going on. And it comes at a cost, because people just run stuff and don't turn it off.

Viktor Farcic 9:58
I think that's a similar situation to Kubernetes, where I hear a lot of people complaining: ooh, Kubernetes is much more complicated than what I had before. Just like cloud is more expensive than what I had before. And at the same time they're ignoring that they are now doing stuff that they couldn't do before. Yes, cloud is more expensive, because suddenly I can, and I do, run a hundred times more machines.

Carlos Sanchez 10:33
Yeah, it's the same thing with PCs: oh, now I buy a PC that's twice as fast as my previous one, and now the applications take twice as much CPU. Right?

Viktor Farcic 10:49
Exactly.

Carlos Sanchez 10:50
That's, that's what happens.

Viktor Farcic 10:54
That's a good one.

Darin Pope 10:55
Today, you're running Kubernetes at scale. Give us a sense of what that means, without real numbers and without saying who you work for, which we're not going to do. What is scale to you? It's multi-cluster, right? That's a fair assessment.

Carlos Sanchez 11:17
Yeah, I think, and I've learned more and more about this in the last months, you know, I don't think you can run Kubernetes as only one cluster unless you have an extremely low profile.

Darin Pope 11:33
What do you mean by low profile?

Carlos Sanchez 11:35
Small, small profile.

Darin Pope 11:37
Okay.

Carlos Sanchez 11:38
I mean small as in, I don't know, I wouldn't run clusters with more than 300 or 400 nodes. I wouldn't do that. There are some limitations on Kubernetes on how far your cluster can go, but then you also have the limitations of everything you run on Kubernetes, right? You have service meshes, networking layers, persistent volumes, all these different things. And those will impose other limits than just Kubernetes'. So as soon as you start adding things into the mix, they come with their own limits, and they hit those limits at different points in time. That's one reason. The other reason, which I learned in the last two companies I worked for, is that you don't want to run all your production clusters in the same accounts on your cloud providers, because there are API request limits and you get throttled if you go over them, and then all your production stuff goes down or has issues, whether it is production or development or staging or whatever. You have to spread things across multiple cloud accounts. Otherwise, that would be your single point of failure, not Kubernetes or anything else. So API request limits are a big one. Then you want to run across multiple availability zones, which of course you could do with one cluster, but then you probably want to run in different regions for latency. And also lack of capacity. That's what's happening now with the coronavirus situation. Europe is having more issues getting resources. Microsoft announced that in Azure they're going to cut off some free accounts from getting certain types of VMs because there's not enough capacity for them in Europe. And that brings you to realize that maybe it's not just one cluster in three availability zones in one region. Now you have to consider these things for disaster recovery or growth planning.

Viktor Farcic 14:34
How ready are we as an industry, not your company or my company, but in general? How mature is all of that to begin with? Is it a valid expectation to say: go run in multiple clouds, in multiple regions, and all that stuff?

Carlos Sanchez 14:51
No, I don't think it is. It's like running multiple clusters: you don't have to do it. It depends on your use case, but you have to be aware of the risks you're taking. I mean, you can run everything in one availability zone; you just need to be aware of the risk that AWS goes down in that availability zone, which is very low, and maybe that's enough for you, depending on the associated cost, right?

Viktor Farcic 15:19
I was referring more to the solutions that we have right now for multi-cluster, multi-region. What is that these days, mostly something Istio-based?

Carlos Sanchez 15:32
Not even that, because the ideal world where your Kubernetes cluster just transparently runs across multiple regions... that may exist today, but the reason you're moving to multiple regions is that you want to avoid a single point of failure. So the last thing you want is one thing that runs across multiple regions, because that then becomes your single point of failure, at least today. So I don't think there are good tools today that allow us to run Kubernetes across multiple regions and multiple clusters in a transparent way.

Darin Pope 16:20
Right. That's interesting that you just brought that up, because even though you're wanting to run multiple clusters through a single pane of glass, you still state that's a single point of failure, because there could be a cascading effect: if one region went down and you had them tied together, the other one could go down too, if it's done incorrectly.

Carlos Sanchez 16:42
Yeah, that could happen. I mean, there are some tools that are trying to bring back this... what was it, there was a Kubernetes project to do this multi-cluster thing. That may happen. But I don't know yet if any of the tools that are out there actually work.

Darin Pope 17:13
And from your perspective, running multiple clusters isn't that big of a deal? Or, well, it's just... it's a deal.

Carlos Sanchez 17:20
It is a deal.

Darin Pope 17:21
It's a trade-off. Yeah. But that trade-off is what it is today, because we don't have a good answer today.

Carlos Sanchez 17:29
Yeah, obviously I would love to have some tool that allows me to manage multiple clusters in multiple regions and does it in a nice way so I don't have to. I mean, the other reason to go with multiple clusters is that you want to do progressive delivery of your upgrades. So progressive delivery... what is progressive delivery? That's a great question.

Darin Pope 17:57
So, we're laughing, so you have to tell the story right now. To understand progressive delivery, you're going to tell us what progressive delivery is, but first tell the backstory of where the term came from.

Carlos Sanchez 18:06
Oh, well, I've been talking about progressive delivery for a while. This is a term that started with LaunchDarkly, I think that's the first documented mention, and then James Governor took the term. I love the term, so I totally stole it. And I gave some talks about progressive delivery, canary deployments, rolling updates, this type of thing. So that's what I've been working on.

Darin Pope 18:39
So what is progressive delivery now? Okay, you've told the backstory, right?

Carlos Sanchez 18:42
Progressive delivery is a new term, because we love having new terms for existing things. It basically puts together a number of techniques that allow you to do new deployments, or delivery, continuous delivery, in a way that doesn't affect all your running customers at once: things like blue/green deployments, canary deployments, rolling upgrades, feature flags. These all come under the progressive delivery umbrella. So you can deploy to production users, but not all of them at once; you want to do it progressively in production. It's not about tiers like development or staging or production. You want production users to access new features, new deployments, whatever you build, basically, and you want to expose them in a controlled way: just a few users at a time, or a region at a time, or a percentage at a time. This is what Facebook, Netflix, all these big companies do. Because when you test something, you are only testing for the stuff that you know may break. Until you expose production users to these new features, you're not going to get the really valuable input on whether these features work or not.
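
To make the canary idea concrete, here is a minimal sketch of a replica-based variant, assuming two Deployments already exist (myapp-stable and myapp-canary) whose Pods share the label the Service selects on, so traffic splits roughly by replica count; the container name "app" and the image argument are placeholders:

```bash
#!/usr/bin/env bash
# Replica-based canary sketch. Assumes two Deployments already exist,
# myapp-stable and myapp-canary, whose Pods share the label that the Service
# selects on, so traffic splits roughly in proportion to replica counts.
set -euo pipefail

IMAGE="${1:?usage: canary.sh <image>}"   # e.g. registry.example.com/myapp:v2 (placeholder)

# Put the new version on the canary Deployment and start with a single replica.
kubectl set image deployment/myapp-canary app="${IMAGE}"   # container named "app" (assumption)
kubectl scale deployment/myapp-canary --replicas=1
kubectl rollout status deployment/myapp-canary --timeout=120s

# Progressively shift traffic: grow the canary, shrink the stable Deployment.
for canary_replicas in 2 4 8; do
  sleep 300   # in a real setup: check metrics and alerts before each step
  kubectl scale deployment/myapp-canary --replicas="${canary_replicas}"
  kubectl scale deployment/myapp-stable --replicas="$((10 - canary_replicas))"
done
```

Service meshes and dedicated progressive-delivery controllers do the same traffic shifting with finer control; this sketch only illustrates the principle.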

Darin Pope 20:40
So you're not saying that we need to eliminate lower environments. They're fine to be there, but we're trying to get through them as fast as possible to get to production, so we can really start testing.

Carlos Sanchez 20:53
Yes, it's like testing in production; that's a great way to say it, too. And so you need to have monitoring, obviously. It's not a free-for-all: you have to have monitoring, you have to have alerts, you have to have a bunch of things, so you don't break your users. Right?

Darin Pope 21:12
Right. So basically, you're saying we actually have to do our jobs and do the work in order to attain this.

Carlos Sanchez 21:19
Yes. And based on... I mean, there are a lot of reports, like the DevOps report, the DORA report. You know, the faster you get something to production, the lower your rate of errors and the faster your company will be. So if I write some new feature, I want that feature to reach the user as fast as possible. There are going to be some tests that obviously have to run, because you don't want to break something. Then you're going to use feature flags and canary deployments to get that to a number of users. Those may be your internal users first, or beta users, or some specific percentage of the population, or a region that you want. And then you monitor to make sure that it's working fine before rolling it out to everybody else.
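
As a rough, toy illustration of the percentage-based exposure Carlos mentions (not any particular feature-flag product), a deterministic bucketing check might look like this; the user ID argument and the ROLLOUT_PERCENT variable are made up for the example:

```bash
#!/usr/bin/env bash
# Toy percentage rollout: a user sees the new feature when the hash of their ID
# falls below the rollout percentage. Deterministic, so the same user gets the
# same answer for as long as the percentage stays the same.
set -euo pipefail

USER_ID="${1:?usage: flag.sh <user-id>}"
ROLLOUT_PERCENT="${ROLLOUT_PERCENT:-10}"   # e.g. set via environment/config

bucket=$(( $(printf '%s' "${USER_ID}" | cksum | cut -d' ' -f1) % 100 ))

if (( bucket < ROLLOUT_PERCENT )); then
  echo "user ${USER_ID}: new feature ENABLED (bucket ${bucket})"
else
  echo "user ${USER_ID}: new feature disabled (bucket ${bucket})"
fi
```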

Viktor Farcic 22:22
Did you say initially that progressive delivery would be one of the reasons to have multiple clusters? Or did I misunderstand?

Carlos Sanchez 22:32
Well, when you upgrade Kubernetes, and not only Kubernetes but everything that you're running on Kubernetes, which, again, is all the networking things, your Ingress controllers, your persistence controllers, everything that you run, you don't want to upgrade everybody at the same time. So you have multiple clusters. You can have a staging cluster, but again, it's the same thing as when you are deploying code; it's just infrastructure. You can test it in staging, but maybe you don't catch a problem until you deploy it to production. If all your customers or all your users are in the same cluster, then you have a problem.
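
A minimal sketch of that cluster-by-cluster approach, assuming the component being upgraded is installed with Helm, kubectl contexts named staging, prod-eu, and prod-us, and placeholder release and Deployment names, might look like the following; the point is simply that each cluster is upgraded and verified before the next one is touched:

```bash
#!/usr/bin/env bash
# Upgrade one cluster at a time: staging first, then one production region at a
# time, stopping at the first unhealthy rollout. Context names, the release
# name, and the chart path are all placeholders.
set -euo pipefail

NEW_TAG="${1:?usage: rollout.sh <image-tag>}"
CLUSTERS=(staging prod-eu prod-us)   # kubectl contexts, ordered by blast radius

for ctx in "${CLUSTERS[@]}"; do
  echo ">>> Upgrading cluster: ${ctx}"
  helm upgrade --install myapp ./charts/myapp \
    --kube-context "${ctx}" \
    --set image.tag="${NEW_TAG}" \
    --wait --timeout 10m

  # Fail fast: if this cluster's rollout is not healthy, stop so the remaining
  # clusters keep running the previous version.
  kubectl --context "${ctx}" rollout status deployment/myapp --timeout=300s

  echo ">>> ${ctx} looks healthy; pausing before the next cluster"
  sleep 600   # in a real setup you would check dashboards/alerts here instead
done
```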

Darin Pope 23:19
I think one of the examples right now is Google rolling out Meet into Mail. I saw it in the corporate account this week, or late last week, one or the other. And in my personal Google account, my Google Apps account, it showed up today. So now I've got Meet showing right up in my Mail account, which I'd never seen before. So this is progressive: okay, this is a feature, let's roll it out to our paying clients first, and then let's roll it out to the people who don't pay us any money but still tend to leech off of us, and they get it last.

Viktor Farcic 24:00
Basically what you're saying is that Google is adopting the strategy: let's first experiment on paying customers, and then, if they're fine with it, we're going to give it to those who are not paying. That sounds like the opposite logic of what I would do.

Darin Pope 24:15
Well, I'm assuming that's what happened. Because right now, the priority of getting Meet into the hands of corporate people is probably higher than for people who aren't corporate. The use case is fine, but it is what it is, right?

Carlos Sanchez 24:28
I mean, for that, yeah, definitely, you need feature flags and stuff like that. Because Google, you know, tomorrow they may decide to kill it, and then they have to remove it from everybody. Right?

Viktor Farcic 24:40
So maybe actually, Google never killed anything, but they just put things under feature flags, and everything still lives somewhere.

Carlos Sanchez 24:47
Like Google Reader.

Viktor Farcic 24:50
Exactly.

Darin Pope 24:51
No, I think Google Plus really died. I don't think it's ever going to come back so

Carlos Sanchez 24:54
or Wave

Darin Pope 24:56
Or Wave. I had forgotten about Wave. See, we can go... well, that's just... okay, let's not go there. The list is too long.

Darin Pope 25:10
So, multi-cluster, progressive delivery. What else are you working on that you can talk about? That you're finding interesting? That you're finding challenging?

Carlos Sanchez 25:26
Bash scripting. That's the most challenging part.

Darin Pope 25:31
Say that again?

Carlos Sanchez 25:33
Bash scripts

Darin Pope 25:35
Bash scripts. Why is that the most challenging part?

Carlos Sanchez 25:39
Well, you don't know how hard it is until you try it.

Viktor Farcic 25:44
So is it one of those cases where you have shell scripts that are thousands of lines of...

Carlos Sanchez 25:50
No, not really. I mean, I've been writing bash scripts more than I wanted to for the last few years. In the end, you have to automate things, right? At some point you're like, I'll just write a bash script for this and that, and then, you know, deploy it to Kubernetes or create something in Kubernetes or whatever. You have all these pieces together today, and you just need to write glue between them. A lot of times it is just a bash script. I mean, you have containers that do some initial thing, and it is just a bash script running in a container. So I've seen that for a while now.

Darin Pope 26:35
So are you saying that bash scripts do not scale?

Carlos Sanchez 26:37
No. Bash scripts scale perfectly, as long as you put them in a Docker container and then run it in Kubernetes.

Darin Pope 26:48
Okay, so it sounds like your pain has been on the other side of that.

Carlos Sanchez 26:55
I'm serious about the glue stuff. I mean, you have all these tools now, and you have to put the pieces together to deploy things. When you want to start a Docker container that has some init step, install packages, then you create a script at the start, whatever, then you create other things that glue all of this stuff together. Say you want to get statistics out of a cluster. You can write programs, you can write shell scripts, you can write Python, Golang, whatever, but you need to write it. There's a lot of stuff that you need to write to glue together your clusters, your monitoring, your resources, and that gets better and better with time. But everybody's writing their own thing. There's no "this is the way you get the whole ecosystem of things that you need to run Kubernetes in production; here it is, just run this and it will all work."
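
As one small example of the glue Carlos describes, here is a sketch of the kind of bash entrypoint that often ends up in an init container, assuming made-up host, port, and config paths:

```bash
#!/usr/bin/env bash
# Typical "glue" entrypoint for an init container: wait until a dependency is
# reachable, then render a config file into a shared volume. Host, port, and
# paths are placeholders.
set -euo pipefail

DB_HOST="${DB_HOST:-postgres}"
DB_PORT="${DB_PORT:-5432}"

# Wait for the database to accept TCP connections. Uses bash's /dev/tcp so no
# extra tools (nc, curl) need to be installed in the image.
until (exec 3<>"/dev/tcp/${DB_HOST}/${DB_PORT}") 2>/dev/null; do
  echo "waiting for ${DB_HOST}:${DB_PORT}..."
  sleep 2
done
echo "${DB_HOST}:${DB_PORT} is up"

# Render the application config for the main container to pick up.
cat > /config/app.properties <<EOF
db.url=jdbc:postgresql://${DB_HOST}:${DB_PORT}/myapp
db.pool.size=${DB_POOL_SIZE:-10}
EOF
```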

Viktor Farcic 28:17
That's maybe kind of a problem. Compared to, you know, 20 years ago, we were all using a single big something from somebody; IBM provided the whole solution, or Oracle, or whomever. And now we are at a different extreme, where the whole ecosystem is so segmented that just figuring out what you're going to use from 75 different vendors is probably already a year of your life or something like that.

Carlos Sanchez 28:48
It's very easy. You just have to go to the CNCF poster of projects and pick one.

Viktor Farcic 29:01
One for each type of task.

Carlos Sanchez 29:06
Well, you have to pick all of them at the same time.

Viktor Farcic 29:09
Exactly. Kind of like you cannot use CNCF without adopting all of them. What's the point?

Carlos Sanchez 29:18
Same thing with the CDF, isn't it?

Viktor Farcic 29:22
CVS?

Carlos Sanchez 29:23
CDF. Continuous Delivery Foundation.

Viktor Farcic 29:26
Yeah, we don't have that problem because there are only five of them to pick from, while in CNCF you have 100 or more; I don't know anymore. But yeah, I honestly don't understand how people who did not spend five years with Kubernetes manage to figure it all out.

Carlos Sanchez 29:46
I mean, I think everybody's writing their own Grafana dashboards, their own queries in Prometheus, their own... I mean, at least now the stack for monitoring is kind of standardized. But yeah, I want to know how many resources we are using in these clusters per pod, per container, and per namespace. I actually had to write a kubectl plugin for that.
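
Carlos doesn't share his plugin here, but as a sketch of how such a kubectl plugin can be put together (kubectl discovers any executable named kubectl-&lt;something&gt; on the PATH as a plugin), a minimal per-container usage report might look like this, assuming metrics-server is installed in the cluster:

```bash
#!/usr/bin/env bash
# Save as "kubectl-usage" somewhere on your PATH and make it executable;
# kubectl then discovers it as a plugin, so it runs as "kubectl usage".
# Relies on metrics-server, which backs "kubectl top".
set -euo pipefail

NAMESPACE="${1:-}"   # optional: limit the report to a single namespace

if [[ -n "${NAMESPACE}" ]]; then
  kubectl top pod --containers --namespace "${NAMESPACE}"
else
  # Per-container CPU/memory across all namespaces, grouped by namespace/pod,
  # keeping the header row on top.
  kubectl top pod --containers --all-namespaces \
    | { read -r header; echo "${header}"; sort -k1,1 -k2,2; }
fi
```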

Viktor Farcic 30:20
And it's potentially dangerous, because what we've seen in at least the last few years, and it's getting better but it's still not far from this, is that people invest a lot of time and effort into something, and then a year later somebody comes along: are you still really using that? That thing is not valid anymore; we killed it half a year ago. Yeah.

Carlos Sanchez 30:46
Isn't that what you do every month?

Viktor Farcic 30:48
Exactly. Exactly. Now, if you don't have a monthly rewrite of everything, you're in deep trouble.

Carlos Sanchez 30:58
I'll blame it on people that keep writing books every month.

Viktor Farcic 31:03
Yeah, I don't know any. If you are referring to me, I need four months.

Darin Pope 31:11
Maybe let's stop there, because that's a painful topic right now. Multi-cluster, or multi Kubernetes clusters, progressive delivery: ehhh, big deal. Having to rewrite a book every month? Nope. Nope, that's a little too close to home. Every four months? All right, cool. Carlos, thanks for hanging out today, even if we sort of cut things off. You can always come back. In fact, we'll probably invite you back for one of our Friday live streams, or Thursday live streams, or whenever our live streams happen based on day job work.

Carlos Sanchez 31:45
Okay, maybe it's going to be hard for Viktor to come up with a better introduction than today's, but...

Viktor Farcic 31:52
I'll work on it. This time it was ad hoc. Next time I will have it scripted, yes. I don't know yet what it will be, but it will contain "the magnificent." That's a promise.

Darin Pope 32:11
And also, remember, with the live stream I'm not as concerned. I'm concerned, but I'm not as concerned about your actual swearing. So you could actually go a little more wild on the livestream. If we pre-record it, that's different. Livestream? Hey, all bets are off.

Viktor Farcic 32:28
Yeah, I'm using it to my advantage. I know that you cannot censor me in livestream.

Darin Pope 32:32
Exactly.

Viktor Farcic 32:33
So I could do whatever I want. And it has effect.

Darin Pope 32:36
It does have an effect. I'm not going to say whether it's positive or negative, but it does have an effect. If you are listening via Apple Podcasts today, please go ahead and subscribe and leave a rating and review. All of our contact information, including Twitter and LinkedIn, can be found at https://www.devopsparadox.com/contact. And if you'd like to be notified by email when a new episode is released, you can sign up at https://www.devopsparadox.com/. The signup form is at the top of every page. There are also... good grief... there are links to the Slack workspace, the Voxer account, and how to leave a review in the description of this episode. At some point I'm going to make that much shorter and much more cool. We'll have music, but not today, because Viktor has corrected me: work on one thing until you get it done. I have one thing I have to get done by Sunday, and then I can start the other fun stuff that I want to do. That sucks, Viktor.

Viktor Farcic 32:57
Well, I don't know why I'm giving you advice.

Darin Pope 33:39
You're giving me advice that I'm trying to adhere to, but it's not fun.

Viktor Farcic 33:45
You know, you got it for free.

Carlos Sanchez 33:50
You've got the advice for free? I had to pay for it.

Viktor Farcic 33:53
Yeah, no, it's like drugs: the first time is free. So it's free until you get hooked.

Darin Pope 34:01
Hmm. Okay. So, a quick recap for Carlos. Carlos, how can people reach out and contact you? Is Twitter the best way, or LinkedIn?

Darin Pope 34:14
https://twitter.com/csanchez. Yes, on Twitter. Okay. https://twitter.com/csanchez. I'll make sure that's down in the description of the episode as well. And then you can go bug Carlos over there and ask him: how do I progressively deliver a multi-cluster Kubernetes, how do I deliver multiple Kubernetes clusters across multiple things? And by the way, I need that in 10 words or less. Thanks. I need to turn this report in.

Viktor Farcic 34:41
You could just come to KubeCon and listen to him explain it.

Carlos Sanchez 34:47
Yeah, I wouldn't bet on that happening this year. But

Darin Pope 34:53
Do you have any... so, I know, Carlos, you've done quite a few conferences over the years. Are you lined up to do any virtual ones near term?

Carlos Sanchez 35:04
Yeah, there are several conferences planned, but for some reason they keep getting delayed, and they're planning to do them in the last quarter of the year. We'll see what happens. I wouldn't bet on that either, but...

Darin Pope 35:23
Well, our friend over at Snyk, Patrick Debois, pulled one off a few weeks ago. That was pretty darn amazing given the amount of time he had to pull it together. So hats off to Patrick. I'd like to get him back on and get a post mortem of: what were you thinking, and how did you do it?

Carlos Sanchez 35:45
Yeah, that was great. I watched some of the sessions. It is hard to find time now to go through all of them. But for me, the online conference kind of misses the point. For me, a conference is all about traveling and meeting people in person. So that's...

Darin Pope 36:06
We're not going to be doing that anytime soon.

Viktor Farcic 36:08
In a way, being forced... I know, maybe it's only me, but you know, when it's online, then oohhh, I can see it tomorrow. And then tomorrow is always tomorrow. Yeah. Whereas if you are at a conference, either I go and listen to that talk or I just sit in a hallway.

Carlos Sanchez 36:26
Yeah, you don't have time to watch conferences between the yoga, making your own bread, reading the books,

Darin Pope 36:38
or in my case, editing courses. All right, Carlos, thanks for hanging out today.

Carlos Sanchez 36:45
Thank you.

Darin Pope 36:48
Viktor, have a wonderful rest of your day. And for everybody else listening, thanks again for listening to episode number 55 of DevOps Paradox.