#92: On this podcast, we spend a lot of time discussing backend and infrastructure. Today, we speak with Grady Saccullo, a frontend developer for Cycle, a container orchestration platform. We talk about what it's like to work in a smaller shop in 2021 and how some workflows differ from those at much larger enterprises.
If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!
Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!
Grady is a frontend developer working on building modern PWAs and bringing old codebases up to date with current technologies. He currently works for Cycle, building a modern, efficient container orchestration web portal.
Viktor Farcic is a member of the Google Developer Experts and Docker Captains groups, and a published author.
His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).
He often speaks at community gatherings and conferences (latest can be found here).
His random thoughts and tutorials can be found in his blog TechnologyConversations.com.
This is DevOps Paradox episode number 92: Frontend vs Backend Development in 2021.
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darin and Viktor, pretend we know what we're talking about. Most of the time, we mask our ignorance by putting the word DevOps everywhere we can, and mix it with random buzzwords like Kubernetes, serverless, CI/CD, team productivity, islands of happiness, and other fancy expressions that make it sound like we know what we're doing. Occasionally, we invite guests who do know something, but we do not do that often, since they might make us look incompetent. The truth is out there, and there is no way we are going to find it. PS: it's Darin reading this text and feeling embarrassed that Viktor made me do it. Here are your hosts, Darin Pope and Viktor Farcic.
We spend lots of time in the backend or even on infrastructure. Right, Viktor? That's sort of where we live day-to-day.
Yeah, there are some gremlins doing backends. I know that. I don't know how it gets done, but it gets done somehow.
Are you saying we're gremlins, and if we get wet after midnight, we explode the town? Have you ever watched that movie Gremlins?
Of course. I mean, I'm an old person. Of course I did. Probably like seven times in a row in cinema.
You say you're old. You could be my kid. Come on.
I'm old. That makes you ancient.
That is true. I am typically heavily caffeinated. I don't think I ever have less than two bottles of cold brew in my fridge at any given point in time. You always gotta be prepared for a long night of debugging and figuring out why Safari doesn't want to run your code. That is definitely a struggle.
As you're going through your day, you're making commits, I assume, and they're being auto deployed. What does your workflow look like? I'm assuming you're still doing components in React in some way, shape, or form.
Yeah. So, with how React currently is, we're running React 16.8, I believe, 16.8-point-something. We do local development, and once it's working on your local machine and you're happy with it, you do a build through something like Webpack. Then we containerize that build, push it up, and see if whatever we built locally actually works across a multitude of devices and platforms, whether that's a mobile phone, a tablet, or a desktop. It entails a lot more than just testing locally and expecting it to work everywhere because it works locally. That brings a level of difficulty to the table, because all of a sudden you have a bug that was just never there locally, which got introduced in production because somehow it made it through staging. So it adds a different level of complexity.
What are you using for that testing today?
So TypeScript adds some type safety on top of it. For testing across browsers, we are now introducing the idea of using Selenium, which basically allows us to test individual components inside of React and make sure they're working as expected. But as of now, it's a lot of manual testing. When you're in rapid development, it can be hard to build out all the tests that actually exercise your UI across multiple platforms and multiple browser engines. It becomes this whole other side of development, which the average frontend developer, at least, typically does not have experience with, so it becomes its own little subset of a company.
Is not having that skillset actually hampering you getting your work done in the time that you need to have it done?
The common problem, I believe, is that almost everybody, when starting a project, skips tests, and I understand that, because you want to go fast and demonstrate value for yesterday. But the problem is that after a while, your application becomes almost untestable. Adding tests to an application that was not written with tests from the start is extremely hard, or at least that's my impression of the major problem.
It totally is. You just said it perfectly: you want to start by moving quickly, so you skip all the tests. You just say, let's write code and run it. But then as you start growing an application, like the frontend application I work on, or one of them, it's just over a hundred thousand lines of code currently, and we are missing tests in a good amount of that code. It's because of TypeScript that we've been able to get away with that, since it adds a good amount of safety on top. However, now it's a daunting task to go back and say, Oh my God, how am I going to write tests for not only all my code, but all my UI as well? So even though this is something I have not lived by, I think the best approach is to write tests as you go, for both the UI and the components and code running underneath. Writing Jest tests, as that's what we typically use in React, or one of the test suites we'll use, to test the data going into a component and what's coming back out of it, that's not so bad to write as you go. I personally don't think it adds that much time. Even if you're moving quickly, it's a pretty easy thing to introduce along the way. But UI tests become a way bigger hindrance as you get much deeper into your development process, because all of a sudden there are these little tiny components that you've completely forgotten about, or that got buried nested six levels deep in a folder, which you were never supposed to do to begin with, but it lives there now. Are you actually going to write a test for it, or are you going to forget about it?
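The data-in/data-out idea Grady describes is easy to picture as code. This is a minimal sketch, not Cycle's actual code: the component names and data shapes are invented, and it uses a plain assertion instead of a full Jest suite so it runs standalone.

```typescript
// Hypothetical example of the "data going into a component and what's
// coming back out" style of test. All names here are illustrative.

interface ContainerRow {
  name: string;
  state: "running" | "stopped";
}

// A pure helper a React component might call: turns raw API data
// into the props a (hypothetical) table component expects.
function toRows(raw: { id: string; running: boolean }[]): ContainerRow[] {
  return raw.map((c) => ({
    name: c.id,
    state: c.running ? "running" : "stopped",
  }));
}

// Checking inputs against outputs, in the spirit of a Jest test.
const rows = toRows([
  { id: "api", running: true },
  { id: "db", running: false },
]);
console.assert(rows[0].state === "running" && rows[1].state === "stopped");
```

Because the logic lives in a pure function rather than inside the component, this kind of test stays cheap to write even while moving quickly, which is Grady's point.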
That doesn't sound much different from what we deal with in the backend, because we write tests and forget about them all the time until they blow up.
From that aspect, I think backend and frontend are very similar. You have smaller tests that test specific functions, let's say, and then you have tests that test it from outside: how does it behave when somebody interacts with it? Philosophically speaking, it doesn't matter whether that's somebody interacting with a UI or a process interacting with my backend, and that's usually the most painful part to do. Especially when we start talking about, Hey, since I didn't write anything but unit tests, and that's if I'm lucky, now I need to bring up the whole system every time I want to test something, because creating mocks and stubs was not really part of my initial plan; I had no need for it. Who is going to mock those tens or hundreds of lines of code now? Nobody. And then it's, Hey, I need to spin up a cluster whenever I push something to the Git repo. It's not going to happen.
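Viktor's point about mocks and stubs can be sketched in a few lines. This is purely illustrative; the interface and function names are invented, but the shape is the standard one: depend on an interface, so a stub can stand in for the system you would otherwise have to spin up.

```typescript
// Illustrative sketch: with an injected dependency, "outside" behavior
// can be tested with a stub instead of bringing up the whole system.
// All names here are hypothetical.

interface Backend {
  fetchStatus(id: string): string;
}

// Code under test depends on the interface, not on a live cluster.
function describeDeployment(id: string, backend: Backend): string {
  return `deployment ${id} is ${backend.fetchStatus(id)}`;
}

// A stub standing in for the real backend.
const stub: Backend = { fetchStatus: () => "healthy" };

console.assert(describeDeployment("portal", stub) === "deployment portal is healthy");
```

Retrofitting this structure onto code that never planned for it is exactly the "who is going to mock those hundreds of lines now" problem Viktor describes.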
That's exactly the same mindset, or ideology, for why it is important to write tests, and also what happens when it doesn't: it hits the fan, so to speak. Things blow up, and you sit there wondering why this happened, except you're really not wondering at all, because you know exactly why it happened. You never wrote a unit test for it. You never actually tested the component.
The worst thing is that you usually realize the importance of tests at the moment when you have the least desire to write them. Something goes terribly wrong in production. The first thing you say is, Oh, I should have tested that, I should have confirmed that. The second thing you start thinking is, I'm not going to write tests now, because I need to fix it immediately. Step number three, it gets forgotten.
Those are the exact one, two, three steps that frontend developers take as well. You fix it immediately, and there are already six more items in your backlog you've got to get to, so tests take another backseat onto a whole other page of backlog, which you just completely forget about and drop in the trash bin every few days.
Is that an indictment against our profession?
I wouldn't say that necessarily. I would say it comes down to how much time we're actually given to do the items we need to tackle. Especially in small companies, when you're moving so quickly, sometimes one task has to take precedence, and tests sadly tend not to be one of the things pushed to the forefront, even though they really should be. Tests make an overall better application, not only for yourself to work on in the future, but for your users and the end experience. But no, I hope that is not an indictment against coders in general, no matter whether it's frontend or backend or whatever you're doing.
So as you're going through your day, you sit down in front of your, I'm assuming, MacBook, because you're a frontend developer. Okay, got one. You open up, let's see, can I guess your IDE? Visual Studio Code.
That would be correct. VS Code.
Okay. Two for two. What would be the third thing? You're probably using GitHub.
We are actually using GitLab.
Ah, I missed. Okay. Strike one. Are you using hosted GitLab?
We are using a self-hosted GitLab. We also push to GitHub for a couple of our projects that we want in the public sphere, as well as NPM. So it's GitLab for projects that are run internally, and GitHub and NPM for the public sphere.
Okay, I won't send us down the rest of that path. So you make your commit. Let's say it's in GitLab because it's the private stuff, not the public side. How does it end up where it ends up? Does it go through Cycle? Does it go through GitLab? What's your packaging process and then where does it end up running?
So, the way we go about our process at Cycle: let's say I have task A for the day. It's going to be rebuilding this component, because the old one's out of date and no longer has our current UI. It starts with creating a branch off of whatever our current development branch is, then fixing, changing, or updating the component, merging it in, and going through the whole process of getting the merge request approved. Then we push it up to our internal GitLab and create a build of our portal locally. That build creates a Docker image, and from there we push it up to Cycle, because we are able to self-host Cycle. Cycle allows us to deploy the build of our platform and portal together to our dev side of Cycle. Then we get to test this version of Cycle that was deployed through Cycle, and if it checks out and everything works, we push it to staging. From staging we run another subset of tests that are more generalized across the entire platform, just in case we touched something and created a side effect that was never intended, and from there we push it up to production.
What is that cycle time typically? No pun intended. What is that time like for you? So something hits dev. How long is it from dev until it's actually in prod?
So depending on what's going on, that can be anywhere from a few days, if we're working on a slightly bigger update, down to maybe 45 minutes if it's a critical bug in the UI that needs to get pushed out and fixed, where just one little line of code slipped through. It's, Hey, did I ever check if this was undefined? No, I didn't, and even though TypeScript should have caught it, it didn't. It passed through, got through dev, it's on staging, we catch it in staging, and then it gets fixed. It goes back to dev, and from the time it's back on dev to when it can be deployed to production can be a matter of maybe 30 minutes.
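The "did I ever check if this was undefined" bug is a classic one. A hedged sketch of how it slips past TypeScript: the compiler's null checks only help when the types are accurate, so data typed as `any` (a raw API response, say) bypasses them entirely. The names below are invented, not Cycle's actual code.

```typescript
// Hypothetical example of an undefined sneaking past TypeScript.
// Because `user` is typed `any`, the compiler cannot flag the access.

function renderName(user: any): string {
  // The risky version compiles fine but throws at runtime
  // whenever user.profile is undefined:
  //   return user.profile.name;

  // Defensive version: optional chaining with a fallback.
  return user?.profile?.name ?? "unknown";
}

console.assert(renderName({ profile: { name: "grady" } }) === "grady");
console.assert(renderName({}) === "unknown");
```

With `strict` (including `strictNullChecks`) enabled and the response properly typed instead of `any`, the compiler would force the guard before the code ever reaches staging.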
That's interesting. If something made it through dev and it hits stage and stage fails, you start the cycle over again. Why is that?
The idea is we want to be fully atomic in how we create every build of Cycle. Everything needs to be rebuilt from scratch every time we create a portal build. Our portal is packaged with our platform, so the backend and frontend are packaged together through a Dockerfile. We create a Docker image, and then it gets deployed through Cycle. If something breaks, we go through that whole process again to ensure that whatever we push out has no correlation, so to speak, with the last version that was pushed out. Everything's new. We don't carry over any unintended changes that were made in the previous version because of something that happened during that deploy. Everything is deployed fresh whenever a bug is found or an update is made.
Basically, if it fails anywhere along the line, it's full stop. Start over again.
Yeah, it totally is. That way we can really ensure we're fixing what we need to fix, and we're not just making one spur-of-the-moment change and continuing forward. We like the idea of starting fresh from scratch, testing, and going through that whole process again.
What is your merge request process like?
I am actually currently one of the devs who gets to look over merge requests, which is both a joyous and frightening thing at the same time. Again, let's say I create a branch off dev to make a new feature. I modify that thing or add the feature, then I make a merge request back into dev. We hand review all of the code currently. We are working on getting pipelines set up within GitLab to make that process easier, but for now, I, or whoever else is looking over merge requests, will pull down that branch, test it locally, and spend 15 minutes trying out whatever was changed, fixed, or added. If everything looks good, we merge it into dev, and from there we'll create a build and deploy to dev. We normally do dev builds every few days, depending on what's happening.
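For readers unfamiliar with the GitLab pipelines Grady mentions wanting to set up, they are defined in a `.gitlab-ci.yml` file at the repo root. The transcript doesn't show Cycle's configuration, so this is a generic, hypothetical sketch of what automating the "test, then build an image" steps might look like; stage names, the Node version, and the `dev` branch rule are all assumptions.

```yaml
# Hypothetical .gitlab-ci.yml sketch, not Cycle's actual pipeline.
stages:
  - test
  - build

unit-tests:
  stage: test
  image: node:16
  script:
    - npm ci
    - npm test

build-image:
  stage: build
  script:
    - docker build -t portal:$CI_COMMIT_SHORT_SHA .
  only:
    - dev
```

A pipeline like this would run the automated checks on every merge request before the hand review, rather than replacing the review itself.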
So when you do the packaging and that build goes to dev, does that same image get promoted? You're being very specific in how you describe your branching strategy. Are you following GitLab Flow? Are you following GitHub Flow? What does that workflow look like for you?
Every time we do a build, we pull in all of the assets, everything necessary for building that Docker image, and we rebuild that image fresh every time. Granted, if nothing has changed in the Dockerfile, the steps are cached up to the point where it actually builds the portal with Webpack. So the whole deployment of our portal basically hinges on us making a build with a version. The way we do it right now is we version the build with the specific year, month, and day, and then the build number for that day. At times we can have, let's say, 15 or 20 builds in a day. That's probably the most I've ever seen, which is pretty astronomical, but when you're chasing some little bug in production, that can happen. From there, once it gets pushed up to dev, and let's say it successfully passes dev, we push it up to staging, and it successfully passes staging, then I will create a merge branch back into master and do a final master build. So the build that goes to dev is actually not the same image as the build that goes into production, even if nothing's changed between the two images. But again, that goes back to the idea of being atomic and always making sure we aren't introducing some unintended consequence along the way.
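The versioning scheme Grady describes is simple to sketch. The components come straight from the transcript (year, month, day, build number of the day); the exact separators and zero-padding are my guesses, since he doesn't spell out the format.

```typescript
// Sketch of a "year.month.day-buildNumber" version string.
// Separators and padding are assumptions, not Cycle's exact format.

function buildVersion(date: Date, buildOfDay: number): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  return `${y}.${m}.${d}-${buildOfDay}`;
}

console.assert(buildVersion(new Date(Date.UTC(2021, 5, 14)), 3) === "2021.06.14-3");
```

Calendar versioning like this trades semantic meaning for an at-a-glance answer to "when was this built," which suits a team shipping up to 15 or 20 builds a day.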
Here's a trick question then. How do you know that what you are deploying to production is what you've tested?
We have to rely heavily on the idea that no one has changed the version number of the actual package of our platform. Now, if someone wanted to be a little sneaky, they could in theory change the platform version inside of our dev while someone's doing a push to staging, and that could potentially be problematic for us, because we would have differences between what we initially wanted to push and what actually goes out to production. So at the moment, it's a little bit of the honor system, which is kind of a scary idea, but at the same time, we are a small, fast team, so we rely on each other. I might pull other devs in to do testing, and if we are pushing an update to production, or there is a substantial update going out to production that we pushed onto dev earlier that day, the entire team is going to know about it, at least on the development side. So it's a lot of the honor system, which is, again, a little bit scary.
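One common remedy for the honor-system gap, promoting by content digest rather than by version label, is worth sketching here. To be clear, this is not something Cycle does per the transcript; it's a generic illustration, with a hash of the image bytes standing in for a registry digest.

```typescript
// Illustrative sketch of digest-based promotion, not Cycle's process:
// record the digest that passed staging, then refuse to deploy
// anything whose bytes differ, regardless of its version label.

import { createHash } from "crypto";

// Stand-in for a registry image digest: a hash of the image contents.
function digestOf(imageBytes: Buffer): string {
  return "sha256:" + createHash("sha256").update(imageBytes).digest("hex");
}

function canPromote(testedDigest: string, candidate: Buffer): boolean {
  // Promotion is allowed only if the bytes are exactly what was tested.
  return digestOf(candidate) === testedDigest;
}

const image = Buffer.from("portal-build");
const tested = digestOf(image);
console.assert(canPromote(tested, image));
console.assert(!canPromote(tested, Buffer.from("tampered")));
```

A sneaky (or accidental) version bump like the one Grady describes would change the digest, so the mismatch would be caught mechanically instead of by trust.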
Let's imagine that the company is successful beyond dreams and we are talking about hundreds or thousands of developers. Would you still apply the same logic? Process?
At that point, no. The process would need to be rethought a little bit. The main difference, I would say, is creating release branches off of dev. If I wanted to release a feature to production that day and it has to go through all these processes, I'd most likely create a release branch off dev, do my builds off that, and if it's successful and everything passes staging, merge that release branch into master, and then back into dev if I made any changes along the way. Obviously, in that case I wouldn't, since it's a release branch, but the point is I would basically branch off dev and have a single source of truth in my Git tree, so I know for a fact that nothing has changed between any of my builds on top of dev. That's most likely how that kind of model could scale. But at the moment, we're more focused on how we can make things happen quickly and get our updates out to our customers as fast as possible. You are totally right, though, if that's what you're implying: that model would not scale correctly. It would, for sure, not work well with a large team.
And it shouldn't. Just let me clarify my intentions. I don't believe that a five-person company should start as if it were 1,000 people, as if it were Google or something like that. That's just silly. That means you go bankrupt before you even start. Do you do some kind of user testing, feature toggles, enabling something for some users and seeing how it behaves, or are you basically finished once you deploy?
Currently we do not do any sort of beta testing. The reasoning is that the production build of the platform deals with container orchestration, and we're dealing with our clients' livelihoods, their companies. If we push an update with a beta feature that gets enabled by someone who wasn't supposed to enable it, or if they enable something by accident and it causes all of their environments on Cycle to go down, that could be a massive issue. You could lose, let's say, your entire API. Your entire backend goes down because someone enabled a beta feature. So everything we push to production is fully in production, and for the time being, that does seem to be the way it's going to go, just because, again, we're dealing with something very sensitive, given that we're helping our clients run and orchestrate their applications in the cloud.
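For context on the feature toggles Viktor asks about, they usually boil down to a per-user flag check like the one below. This is purely illustrative; the transcript is clear that Cycle deliberately does not ship gated beta features.

```typescript
// Generic feature-toggle sketch (the concept Viktor raises,
// not anything Cycle ships). Names are hypothetical.

type Flags = Record<string, Set<string>>; // feature -> enrolled user ids

function isEnabled(flags: Flags, feature: string, userId: string): boolean {
  return flags[feature]?.has(userId) ?? false;
}

const flags: Flags = { "new-dashboard": new Set(["user-1"]) };
console.assert(isEnabled(flags, "new-dashboard", "user-1"));
console.assert(!isEnabled(flags, "new-dashboard", "user-2"));
```

Grady's objection is precisely that the `true` branch of a check like this can take down a customer's infrastructure if it is flipped by accident, which is a stronger failure mode than a cosmetic UI experiment.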
But then I could play devil's advocate and turn it around and say, Hey, if something does go wrong, then a hundred percent of users are affected, in a way. I'm not judging, just trying to spin it a bit.
You are totally right there. Our principle is that we allow our customers to forget about these potential issues, whereas with some of the other orchestration platforms out there, you'd have to go through and manually update and upgrade everything. With us, you don't need to worry about that. So if there is a security vulnerability, or something goes down and causes a DNS issue where it's not resolving the correct namespace or hostname, we will fix that issue as fast as possible and push an update to the platform, and that update is immediately distributed to everyone living on top of Cycle. So you are right: if we do have an issue, it affects everyone. However, whatever the issue is, it can hopefully be fixed within a matter of less than a couple of hours, and everybody receives that update simultaneously without ever needing to update the platform on their own.
So we've been dancing around this part. What is Cycle?
Cycle is a container orchestration platform that abstracts away a lot of the complexity that comes along with something like Kubernetes, for smaller companies. We really aim to build a product that empowers small companies to do what they do best. We love saying that to our customers, but it is true. We like to build a product where a company of 10 people doesn't need a DevOps team to run an orchestration platform; all of their devs can understand and interact with it perfectly. But if that company does scale and wants a lot more fine-grained control, we give them that ability as well through our API. We're simplifying the container orchestration process, basically.
Okay. If people wanted to follow you, where are the best places to find you?
For me, it's going to be LinkedIn. My name is Grady Saccullo. You can also follow me on Twitter; I'm starting to become a little more active there, or that's the goal. But LinkedIn is probably the best place if you want to reach out to me, and if you have any questions, feel free.
We will have Grady's links down below in the show notes. Grady, thanks for hanging out with us today.
Most definitely. Yeah. It's been a pleasure guys and I'm very thankful that you guys allowed me to come onto the podcast.
We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and the link to the Slack workspace are at https://www.devopsparadox.com/contact. If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at https://www.devopsparadox.com/ to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox.