Kubernetes Tutorial on Rolling Deployments

In my previous blog posts, I talked about how to deploy a Spring Boot app in a Kubernetes cluster. You already know the fundamental concepts of K8s (Kubernetes), such as Nodes, Pods, and Services. In this Kubernetes tutorial, I will introduce you to a new one: rolling updates.

Users regularly have the same expectation: that their applications be available all the time. No matter how many features they want added, their apps should never have any downtime. That makes sense, but it isn't easy: as developers, we may need to deploy new versions of an app several times a day. If we want to avoid the classic message, “We are under maintenance. Please check back later,” in our apps, Kubernetes offers us an awesome solution: rolling updates.

According to Kubernetes’ documentation, “Rolling updates allow Deployments’ update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.” To see this in action, we are going to update the Spring Boot app created in previous blog posts and implement a rolling update.  

Currently, our Spring Boot app has one endpoint:
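As a reminder, a minimal Spring Boot controller for a single endpoint might look like the following sketch. The class name, endpoint path, and message below are placeholders, not the original code from the earlier posts:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MessageController {

    // Placeholder path and text; the original endpoint differs
    @GetMapping("/message")
    public String message() {
        return "Hello from Spring Boot and K8s";
    }
}
```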

Instead of the current message, I want to update it to “Yes, Gorilla Logic did it again, Gorilla and K8s are friends.” Our updated endpoint should look like the following:
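A sketch of the updated controller; the class name and endpoint path are assumptions, and only the returned string matters here:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MessageController {

    // The updated response returned by the endpoint
    @GetMapping("/message")
    public String message() {
        return "Yes, Gorilla Logic did it again, Gorilla and K8s are friends.";
    }
}
```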

The next step is to create a new Docker image and push it to your favorite registry (in my case, DockerHub). Create a new Docker image and attach a new tag to it:
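A sketch of the build step, assuming the Dockerfile from the previous posts and a DockerHub username of `your-dockerhub-user` (substitute your own):

```shell
# Build the image and tag it as version 0.02
docker build -t your-dockerhub-user/mykubernetes-springboot:0.02 .
```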

Where 0.02 is version number 2 of our Docker image.

Next, push the new version of our Spring Boot app:
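The push might look like this (again, substitute your own registry user):

```shell
# Push version 0.02 to the registry
docker push your-dockerhub-user/mykubernetes-springboot:0.02
```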

OK, now it is time to update our mykubernetes-springboot deployment in K8s:
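One way to trigger the rolling update is `kubectl set image`. The container name below is an assumption; it must match the container name in your Deployment spec:

```shell
# Point the Deployment's container at the new image version
kubectl set image deployment/mykubernetes-springboot \
  mykubernetes-springboot=your-dockerhub-user/mykubernetes-springboot:0.02
```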

Immediately, you can see the following terminal message:
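The confirmation typically looks like this:

```
deployment.apps/mykubernetes-springboot image updated
```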

Run the “kubectl get pods” command:
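The output should look roughly like the following; the Pod name hashes and ages are illustrative:

```
NAME                                       READY   STATUS              RESTARTS   AGE
mykubernetes-springboot-5b8d7c6f4d-x2k9p   1/1     Running             0          2d
mykubernetes-springboot-7f9c8d5b6c-q4w7z   0/1     ContainerCreating   0          5s
```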


You can see that a new Pod is being created. Why? Easy: K8s keeps the old version running and starts creating the new one; once the new Pod is ready, the old one is deleted. This approach is how K8s keeps applications running with zero downtime. Pretty cool, right?

Run the “kubectl get pods” command again:
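This time there should be a single Pod (again, the name hash and age are illustrative):

```
NAME                                       READY   STATUS    RESTARTS   AGE
mykubernetes-springboot-7f9c8d5b6c-q4w7z   1/1     Running   0          45s
```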


Excellent! Now you only see one Pod running: our updated application.

It is time to access our application. Remember that the application lives inside a Pod, and we created a Service (in previous blog posts) that allows us to access the application from outside the cluster. Open your favorite browser and navigate to the Service's address. You can see that the application is running and was successfully updated:
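For example, if the Service is of type NodePort, you could also check from the command line. The IP, port, and endpoint path below are placeholders for your cluster's actual values:

```shell
# Hit the app through the NodePort Service (substitute real values)
curl http://<node-ip>:<node-port>/message
```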

Yes, Gorilla Logic did it again, Gorilla and K8s are friends.

So far, we have been able to update our app without downtime problems. Do you want to know some extra tips?

Extra Tips

If you want to know the history of a specific deployment, run the following command:
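A sketch of the command (substitute your deployment's name):

```shell
kubectl rollout history deployment/<deployment-name>
```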

For example:
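For our deployment, that would be the following; the output format is typical, but the revision numbers depend on your own update history:

```shell
kubectl rollout history deployment/mykubernetes-springboot
```

```
deployment.apps/mykubernetes-springboot
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
```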

How do you roll back? Super easy: specify the revision you want. K8s will scale up the corresponding ReplicaSet and scale down the current one; once this is done, you have rolled back. For example:
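A sketch using `kubectl rollout undo`; revision 1 here is just an example:

```shell
# Roll the Deployment back to a specific revision
kubectl rollout undo deployment/mykubernetes-springboot --to-revision=1
```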

Another feature worth knowing is the rolling update strategy, which lets you set the maxUnavailable and maxSurge fields to regulate the rolling update process.

maxUnavailable “is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from percentage by rounding down.”  (Source: Kubernetes)

maxSurge “is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

“For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.” (Source: Kubernetes)
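In a Deployment manifest, these fields live under `spec.strategy`. A minimal sketch, with example values:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 1 of the 4 desired Pods may be unavailable (rounded down)
      maxSurge: 25%         # at most 1 extra Pod above the desired 4 (rounded up)
```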

That’s all for now. Have fun rolling! See you soon with more blog posts about Kubernetes.

Gerardo Lopez
Gerardo is an Oracle Certified Associate Java SE 8 Programmer. Gerardo has experience in back-end, front-end, and mobile development using a wide set of technologies such as Java, Angular, Ruby on Rails, Node.js, and native Android and iOS apps.
