Blue-green deployment with a database on Kubernetes

In this article, you will learn how to use the blue-green deployment strategy on Kubernetes to propagate database changes. The database change process is an essential task in every software project. If your application connects to a database, the code that defines your database schema is as important as the application source code. Therefore, you should store it in your version control system, and it should be part of your CI/CD process. How can Kubernetes help with this process?

First of all, you can easily implement various deployment strategies on Kubernetes. One of them is blue-green deployment. In this approach, you maintain two copies of your production environment: blue and green. As a result, this technique reduces risk and minimizes downtime. It also fits perfectly into the process of changing the database schema and the application model. Databases can often be a challenge, particularly if you need to change the schema to support a new version of the software.

However, our scenario will be very simple. After changing the data model, we release the second version of our application. At that point, all traffic is still forwarded to the first version of the application (“blue”). Then, we migrate the database to the new version. Finally, we switch all traffic to the latest version (“green”).

Before starting with this article, it is a good idea to read a little bit more about Istio. You can find some interesting information about the technologies used in this example, like Istio, Kubernetes, or Spring Boot, in this article.

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions 🙂

Tools used for blue-green deployment and database changes

We need two tools to perform a blue-green deployment with a database on Kubernetes. The first of them is Liquibase. It automates the management of database schema changes and allows versioning of those changes. Moreover, with Liquibase we can easily roll back previously applied modifications of the schema.
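
For example, with the Liquibase CLI installed locally, undoing the most recent changeset is a single command. A minimal sketch, assuming the connection settings are provided in liquibase.properties:

$ liquibase --changeLogFile=changelog.sql rollbackCount 1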

The second essential tool is Istio. It allows us to easily switch traffic between various versions of our deployments.

Should we use framework integration with Liquibase?

Modern Java frameworks like Spring Boot or Quarkus offer built-in integration with Liquibase. In that approach, we just need to create a Liquibase changelog and set its location, and the framework runs such a script on application startup. While it is a very useful approach in development, I would not recommend it for production deployments, especially if you deploy your application on Kubernetes. Why?

Firstly, you are not able to determine how long it takes to run such a script against your database. It depends on the size of the database, the number of changes, or the current load. That makes it difficult to set an initial delay on the liveness probe. Defining a liveness probe for a Deployment is obviously a good practice on Kubernetes. But if you set a too high initial delay value, it will slow down your redeployment process. On the other hand, if you set a too low value, Kubernetes may kill your pod before the application starts.

Even if everything goes well, you may experience downtime between applying changes to the database and the startup of the new version of the application. Ok, so what’s the best solution in that case? We should include the Liquibase script in our deployment pipeline. It has to be executed after deploying the latest version of the application, but just before switching traffic to that version (“green”).
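
Here’s a minimal sketch of that pipeline order. The file names are hypothetical, and the resource names refer to the objects we will create later in this article:

$ kubectl apply -f deployment-v2.yaml          # deploy the new ("green") version
$ kubectl rollout status deployment/person-v2  # wait until the new pods are ready
$ kubectl apply -f liquibase-job-v2.yaml       # apply the database changes
$ kubectl wait --for=condition=complete job/liquibase-job-v2
$ kubectl apply -f virtualservice-v2.yaml      # switch traffic to "green"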

Prerequisites

Before proceeding to the next steps, you need to:

  1. Start your Kubernetes cluster (local or remote)
  2. Install Istio on Kubernetes using this guide
  3. Run PostgreSQL on Kubernetes. You can use the script from my GitHub repository.
  4. Prepare a Docker image with the Liquibase update command (instructions below)

Prepare a Docker image with Liquibase

In the first step, we need to create a Docker image with Liquibase that can easily be run on Kubernetes. It needs to execute the update command. We will use the official Liquibase image as the base image in our Dockerfile. There are four parameters that can be overridden: the address of the target database, the username, the password, and the location of the Liquibase changelog file. Here’s our Dockerfile.

FROM liquibase/liquibase
ENV URL=jdbc:postgresql://postgresql:5432/test
ENV USERNAME=postgres
ENV PASSWORD=postgres
ENV CHANGELOGFILE=changelog.xml
CMD ["sh", "-c", "docker-entrypoint.sh --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=${CHANGELOGFILE} update"]

Then, we just need to build it. However, you can skip that step, since I have already pushed that version of the image to my public Docker repository.

$ docker build -t piomin/liquibase .
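
You can also quickly test the image locally by overriding the default environment variables and mounting a directory with your changelog. A sketch, assuming a local ./changelog directory containing changelog.sql and a database reachable under the given hostname:

$ docker run -e URL=jdbc:postgresql://postgres:5432/bluegreen \
    -e USERNAME=bluegreen -e PASSWORD=bluegreen \
    -e CHANGELOGFILE=changelog.sql \
    -v $(pwd)/changelog:/liquibase/changelog \
    piomin/liquibase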

Step 1. Create a table in the database with Liquibase

Let’s create the first version of the database schema for our application. To do that, we need to define a Liquibase script. We will put that script inside a Kubernetes ConfigMap named liquibase-changelog-v1. It contains a simple CREATE TABLE SQL command.

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog-v1
data:
  changelog.sql: |-
    --liquibase formatted sql

    --changeset piomin:1
    create table person (
      id serial primary key,
      firstname varchar(255),
      lastname varchar(255),
      age int
    );
    --rollback drop table person;

Then, let’s create a Kubernetes Job that mounts the ConfigMap created in the previous step. The Job runs as soon as it is created with the kubectl apply command.

apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v1
spec:
  template:
    spec:
      containers:
        - name: liquibase
          image: piomin/liquibase
          env:
            - name: URL
              value: jdbc:postgresql://postgres:5432/bluegreen
            - name: USERNAME
              value: bluegreen
            - name: PASSWORD
              value: bluegreen
            - name: CHANGELOGFILE
              value: changelog.sql
          volumeMounts:
            - name: config-vol
              mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
        - name: config-vol
          configMap:
            name: liquibase-changelog-v1

Finally, we can verify that the Job has been executed successfully. To do that, we need to check the logs from the pod created by the liquibase-job-v1 Job.
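
Assuming both manifests above are saved in files (the file names are hypothetical), it may look like this:

$ kubectl apply -f liquibase-changelog-v1.yaml
$ kubectl apply -f liquibase-job-v1.yaml
$ kubectl wait --for=condition=complete job/liquibase-job-v1
$ kubectl logs job/liquibase-job-v1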

[Screenshot: Liquibase update logs from the pod created by the liquibase-job-v1 Job]

Step 2. Deploy the first version of the application

In the next step, we proceed to the deployment of our application. This simple Spring Boot application exposes a REST API and connects to a PostgreSQL database. Here’s the entity class that corresponds to the previously created database schema.

@Entity
@Getter
@Setter
@NoArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "firstname")
    private String firstName;
    @Column(name = "lastname")
    private String lastName;
    private int age;
}

In short, that is the first version (v1) of our application. Let’s take a look at the Deployment manifest. We need to inject database connection settings with environment variables. We will also expose liveness and readiness probes using Spring Boot Actuator.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: person
      version: v1
  template:
    metadata:
      labels:
        app: person
        version: v1
    spec:
      containers:
      - name: person
        image: piomin/person-service
        ports:
        - containerPort: 8080
        env:
          - name: DATABASE_USER
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_USER
                name: postgres-config
          - name: DATABASE_NAME
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_DB
                name: postgres-config
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: POSTGRES_PASSWORD
                name: postgres-secret
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness

The readiness health check exposed by the application includes the status of the connection with the PostgreSQL database. Therefore, you can be sure that the application is reported as ready only if the database connection works properly.

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://postgres:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}

management:
  endpoint:
    health:
      show-details: always
      group:
        readiness:
          include: db
      probes:
        enabled: true
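
For example, after forwarding a port to one of the application pods, you can query the readiness group directly. The exact output depends on your Spring Boot version, but it should look similar to this:

$ curl http://localhost:8080/actuator/health/readiness
{"status":"UP","components":{"db":{"status":"UP","details":{"database":"PostgreSQL","validationQuery":"isValid()"}}}}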

We are running two instances of our application.
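
You can verify that with kubectl. The pod names below are hypothetical, and the 2/2 ready count indicates the Istio sidecar injected next to the application container:

$ kubectl get pods -l app=person
NAME                         READY   STATUS    RESTARTS   AGE
person-v1-6d9f8b7c9d-4kx2p   2/2     Running   0          2m
person-v1-6d9f8b7c9d-9hwls   2/2     Running   0          2m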

Just to conclude, here’s our current status after Step 2.

[Diagram: current architecture after Step 2]

Step 3. Deploy the second version of the application with a blue-green strategy

Firstly, we perform a trivial modification in our entity model class. We will change the names of two columns in the database using the @Column annotation. We replace firstname with first_name, and lastname with last_name, as shown below.

@Entity
@Getter
@Setter
@NoArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    private int age;
}

The Deployment manifest is very similar to the previous version of our application. Of course, the only difference is in the version label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: person
      version: v2
  template:
    metadata:
      labels:
        app: person
        version: v2
    spec:
      containers:
      - name: person
        image: piomin/person-service
        ports:
        - containerPort: 8080
        env:
          - name: DATABASE_USER
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_USER
                name: postgres-config
          - name: DATABASE_NAME
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_DB
                name: postgres-config
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: POSTGRES_PASSWORD
                name: postgres-secret
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness

However, before deploying that version of the application, we need to apply the Istio rules. Istio should forward all traffic to person-v1. Firstly, let’s define a DestinationRule with two subsets related to versions v1 and v2.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: person-destination
spec:
  host: person
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Then, we will apply the following Istio rule. It forwards 100% of incoming traffic to the person pods labelled with version=v1.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-virtualservice
spec:
  hosts:
    - person
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 100
      - destination:
          host: person
          subset: v2
        weight: 0

Here’s our current status after applying changes in Step 3.

[Diagram: architecture after Step 3, with person-v1 and person-v2 deployed and 100% of traffic routed to v1]

Also, let’s verify the current list of deployments.
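
It should now contain both versions of person-service (sample output; the ages are hypothetical):

$ kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
person-v1   2/2     2            2           15m
person-v2   2/2     2            2           1m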

Step 4. Modify database and switch traffic to the latest version

Currently, both versions of our application are running, but only v1 receives traffic. So, if we modify the database schema and then switch traffic to the latest version, we achieve a zero-downtime deployment. The same as before, we will create a ConfigMap that contains the Liquibase changelog file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog-v2
data:
  changelog.sql: |-
    --liquibase formatted sql

    --changeset piomin:2
    alter table person rename column firstname to first_name;
    alter table person rename column lastname to last_name;
    --rollback alter table person rename column first_name to firstname;
    --rollback alter table person rename column last_name to lastname;

Then, we create a Kubernetes Job that uses the changelog file from the liquibase-changelog-v2 ConfigMap.

apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v2
spec:
  template:
    spec:
      containers:
        - name: liquibase
          image: piomin/liquibase
          env:
            - name: URL
              value: jdbc:postgresql://postgres:5432/bluegreen
            - name: USERNAME
              value: bluegreen
            - name: PASSWORD
              value: bluegreen
            - name: CHANGELOGFILE
              value: changelog.sql
          volumeMounts:
            - name: config-vol
              mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
        - name: config-vol
          configMap:
            name: liquibase-changelog-v2

Once the Kubernetes Job has finished, we just need to update the Istio VirtualService to forward all traffic to the v2 version of the application.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-virtualservice
spec:
  hosts:
    - person
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 0
      - destination:
          host: person
          subset: v2
        weight: 100
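
Assuming the VirtualService manifest above is stored in a file (the file name is hypothetical), those two steps can be chained like this:

$ kubectl wait --for=condition=complete job/liquibase-job-v2
$ kubectl apply -f person-virtualservice-v2.yaml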

That is the last step of our blue-green deployment process on Kubernetes. The database schema has been updated, and all traffic is now sent to the v2 version of person-service.

Also, the list of deployments stays the same as after Step 3: both person-v1 and person-v2 are still running.

Testing Blue-green deployment on Kubernetes

In order to easily test our blue-green deployment process, I created a second application, caller-service. It calls the GET /persons/{id} endpoint exposed by person-service.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private RestTemplate restTemplate;

    public CallerController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/call")
    public String call() {
        ResponseEntity<String> response = restTemplate
                .getForEntity("http://person:8080/persons/1", String.class);
        if (response.getStatusCode().is2xxSuccessful())
            return response.getBody();
        else
            return "Error: HTTP " + response.getStatusCodeValue();
    }
}

Before testing, you should add at least one person to the database. You can use the POST /persons endpoint for that. In this example, I’m using the port forwarding feature.
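
For example (assuming the Kubernetes Service in front of person-service is simply named person):

$ kubectl port-forward svc/person 8081:8080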

$ curl http://localhost:8081/persons -H "Content-Type: application/json" -d '{"firstName":"John","lastName":"Smith","age":33}'

Ok, so here’s the list of pods you should have running before starting Step 4. You can see that every application pod contains two containers (except PostgreSQL). It means that the Istio sidecar has been injected into those pods.

Finally, just before executing Step 4, run the following command, which repeatedly calls the caller-service endpoint. The same as before, I’m using the port forwarding feature.
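
For example (assuming the Service in front of caller-service is named caller):

$ kubectl port-forward svc/caller 8080:8080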

$ siege -r 200 -c 1 http://localhost:8080/caller/call

Conclusion

In this article, I described step by step how to update your database and application data model on Kubernetes using the blue-green deployment strategy. I chose a scenario with conflicting changes, like the modification of table column names.

6 COMMENTS

jkornata

Won’t we have possible downtime/read errors in v1 if we do the switch in the VirtualService manually? There will be a moment when the database is already updated, v2 doesn’t accept traffic yet, and v1 uses the incorrect schema. Am I wrong?

    piotr.minkowski

    It depends on how long it takes to modify the Istio rule after running the Liquibase script 🙂

Michal Kunikowski

Hmmm, let’s say that the migration script takes 10 minutes. If you are making incompatible changes from the previous version to the next one, does it mean that the previous version could return errors for 10 minutes?
I think in general you should not make incompatible changes from the previous version to the next one.
What if you need to do a fast rollback because service version two has some bugs?

IMO it should be like this:
ver1 -> ver2: add the new column, copy the data from the old column
ver2 -> ver3: delete the old column from ver1

    piotr.minkowski

    Well, I also think that, in general, you should not make incompatible changes from the previous version to the next one 🙂
    Don’t focus on that example. I know that in this particular case creating a new column and copying the data is the better choice. I just needed a simple scenario with incompatible changes. I can imagine a different case – let’s say we added a unique key on a column, and we also need to prevent adding duplicates on the app side, etc.

Robert Reeves

Hi, Piotr!

Great post! I have found some issues with some of the PG config and LB config. Also found a SQL error in the V2 config map. I opened a PR: https://github.com/piomin/sample-spring-bluegreen-with-db/pull/2.

Thanks!

Robert @ Liquibase

    piotr.minkowski

    Hi!
    Thanks for your updates.
