Canary Release on Kubernetes with Knative and Tekton
In this article, you will learn how to prepare a canary release in your CI/CD process with Knative and Tekton. Since Knative supports multiple revisions of the same service, it is a natural fit for canary releases. We will use its gradual rollout feature to shift traffic to the latest revision in progressive steps. As an exercise, we are going to compile natively (with GraalVM) and run a simple REST service built on top of Spring Boot. We will use Cloud Native Buildpacks as the build tool on Kubernetes. Let’s begin!
If you are interested in more details about Spring Boot native compilation please refer to my article Microservices on Knative with Spring Boot and GraalVM.
Prerequisites
Native compilation for Java is a memory-intensive process. Therefore, we need to reserve at least 8GB for our Kubernetes cluster. We also have to install Tekton and Knative there, so it is worth having even more memory.
1. Install Knative Serving – we will use the latest version of Knative (1.3). Go to the following site for the installation manual. Once you have done that, you can verify that it works with the following command:
$ kubectl get pods -n knative-serving
2. Install Tekton – you can go to the Tekton documentation for more details. However, there is just a single command to install it:
$ kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
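You can verify the Tekton installation the same way as the Knative one, by checking the pods in its namespace:

$ kubectl get pods -n tekton-pipelines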
Source Code
If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then go to the callme-service directory and follow my instructions.
Spring Boot for Native GraalVM
In order to expose the REST endpoints, we need to include Spring Boot Starter Web. Our service also stores data in an H2 database, so we include Spring Boot Starter Data JPA. The last dependency is required for native compilation support. The current version of Spring Native is 0.11.3.
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.experimental</groupId>
  <artifactId>spring-native</artifactId>
  <version>0.11.3</version>
</dependency>
Let’s switch to the code. Here is our model class, exposed by the REST endpoint. It contains the current date, the name of the Kubernetes pod, and the version number.
@Entity
public class Callme {

    @Id
    @GeneratedValue
    private Integer id;
    @Temporal(TemporalType.TIMESTAMP)
    private Date addDate;
    private String podName;
    private String version;

    // getters, setters, constructor ...
}
There is a single endpoint that creates an event, stores it in the database, and returns it as a result. The name of the pod and the name of the namespace are taken directly from the Kubernetes Deployment. We use the version number from the Maven pom.xml as the application version.
@RestController
@RequestMapping("/callme")
public class CallmeController {

    @Value("${spring.application.name}")
    private String appName;
    @Value("${POD_NAME}")
    private String podName;
    @Value("${POD_NAMESPACE}")
    private String podNamespace;
    @Autowired
    private CallmeRepository repository;
    @Autowired(required = false)
    BuildProperties buildProperties;

    @GetMapping("/ping")
    public String ping() {
        Callme c = repository.save(new Callme(new Date(), podName,
                buildProperties != null ? buildProperties.getVersion() : null));
        return appName +
                " v" + c.getVersion() +
                " (id=" + c.getId() + "): " +
                podName +
                " in " + podNamespace;
    }
}
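The CallmeRepository injected above is not shown in the snippet. It is a standard Spring Data repository; a minimal sketch could look like this:

import org.springframework.data.repository.CrudRepository;

// Spring Data generates the implementation at runtime (or at build time for native images)
public interface CallmeRepository extends CrudRepository<Callme, Integer> {
}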
In order to use the Maven version, we need to generate the build-info.properties file during the build. Therefore, we should add the build-info goal to the spring-boot-maven-plugin execution. If you would like to build a native image locally, just set the builder environment property BP_NATIVE_IMAGE to true. Then you can run the command mvn spring-boot:build-image.
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>build-info</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <image>
      <builder>paketobuildpacks/builder:tiny</builder>
      <env>
        <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
      </env>
    </image>
  </configuration>
</plugin>
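With that configuration in place, building the native image locally is a single command (expect it to take several minutes, since native compilation is resource-intensive):

$ mvn clean spring-boot:build-image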
Create Tekton Pipelines
Our pipeline consists of three tasks. In the first of them, we clone the source code repository with the Spring Boot application. In the second step, we build the image natively on Kubernetes using the Cloud Native Buildpacks task and push it to the remote registry. Finally, we run the image on Kubernetes as a Knative service.
Firstly, we need to install the Tekton git-clone task:
$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/task/git-clone/0.4/git-clone.yaml
We also need the buildpacks task that allows us to run Cloud Native Buildpacks on Kubernetes:
$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/task/buildpacks/0.3/buildpacks.yaml
Finally, we install the kubernetes-actions task to deploy the Knative Service using the YAML manifest.
$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/kubernetes-actions/0.2/kubernetes-actions.yaml
All our tasks are ready. Let’s just verify it:
$ kubectl get task
NAME                 AGE
buildpacks           1m
git-clone            1m
kubernetes-actions   33s
Finally, we are going to create a Tekton pipeline. In the buildpacks task reference, we need to set several parameters. Since we have two Maven modules in the repository, we first set the working directory to callme-service (1). Also, we use Paketo Buildpacks, so we change the default builder image to paketobuildpacks/builder:base (2). Finally, we need to enable the native build for Cloud Native Buildpacks. In order to do that, we set the environment variable BP_NATIVE_IMAGE to true (3). After building and pushing the image, we can deploy it on Kubernetes (4).
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-spring-boot-pipeline
spec:
  params:
    - description: image URL to push
      name: image
      type: string
  tasks:
    - name: fetch-repository
      params:
        - name: url
          value: 'https://github.com/piomin/sample-spring-boot-graalvm.git'
        - name: subdirectory
          value: ''
        - name: deleteExisting
          value: 'true'
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
    - name: buildpacks
      params:
        - name: APP_IMAGE
          value: $(params.image)
        - name: SOURCE_SUBPATH # (1)
          value: callme-service
        - name: BUILDER_IMAGE # (2)
          value: 'paketobuildpacks/builder:base'
        - name: ENV_VARS # (3)
          value:
            - BP_NATIVE_IMAGE=true
      runAfter:
        - fetch-repository
      taskRef:
        kind: Task
        name: buildpacks
      workspaces:
        - name: source
          workspace: source-workspace
        - name: cache
          workspace: cache-workspace
    - name: deploy
      params:
        - name: args # (4)
          value:
            - apply -f callme-service/k8s/
      runAfter:
        - buildpacks
      taskRef:
        kind: Task
        name: kubernetes-actions
      workspaces:
        - name: manifest-dir
          workspace: source-workspace
  workspaces:
    - name: source-workspace
    - name: cache-workspace
In order to push the image to a remote secure registry, you need to create a Secret containing your username and password.
$ kubectl create secret docker-registry docker-user-pass \
    --docker-username=<USERNAME> \
    --docker-password=<PASSWORD> \
    --docker-server=https://index.docker.io/v1/
After that, you should create a ServiceAccount that uses the newly created Secret. As you have probably figured out, our pipeline uses that ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: buildpacks-service-account
secrets:
  - name: docker-user-pass
Deploy Knative Service with Gradual Rollouts
Here’s the YAML manifest with our Knative Service for the 1.0 version. It is a very simple definition. The only additional thing we need to do is inject the name of the pod and the namespace into the container.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: callme-service
spec:
  template:
    spec:
      containers:
        - name: callme
          image: piomin/callme-service:1.0
          ports:
            - containerPort: 8080
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
In order to inject the name of the pod and the namespace into the container, we use the Downward API. This Kubernetes feature is disabled by default in Knative. To enable it, we need to add the property kubernetes.podspec-fieldref with the value enabled in the config-features ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-fieldref: enabled
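Instead of editing the ConfigMap manually, you can also patch it in place with an equivalent one-liner:

$ kubectl patch cm config-features -n knative-serving \
    --type merge -p '{"data":{"kubernetes.podspec-fieldref":"enabled"}}'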
Now, let’s run our pipeline for the 1.0 version of our sample application. To run it, you should also create a PersistentVolumeClaim with the name tekton-workspace-pvc, as sketched below.
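Here is a minimal claim; the access mode and the 1Gi size are my assumptions, so adjust them to your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-workspace-pvc
spec:
  accessModes:
    - ReadWriteOnce  # a single node mounts the workspace at a time
  resources:
    requests:
      storage: 1Gi   # enough for the cloned sources and the build cache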
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: build-spring-boot-pipeline-run-
spec:
  params:
    - name: image
      value: 'piomin/callme-service:1.0'
  pipelineRef:
    name: build-spring-boot-pipeline
  serviceAccountName: buildpacks-service-account
  workspaces:
    - name: source-workspace
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc
      subPath: source
    - name: cache-workspace
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc
      subPath: cache
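Since the PipelineRun relies on generateName, create it with kubectl create instead of apply. If you have the tkn CLI installed, you can then follow the logs (the manifest file name below is just an example):

$ kubectl create -f pipeline-run.yaml
$ tkn pipelinerun logs -f --last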
Finally, it is time to release a new version of our application – 1.1. Firstly, you should change the version number in the Maven pom.xml.
We should also change the version number in the k8s/ksvc.yaml manifest. However, the most important change is the annotation serving.knative.dev/rollout-duration. It enables gradual rollouts for the Knative Service. The value 300s means that the rollout to the latest revision will take exactly 300 seconds. Knative first routes 1% of the traffic to the new revision, and then shifts the rest of the assigned traffic in equal incremental steps. In this case, it will increase the traffic to the latest revision by 1% every 3 seconds.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: callme-service
  annotations:
    serving.knative.dev/rollout-duration: "300s"
spec:
  template:
    spec:
      containers:
        - name: callme
          image: piomin/callme-service:1.1
          ports:
            - containerPort: 8080
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
Verify Canary Release with Knative
Let’s verify our canary release process built with Knative and Tekton. Once you run the pipeline for the 1.0 and 1.1 versions of our sample application, you can display the list of Knative Service revisions. As you can see, the latest revision is callme-service-00002, but the rollout is still in progress:
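You can check the revisions and their current traffic split, for example, with the kn CLI (plain kubectl works as well, since Knative registers a revisions CRD):

$ kn revision list
$ kubectl get revisions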
We can send some test requests to our Knative Service. For me, Knative is available on localhost:80, since I’m using Kubernetes on Docker Desktop. The only thing I need to do is set the URL of the service in the Host header.
$ curl http://localhost:80/callme/ping \
-H "Host:callme-service.default.example.com"
Here are some responses. As you see, the first two requests have been processed by the 1.0 version of our application, while the last one by the 1.1 version.
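Based on the ping() implementation shown earlier, the responses look roughly like this (the ids and pod name suffixes below are made up for illustration):

callme-service v1.0 (id=1): callme-service-00001-deployment-7d9f8c6b5-abcde in default
callme-service v1.0 (id=2): callme-service-00001-deployment-7d9f8c6b5-abcde in default
callme-service v1.1 (id=1): callme-service-00002-deployment-6c8d7b5a4-fghij in default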
Let’s verify the current traffic distribution between the two revisions. In my case, it is 52% for 1.1 and 48% for 1.0.
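You can read the split directly from the Service status:

$ kubectl get ksvc callme-service -o jsonpath='{.status.traffic}'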
Finally, after the rollout procedure is finished, every request should be processed by the 1.1 version.
Final Thoughts
As you see, the process of preparing a canary release with Knative is very simple. You only need to set a single annotation on the Knative Service. You can compare it, for example, with Argo Rollouts, which allows us to perform progressive traffic shifting for a standard Kubernetes Deployment.