Kubernetes CI/CD with Tekton and ArgoCD
In this article, you will learn how to configure a CI/CD process on Kubernetes using Tekton and Argo CD. The first question may be – do we really need both of these tools to achieve that? Of course not. Since Tekton is a cloud-native CI/CD tool, you could use it alone to build your pipelines on Kubernetes. However, a modern way of building the CD process should follow the GitOps pattern. It means that we store the configuration of the application in Git – the same as the source code. The CD process should react to changes in this configuration and apply them to the Kubernetes cluster. This is where Argo CD comes in.
In the next part of this article, we will build a sample CI/CD process for a Java application. Some steps of that process will be managed by Tekton, and others by Argo CD. Let’s take a look at the diagram below. In the first step, we are going to clone the Git repository with the application source code. Then we will run JUnit tests. After that, we will trigger a source code analysis with SonarQube. Finally, we will build the application image. All these steps are part of the continuous integration process. Argo CD is responsible for the deployment phase. Also, whenever the configuration changes, it synchronizes the state of the application on Kubernetes with the Git repository.
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time there is a second repository – dedicated to storing the configuration independently from the application source code. You can clone that repository and go to the cicd/apps/sample-spring-kotlin directory. After that, you should just follow my instructions. Let’s begin.
Prerequisites
Before we begin, we need to install Tekton and ArgoCD on Kubernetes. We can do this in several different ways. The simplest one is by using OpenShift operators. Their names may be a bit confusing. But in fact, Red Hat OpenShift Pipelines installs Tekton, while Red Hat OpenShift GitOps installs ArgoCD.
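On OpenShift, installing an operator comes down to creating a Subscription object. The manifest below is only a sketch – the channel and package name (openshift-pipelines-operator-rh) may differ between cluster versions, so verify them in the OperatorHub catalog first:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest
  # package name as published in the redhat-operators catalog
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

An analogous Subscription for the openshift-gitops-operator package installs ArgoCD.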
Build CI Pipeline with Tekton
The idea behind Tekton is very simple and typical for the CI/CD approach. We build pipelines. A pipeline consists of several independent steps – tasks. In order to run a pipeline, we create a PipelineRun object. It manages the PipelineResources passed to tasks as inputs and outputs (note that PipelineResources have since been deprecated in favor of workspaces and parameters). Tekton executes each task in its own Kubernetes pod. For more details, you may visit the Tekton documentation site.
Here’s the definition of our pipeline. Before adding tasks, we define workspaces at the global pipeline level. We need a workspace for keeping the application source code during the build, and also a place to store SonarQube settings. These workspaces are required by particular tasks.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: sample-java-pipeline
spec:
  tasks:
    ...
  workspaces:
    - name: source-dir
    - name: sonar-settings
For typical operations like a git clone or a Maven build, we may use predefined tasks. We can find them on Tekton Hub. If you are testing Tekton on OpenShift, some of them are already available there as a ClusterTask just after the installation with the operator.
Task 1: Clone Git repository
Here’s the first step of our pipeline. It references the git-clone ClusterTask. We need to pass the address of the GitHub repository and the name of the branch. Also, we have to assign the workspace to the task under the output name, which is required by that task.
- name: git-clone
  params:
    - name: url
      value: 'https://github.com/piomin/sample-spring-kotlin-microservice.git'
    - name: revision
      value: master
  taskRef:
    kind: ClusterTask
    name: git-clone
  workspaces:
    - name: output
      workspace: source-dir
Task 2: Run JUnit tests with Maven
In the next step, we are running JUnit tests. This time we also use a predefined ClusterTask called maven. In order to run JUnit tests, we should set the GOALS parameter to test. This task requires two workspaces: one with the source code and a second one with Maven settings. Since we do not override any Maven configuration, I just pass the source code workspace in both places.
- name: junit-tests
  params:
    - name: GOALS
      value:
        - test
  runAfter:
    - git-clone
  taskRef:
    kind: ClusterTask
    name: maven
  workspaces:
    - name: source
      workspace: source-dir
    - name: maven-settings
      workspace: source-dir
Task 3: Execute SonarQube scanning
The next two steps will be a little more complicated. In order to run SonarQube scanning, we first need to import the sonarqube-scanner task from Tekton Hub.
$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/sonarqube-scanner/0.1/sonarqube-scanner.yaml
After that, we may refer to the already imported task. We should set the address of our SonarQube instance in the SONAR_HOST_URL parameter, and the unique name of the project in the SONAR_PROJECT_KEY parameter. The task takes two input workspaces. The first of them contains the source code, while the second may contain a properties file that overrides some SonarQube settings. Since it is not possible to pass the SonarQube organization name in task parameters, we have to do that using the sonar-project.properties file.
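For example, on SonarCloud it is enough to set the organization key there. The single line below matches the ConfigMap we create later in this article:

```properties
sonar.organization=piomin
```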
- name: sonarqube
  params:
    - name: SONAR_HOST_URL
      value: 'https://sonarcloud.io'
    - name: SONAR_PROJECT_KEY
      value: sample-spring-boot-kotlin
  runAfter:
    - junit-tests
  taskRef:
    kind: Task
    name: sonarqube-scanner
  workspaces:
    - name: source-dir
      workspace: source-dir
    - name: sonar-settings
      workspace: sonar-settings
Task 4: Get the version of the application from pom.xml
In the next step, we are going to retrieve the version number of our application. We will use the version property available inside the Maven pom.xml file. In order to read its value, we will execute the evaluate goal provided by the Maven Help Plugin. Then we are going to emit that version as a task result. Since there is no such predefined task available on Tekton Hub, we will create our own custom task.
The definition of our custom task is visible below. After executing the mvn help:evaluate command, we set its output as a task result with the name version.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-get-project-version
spec:
  workspaces:
    - name: source
  params:
    - name: MAVEN_IMAGE
      type: string
      description: Maven base image
      default: gcr.io/cloud-builders/mvn@sha256:57523fc43394d6d9d2414ee8d1c85ed7a13460cbb268c3cd16d28cfb3859e641
    - name: CONTEXT_DIR
      type: string
      description: >-
        The context directory within the repository for sources on
        which we want to execute maven goals.
      default: "."
  results:
    - description: Project version read from pom.xml
      name: version
  steps:
    - name: mvn-command
      image: $(params.MAVEN_IMAGE)
      workingDir: $(workspaces.source.path)/$(params.CONTEXT_DIR)
      script: |
        #!/usr/bin/env bash
        VERSION=$(/usr/bin/mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
        echo -n $VERSION | tee $(results.version.path)
Then let’s just create the task on Kubernetes. The YAML manifest is available in the GitHub repository under the cicd/pipelines/ directory.
$ kubectl apply -f cicd/pipelines/tekton-maven-version.yaml
$ kubectl get task
NAME                        AGE
maven-get-project-version   17h
sonarqube-scanner           2d19h
Finally, we just need to refer to the already created task and set a workspace with the application source code.
- name: get-version
  runAfter:
    - sonarqube
  taskRef:
    kind: Task
    name: maven-get-project-version
  workspaces:
    - name: source
      workspace: source-dir
Task 5: Build and push image
Finally, we may proceed to the last step of our pipeline. We will build the application image and push it to the registry. The output image will be tagged with the Maven version number. That’s why our task refers to the result emitted by the previous task using the $(tasks.get-version.results.version) notation. This value is passed as an input parameter to the jib-maven task, which is responsible for building our image in Dockerless mode.
- name: build-and-push-image
  params:
    - name: IMAGE
      value: >-
        image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin:$(tasks.get-version.results.version)
  runAfter:
    - get-version
  taskRef:
    kind: ClusterTask
    name: jib-maven
  workspaces:
    - name: source
      workspace: source-dir
Run Tekton pipeline
Before we run the pipeline, we need to create two resources to use as workspaces. The application source code will be stored on a persistent volume. That’s why we should create a PVC.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tekton-workspace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The Sonar properties file may be passed to the pipeline as a ConfigMap. Because I’m going to perform the source code analysis on the cloud instance, I also need to set an organization name.
kind: ConfigMap
apiVersion: v1
metadata:
  name: sonar-properties
data:
  sonar-project.properties: sonar.organization=piomin
Finally, we can start our pipeline as shown below. Of course, we could also just create a PipelineRun object.
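Here is a sketch of such a PipelineRun, binding the PVC and the ConfigMap created above to the pipeline workspaces:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: sample-java-pipeline-run-
spec:
  pipelineRef:
    name: sample-java-pipeline
  serviceAccountName: pipeline
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: tekton-workspace
    - name: sonar-settings
      configMap:
        name: sonar-properties
```

Because of generateName, submit it with kubectl create -f rather than kubectl apply -f.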
Now, our pipeline is running. Let’s take a look at the logs of the junit-tests task. As you can see, three JUnit tests were executed, and all of them finished successfully.
Then we can go to the SonarCloud site and see the source code analysis report.
Also, let’s verify the list of available images. The tag of our application image is the same as the version set in the Maven pom.xml.
$ oc get is
NAME                   IMAGE REPOSITORY                                                                    TAGS    UPDATED
sample-spring-kotlin   image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin   1.3.0   7 minutes ago
Trigger pipeline on GitHub push
In the previous section, we started a pipeline on demand. What about running it after pushing changes in the source code to the GitHub repository? Fortunately, Tekton provides a built-in mechanism for that. We need to define a Trigger and an EventListener. Firstly, we should create the TriggerTemplate object. It defines the PipelineRun object, which references our sample-java-pipeline.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: sample-github-template
spec:
  params:
    - default: main
      description: The git revision
      name: git-revision
    - description: The git repository url
      name: git-repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: sample-java-pipeline-run-
      spec:
        pipelineRef:
          name: sample-java-pipeline
        serviceAccountName: pipeline
        workspaces:
          - name: source-dir
            persistentVolumeClaim:
              claimName: tekton-workspace
          - configMap:
              name: sonar-properties
            name: sonar-settings
The ClusterTriggerBinding is already available. There is a dedicated definition for the GitHub push event defined on OpenShift. Thanks to that, our EventListener may just refer to that binding and to the already created TriggerTemplate.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: sample-github-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - kind: ClusterTriggerBinding
          ref: github-push
      name: trigger-1
      template:
        ref: sample-github-template
After creating the EventListener, Tekton automatically creates a Kubernetes Service that receives push events.
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
el-sample-github-listener ClusterIP 172.30.88.73 <none> 8080/TCP 2d1h
On OpenShift, we can easily expose the service outside the cluster as a Route object.
$ oc expose svc el-sample-github-listener
$ oc get route
NAME         HOST/PORT                                                              PATH   SERVICES                    PORT            TERMINATION   WILDCARD
el-example   el-sample-github-listener-piotr-cicd.apps.qyt1tahi.eastus.aroapp.io          el-sample-github-listener   http-listener                 None
After exposing the service, we can go to our GitHub repository and define a webhook. In your repository go to Settings -> Webhooks -> Add webhook. Then paste the address of your Route, choose application/json as the Content type, and select the push event to send.
Now, you just need to push any change to your GitHub repository.
Continuous Delivery with ArgoCD
The application’s Kubernetes deployment.yaml manifest is available in the GitHub repository under the cicd/apps/sample-spring-kotlin directory. It is very simple. It contains only Deployment and Service definitions. The Deployment manifest refers to the 1.3.0 version of the image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-spring-kotlin
  template:
    metadata:
      labels:
        app: sample-spring-kotlin
    spec:
      containers:
        - name: sample-spring-kotlin
          image: image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin:1.3.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-spring-kotlin
spec:
  type: ClusterIP
  selector:
    app: sample-spring-kotlin
  ports:
    - port: 8080
Now, we can switch to Argo CD. We can create a new application there using the UI or a YAML manifest. We will use the default settings, so the only things we need to set are the address of the GitHub repository, the path to the Kubernetes manifests, and a target namespace on the cluster. With the default sync policy, synchronization between the GitHub repository and Kubernetes needs to be triggered manually.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin
spec:
  destination:
    name: ''
    namespace: piotr-cicd
    server: 'https://kubernetes.default.svc'
  source:
    path: cicd/apps/sample-spring-kotlin
    repoURL: 'https://github.com/piomin/openshift-quickstart.git'
    targetRevision: HEAD
  project: default
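By default the application stays in manual sync mode. If you would rather let Argo CD apply every change on its own, the Application spec supports a syncPolicy section – a sketch:

```yaml
spec:
  syncPolicy:
    automated:
      prune: true     # remove objects deleted from the Git repository
      selfHeal: true  # revert manual changes made directly on the cluster
```

In this article we stay with manual synchronization, so we can inspect the detected changes before syncing.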
Let’s take a look at the Argo CD UI console. Here’s the state after the initial synchronization. Argo CD has already deployed version 1.3.0 of our application built by the Tekton pipeline.
Now, let’s release a new version of the application. We change the version in the Maven pom.xml from 1.3.0 to 1.4.0. This change, together with the commit id and message, is visible in the picture below. The push to the GitHub repository triggers a run of the sample-java-pipeline pipeline by calling the webhook.
After a successful run, our pipeline builds and pushes a new version of the application image to the registry. The latest tag of the image is 1.4.0, as shown below.
$ oc get is
NAME                   IMAGE REPOSITORY                                                                    TAGS          UPDATED
sample-spring-kotlin   image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin   1.4.0,1.3.0   17 seconds ago
After that, we may switch to the configuration repository. We are going to change the version of the target image. We will also increase the number of running pods as shown below.
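In practice, the commit to the configuration repository changes just two lines of the Deployment manifest (an illustrative diff):

```diff
-  replicas: 1
+  replicas: 2
 ...
-          image: image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin:1.3.0
+          image: image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin:1.4.0
```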
Argo CD automatically detects changes in the GitHub repository and sets the application status to OutOfSync. It highlights the objects that have been changed by the last commit.
Now, the only thing we need to do is to click the SYNC button. After that, Argo CD creates a new revision with the latest image and runs two application pods instead of a single one.
Final Thoughts
Tekton and Argo CD may be used together to successfully design and run CI/CD processes on Kubernetes. Argo CD watches the manifests stored in a Git repository and manages the creation, update, and deletion of the corresponding objects on the cluster. Tekton handles the continuous integration part of the lifecycle, from running tests to building images. You can easily run and manage both tools on OpenShift. If you want to compare the approach described here with Jenkins, you may read my article Continuous Integration with Jenkins on Kubernetes.