Continuous Development on Kubernetes with GitOps Approach
In this article, you will learn how to design your app’s continuous development process on Kubernetes with the GitOps approach. In order to deliver the application to staging or production, we should use a standard CI/CD process and tools. This requires a separation between the source code and the configuration code, which may result in using dedicated tools for the build phase and for the deployment phase. It is a similar setup to the one described in the following article, where we use Tekton as a CI tool and Argo CD as a delivery tool. With that approach, each time you want to release a new version of the image, you commit its new tag to the repository with the configuration. Then the tool responsible for the CD process applies the change to the cluster and, consequently, performs a deployment of the new version.
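For example, under that model a release boils down to a single commit in the configuration repository that bumps the image tag; something along these lines (the path and versions are purely illustrative):

# apps/simple/deployment.yaml in the config repository (illustrative)
      containers:
        - image: piomin/sample-spring-kotlin:1.0.1   # was 1.0.0 before the release commit
          name: sample-spring-kotlin-microservice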
I’ll describe four possible approaches here.
If you would like to try this exercise yourself, you can take a look at my source code. In order to do that, clone my GitHub repository with a sample Spring Boot application. You can also access the following repository with the configuration for that app. Go to the apps/simple directory for a plain Deployment object example, and to the apps/helm directory for the Helm chart. After that, you should just follow my instructions. Let’s begin.
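For instance, a minimal way to fetch the configuration repository referenced throughout this article (the layout with apps/simple and apps/helm is assumed from the paths above):

$ git clone https://github.com/piomin/openshift-cluster-config.git
$ cd openshift-cluster-config/apps/simple   # or apps/helm for the Helm chart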
Approach 1: Use the same tag and make a rollout
The first approach is probably the simplest way to achieve our goal. However, it is not the best one 🙂 Let’s start with the Kubernetes Deployment. That fragment of YAML is a part of the configuration, so Argo CD manages it. There are two important things here. We use dev-latest as the image tag (1). It won’t change when we deploy a new version of the image. We also need to pull the latest version of the image each time we do a Deployment rollout. Therefore, we set imagePullPolicy to Always (2).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
        - image: piomin/sample-spring-kotlin:dev-latest # (1)
          name: sample-spring-kotlin-microservice
          ports:
            - containerPort: 8080
              name: http
          imagePullPolicy: Always # (2)
Let’s assume we have a pipeline, e.g. in GitLab CI, triggered by every push to the dev branch. We use Maven to build an image from the source code (using jib-maven-plugin) and then push it to the container registry. In the last step (reload-app) we restart the application in order to run the latest version of the image tagged with dev-latest. In order to do that, we execute the kubectl rollout restart deploy sample-spring-kotlin-microservice command.
image: maven:latest

stages:
  - compile
  - image-build
  - reload-app

build:
  stage: compile
  script:
    - mvn compile

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml compile jib:build

reload-app:
  image: bitnami/kubectl:latest
  stage: reload-app
  only:
    - dev
  script:
    - kubectl rollout restart deploy sample-spring-kotlin-microservice -n dev
We still have all the versions available in the registry, but just a single one is tagged as dev-latest. Of course, we can use any other image tagging convention, e.g. based on a timestamp or the git commit id, as sketched below.
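For example, a variant of the image-build job that tags the image with the short commit SHA could look roughly like this. It relies on GitLab’s predefined CI_COMMIT_SHORT_SHA variable and on the jib.to.image property of jib-maven-plugin to override the target image:

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml compile jib:build -Djib.to.image=piomin/sample-spring-kotlin:$CI_COMMIT_SHORT_SHA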
In this approach, we still use GitOps to manage the app configuration on Kubernetes. The CI pipeline pushes the latest version of the image to the registry and triggers a rollout restart on Kubernetes.
Approach 2: Commit the latest tag to the repository managed by the CD tool
Configuration
Let’s consider a slightly different approach than the previous one. Argo CD automatically synchronizes changes pushed to the config repository with the Kubernetes cluster. Once the pipeline pushes a changed version of the image tag, Argo CD performs a rollout. Here’s the Argo CD Application manifest.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-simple
  namespace: argocd
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/simple
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
  syncPolicy:
    automated: {}
Here’s the first version of our Deployment. As you can see, we are deploying the image with the 1.0.0 tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
        - image: piomin/sample-spring-kotlin:1.0.0
          name: sample-spring-kotlin-microservice
          ports:
            - containerPort: 8080
              name: http
Now, our pipeline should build a new image, override the image tag in the YAML manifest, and push the latest version to the Git repository. I won’t create a full pipeline, but I will show you step by step what should be done. Let’s begin with the tool. Our pipeline may use Skaffold to build, push and override image tags in YAML. Skaffold is a CLI tool that is very useful for simplifying development on Kubernetes. However, we can also use it for building CI/CD blocks or templating Kubernetes manifests for the GitOps approach. Here’s the Skaffold configuration file. It is very simple. As in the previous example, we use Jib for building and pushing the image. Skaffold supports multiple tag policies for tagging images. We may define e.g. a tagger that uses the current date and time.
apiVersion: skaffold/v2beta22
kind: Config
build:
  artifacts:
    - image: piomin/sample-spring-kotlin
      jib: {}
  tagPolicy:
    dateTime: {}
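If you prefer tags based on the Git commit id instead of a timestamp, Skaffold’s built-in gitCommit tagger can be swapped in; a minimal sketch of the changed fragment:

  tagPolicy:
    gitCommit: {}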
Skaffold in CI/CD
In the first step, we are going to build and push the image. Thanks to the --file-output parameter, Skaffold exports information about the build to a file.
$ skaffold build --file-output='/Users/pminkows/result.json' --push
The file with the result is located under the /Users/pminkows/result.json path. It contains the basic information about the build, including the image name and tag.
{"builds":[{"imageName":"piomin/sample-spring-kotlin","tag":"piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7"}]}
This image has also been pushed to the registry.
Now, let’s run the skaffold render command to override the image tag in the YAML manifest. Assuming we are running it in the next pipeline stage, we can just set the /Users/pminkows/result.json file as an input. The output apps/simple/deployment.yaml is a location inside the Git repository managed by Argo CD. We don’t want to include the namespace name, so we set the --offline=true parameter.
$ skaffold render -a /Users/pminkows/result.json \
-o apps/simple/deployment.yaml \
--offline=true
Here’s the final version of our YAML manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
        - image: piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7
          name: sample-spring-kotlin-microservice
          ports:
            - containerPort: 8080
              name: http
Finally, our pipeline needs to commit the latest version of our manifest to the Git repository. Argo CD will automatically deploy it to the Kubernetes cluster.
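Putting the pieces together, a rough sketch of such a pipeline job could look like the one below. This is an assumption about how the stages might be wired, not the article’s actual pipeline; CONFIG_REPO_TOKEN is a hypothetical CI/CD variable holding a token with write access to the configuration repository, and the Skaffold image is assumed to contain git.

update-config:
  stage: update-config
  image: gcr.io/k8s-skaffold/skaffold:latest
  script:
    # build and push the image, writing the resulting tag to a file
    - skaffold build --file-output=result.json --push
    # clone the config repository managed by Argo CD
    - git clone https://$CONFIG_REPO_TOKEN@github.com/piomin/openshift-cluster-config.git
    # render the manifest with the new image tag directly into the config repo
    - skaffold render -a result.json -o openshift-cluster-config/apps/simple/deployment.yaml --offline=true
    - cd openshift-cluster-config
    - git config user.email "ci@example.com" && git config user.name "ci-bot"
    - git add apps/simple/deployment.yaml
    - git commit -m "Bump sample-spring-kotlin image tag"
    - git push origin HEAD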
Approach 3: Detect the latest version of the image and update it automatically with Renovate
Concept
Firstly, let’s visualize the current approach. Our pipeline pushes the latest image to the container registry. Renovate continuously monitors the tags of the image in the container registry to detect a change. Once it detects that the image has been updated, it creates a pull request to the Git repository with the configuration. Then Argo CD detects the new image tag committed to the repository and synchronizes it with the Kubernetes cluster.
Configuration
Renovate is a very interesting tool. It continuously runs and detects the latest available versions of dependencies. These can be e.g. Maven dependencies. But in our case, we can use it for monitoring container images in a registry.
In the first step, we will prepare a configuration for Renovate. It accepts the JSON format. There are several things we need to set:
(1) platform – our repository is located on GitHub
(2) repository – the location of the configuration repository. Renovate monitors the whole repository and detects files matching the filtering criteria
(3) enabledManagers – we are going to monitor Kubernetes YAML manifests and Helm values files. For every single manager, we should set file filtering rules
(4) packageRules – our goal is to automatically update the configuration repository with the latest tag. Since Renovate creates a pull request after detecting a change, we would like to enable PR auto-merge on GitHub. Auto-merge should be performed only for patch updates (e.g. from 1.0.0 to 1.0.1) or minor updates (e.g. from 1.0.1 to 1.1.0). For other types of updates, the PR needs to be approved manually
(5) ignoreTests – we need to enable it to perform PR auto-merge. Otherwise, Renovate requires at least one passing test in the repository before it auto-merges a PR
{
  "platform": "github",
  "repositories": [
    {
      "repository": "piomin/openshift-cluster-config",
      "enabledManagers": ["kubernetes", "helm-values"],
      "kubernetes": {
        "fileMatch": ["\\.yaml$"]
      },
      "helm-values": {
        "fileMatch": ["(.*)values.yaml$"]
      },
      "packageRules": [
        {
          "matchUpdateTypes": ["minor", "patch"],
          "automerge": true
        }
      ],
      "ignoreTests": true
    }
  ]
}
In order to create a pull request, Renovate needs write access to the GitHub repository. Let’s create a Kubernetes Secret containing the GitHub access token.
apiVersion: v1
kind: Secret
metadata:
  name: renovate-secrets
  namespace: renovate
data:
  RENOVATE_TOKEN: <BASE64_TOKEN>
type: Opaque
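If you prefer not to keep a base64-encoded token in a manifest, an equivalent Secret can be created imperatively, for example (the token value is a placeholder):

$ kubectl create namespace renovate
$ kubectl create secret generic renovate-secrets -n renovate \
    --from-literal=RENOVATE_TOKEN=<GITHUB_TOKEN>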
Installation
Now we can install Renovate on Kubernetes. The best way for that is through the Helm chart. Let’s add the Helm repository:
$ helm repo add renovate https://docs.renovatebot.com/helm-charts
$ helm repo update
Then we may install it using the previously prepared config.json file. We also need to pass the name of the Secret containing the GitHub token and set the cron job scheduling interval. We will run the job responsible for detecting changes and creating PRs once per minute.
$ helm install --generate-name \
--set-file renovate.config=config.json \
--set cronjob.schedule='*/1 * * * *' \
--set existingSecret=renovate-secrets \
renovate/renovate -n renovate
After installation, you should see a CronJob in the renovate namespace:
$ kubectl get cj -n renovate
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
renovate-1653648026 */1 * * * * False 0 1m19s 2m19s
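If you don’t want to wait for the next scheduled run, a one-off run can be triggered from the CronJob (the CronJob name below is taken from the output above):

$ kubectl create job --from=cronjob/renovate-1653648026 renovate-manual -n renovate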
Use Case
Let’s consider the following Helm values.yaml. It needs to have a proper structure, i.e. the image.repository, image.tag and image.registry fields.
app:
  name: sample-kotlin-spring
  replicas: 1
  image:
    repository: 'pminkows/sample-kotlin-spring'
    tag: 1.4.20
    registry: quay.io
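This structure is what Renovate’s helm-values manager understands; the chart itself (not shown in this article) is assumed to compose these fields into a single image reference, roughly like this hypothetical fragment of its Deployment template:

      containers:
        - name: {{ .Values.app.name }}
          image: "{{ .Values.app.image.registry }}/{{ .Values.app.image.repository }}:{{ .Values.app.image.tag }}"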
Let’s push the image pminkows/sample-kotlin-spring with the tag 1.4.21 to the registry.
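For example, tagging and pushing it manually could look like this (the local image name is just an assumption):

$ docker tag sample-kotlin-spring:latest quay.io/pminkows/sample-kotlin-spring:1.4.21
$ docker push quay.io/pminkows/sample-kotlin-spring:1.4.21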
Once Renovate detected the new image tag in the container registry, it created a PR with auto-merge enabled.
Finally, the following Argo CD Application will apply the changes automatically to the Kubernetes cluster:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-helm
  namespace: argocd
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated: {}
Approach 4: Use Argo CD Image Updater
Finally, we may proceed to the last proposition in this article for implementing the development process on Kubernetes with GitOps. This option is available only for applications managed by Argo CD. Let me show you a tool called Argo CD Image Updater. The concept behind this tool is pretty similar to Renovate. It can check for new versions of the container images deployed on Kubernetes and automatically update them. You can read more about it here.
Argo CD Image Updater can work in two modes. Once it detects a new version of the image in the registry, it can update the image version in the Git repository (git) or directly inside the Argo CD Application (argocd). We will use the argocd mode, which is the default option. Firstly, let’s install Argo CD Image Updater on Kubernetes in the same namespace as Argo CD. We can use the Helm chart for that:
$ helm repo add argo https://argoproj.github.io/argo-helm
$ helm install argocd-image-updater argo/argocd-image-updater -n argocd
After that, the only thing we need to do is to annotate the Argo CD Application with argocd-image-updater.argoproj.io/image-list. The value of the annotation is the list of images to monitor. Assuming the same Argo CD Application as in the previous section, it looks as shown below:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-helm
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: quay.io/pminkows/sample-spring-kotlin
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated: {}
Once Argo CD Image Updater detects a new version of the quay.io/pminkows/sample-spring-kotlin image, it adds two parameters (or just updates the value of the image.tag parameter) to the Argo CD Application. In fact, it leverages the Argo CD feature that allows overriding the parameters of an Application. You can read more about that feature in the Argo CD documentation. After that, Argo CD will automatically deploy the image with the tag taken from the image.tag parameter.
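With the argocd write-back mode, the override ends up as Helm parameters on the Application. Here is a sketch of what the source section might look like after an update; the exact parameter names depend on the Image Updater annotations and may differ:

  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml
      parameters:
        - name: image.name
          value: quay.io/pminkows/sample-spring-kotlin
        - name: image.tag
          value: 1.4.21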
Final Thoughts
The main goal of this article is to show you how to design your app’s development process on Kubernetes in the era of GitOps. I assumed you use Argo CD for GitOps on Kubernetes, but in fact only the last described approach requires it. Our goal was to build a development pipeline that builds an image after a source code change and pushes it to the registry. Then, with the GitOps model, we run that image on Kubernetes in the development environment. I showed how you can use tools like Skaffold, Renovate, or Argo CD Image Updater to implement the required behavior.