Validate Kubernetes Deployment in CI/CD with Tekton and Datree
In this article, you will learn how to use Datree to validate Kubernetes manifests in a CI/CD process built with Tekton. First, we will create a simple Java application with Quarkus. Then we will build a pipeline with a step that runs the datree CLI command. This command interacts with our account on app.datree.io, which performs the validation against the policies and rules defined there. In this article, I’m describing just a specific part of the CI/CD process. To read more about the whole CI/CD process on Kubernetes, see my article about Tekton and Argo CD.
Introduction
Our pipeline consists of three steps. In the first step, it clones the source code from a Git repository. Then it builds the application using Maven. Thanks to the Quarkus Kubernetes extension, we don’t need to create YAML manifests ourselves. Quarkus generates them during the build based on the source code and configuration properties. It can also build and publish images in the same step (that’s just an option, disabled by default). This feature may not be suitable for production, but it simplifies our pipeline here. Once we have generated the Kubernetes manifest, we may proceed to the next step – validation with Datree. Here’s a picture that illustrates our process.
Now, some words about Datree. We can easily install it locally. After that, we may validate one or several YAML manifests using the datree CLI. Of course, the main goal is to include Datree’s policy check as part of our CI/CD pipeline. Thanks to that, we hope to prevent Kubernetes misconfigurations from reaching production. Let’s begin!
Source Code
If you would like to try it by yourself, you can always take a look at my source code. In order to do that, clone my GitHub repository and go to the person-service directory. After that, just follow my instructions 🙂
Using Datree with Kubernetes Manifests
As a quick intro to Datree, we will first try it locally. You need to install the datree CLI. I installed it on macOS with Homebrew.
$ brew tap datreeio/datree
$ brew install datreeio/datree/datree
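If you don’t use Homebrew, Datree also provides an installation script. This is the one-liner documented on datree.io at the time of writing (verify it yourself before piping a script into your shell):
$ curl https://get.datree.io | /bin/bash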
You also need JDK and Maven on your machine to build the application from the source code. Now, go to the person-service directory and build the application with the following command:
$ mvn clean package
Since our application includes the quarkus-kubernetes Maven module, we don’t need to create a YAML manifest manually. Quarkus generates it during the build. You can find the generated kubernetes.yml file in the target/kubernetes directory. Now, let’s verify it using the datree test command as shown below.
$ datree test target/kubernetes/kubernetes.yml
Here’s the result. It doesn’t look very good for our current configuration 🙂 What’s important for now is the link to your Datree account at the bottom of the output. Just click it and sign up. You will see the list of your validation policies.
By default, there are 34 rules available in the Default policy. Not all of them are active. For example, we may enable the following rule, which verifies that the owner label exists.
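For example, a workload passes this rule as soon as its metadata contains the owner label. Here’s a minimal fragment (the label value is just an example):
metadata:
  labels:
    owner: piotr.minkowski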
Moreover, we may create our own custom rules. To do that, we first need to navigate to the account settings and enable the Policy as Code option. Then we download the policy file.
Let’s say we would like to enable Prometheus metrics for our Deployment on Kubernetes. To enable scraping, we should add the following annotations to the Deployment manifest:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /q/metrics
    prometheus.io/port: "8080"
Now, let’s create a rule that verifies that those annotations have been added to the Deployment object. In order to do that, we need to add the following custom rule to the policies.yaml file. I think it is quite intuitive and doesn’t require a detailed explanation. If you are interested in more details, please refer to the documentation.
customRules:
  - identifier: ENABLE_PROMETHEUS_LABELS
    name: Ensure the Prometheus labels are set
    defaultMessageOnFailure: Prometheus scraping not enabled!
    schema:
      properties:
        metadata:
          properties:
            annotations:
              properties:
                prometheus.io/scrape:
                  enum:
                    - "true"
              required:
                - prometheus.io/scrape
                - prometheus.io/path
                - prometheus.io/port
          required:
            - annotations
Finally, we need to add our custom rule to the Default policy.
policies:
  - name: Default
    isDefault: true
    rules:
      - identifier: ENABLE_PROMETHEUS_LABELS
        messageOnFailure: Prometheus scraping not enabled!
      - ...
The modified policy should be published to our Datree account using the following command:
$ datree publish policies.yaml
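To verify that the custom rule is active, run the test against the generated manifest once again – the new Prometheus check should now appear in the results:
$ datree test target/kubernetes/kubernetes.yml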
Create Datree Tekton Task
There are three tasks used in our example Tekton pipeline. The first two of them, git-clone and maven, are standard tasks that we can easily get from the Tekton Hub. You won’t find a task dedicated to Datree in the hub, but we can easily create one ourselves. I’ll use a very minimalistic version of that task, with only the options required in our case. If you need a more advanced definition, you can find an example here.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: datree
spec:
  description: >-
    The Datree (datree.io) task
  workspaces:
    - name: source
  params:
    - name: yamlSrc
      description: Path for the yaml files relative to the workspace path
      type: string
      default: "./*.yaml"
    - name: policy
      description: Specify which policy to execute
      type: string
      default: "Default"
    - name: token
      description: The Datree token to the account on datree.io
      type: string
  steps:
    - name: datree-test
      image: datree/datree
      workingDir: $(workspaces.source.path)
      env:
        - name: DATREE_TOKEN
          value: $(params.token)
        - name: WORKSPACE_PATH
          value: $(workspaces.source.path)
        - name: PARAM_YAMLSRC
          value: $(params.yamlSrc)
        - name: PARAM_POLICY
          value: $(params.policy)
      script: |
        #!/usr/bin/env sh
        POLICY_FLAG=""
        if [ "${PARAM_POLICY}" != "" ] ; then
          POLICY_FLAG="--policy $PARAM_POLICY"
        fi
        /datree test $WORKSPACE_PATH/$PARAM_YAMLSRC $POLICY_FLAG
Now, we can apply the task to the Kubernetes cluster. You can find the file with the task definition in the repository under .tekton/datree-task.yaml. First, let’s create a test namespace for our example.
$ kubectl create ns datree
$ kubectl apply -f .tekton/datree-task.yaml -n datree
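The pipeline defined in the next section also uses the standard git-clone and maven tasks mentioned earlier. Assuming you have a recent tkn CLI available, you can install them from the Tekton Hub and list all three tasks:
$ tkn hub install task git-clone -n datree
$ tkn hub install task maven -n datree
$ tkn task list -n datree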
Build and Run Tekton Pipeline with Datree
Now, we have all tasks required to compose our pipeline.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: datree-pipeline
  namespace: datree
spec:
  tasks:
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/piomin/sample-quarkus-applications.git'
        - name: userHome
          value: /tekton/home
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
        - name: output
          workspace: source-dir
    - name: maven
      params:
        - name: GOALS
          value:
            - clean
            - package
        - name: CONTEXT_DIR
          value: /person-service
      runAfter:
        - git-clone
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: source
          workspace: source-dir
        - name: maven-settings
          workspace: maven-settings
    - name: datree
      params:
        - name: yamlSrc
          value: /person-service/target/kubernetes/kubernetes.yml
        - name: policy
          value: Default
        - name: token
          value: ${YOUR_TOKEN} # put your Datree token here
      runAfter:
        - maven
      taskRef:
        kind: Task
        name: datree
      workspaces:
        - name: source
          workspace: source-dir
  workspaces:
    - name: source-dir
    - name: maven-settings
Before running the pipeline, you need to obtain the token for your Datree account. Once again, go to the settings and copy the token.
Now, you need to set it as the pipeline parameter token. We also need to pass the location of the Kubernetes manifest inside the workspace. As I mentioned before, it is automatically generated by Quarkus under the path target/kubernetes/kubernetes.yml. In order to run the Tekton pipeline on Kubernetes, we should create a PipelineRun object.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: datree-pipeline-run-
  namespace: datree
  labels:
    tekton.dev/pipeline: datree-pipeline
spec:
  pipelineRef:
    name: datree-pipeline
  serviceAccountName: pipeline
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc
    - name: maven-settings
      configMap:
        name: maven-settings
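Note that the PipelineRun references a PersistentVolumeClaim named tekton-workspace-pvc and a ConfigMap named maven-settings, which are not defined elsewhere in this article. If they don’t exist in your cluster yet, you can create them similarly to the sketch below (the storage size is my assumption, and an empty settings.xml in the current directory is usually enough for the maven task):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-workspace-pvc
  namespace: datree
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ kubectl create configmap maven-settings -n datree --from-file=settings.xml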
That’s all. Let’s run our pipeline by creating the Tekton PipelineRun object. Since the manifest uses generateName, we create it instead of applying it.
$ kubectl create -f .tekton/pipeline-run.yaml -n datree
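You can follow the progress of the run with the Tekton CLI (assuming tkn is installed):
$ tkn pipelinerun logs --last -f -n datree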
Let’s see how it looks. The pipeline failed due to Kubernetes manifest validation errors detected by the Datree task. That’s exactly what we wanted to achieve. Now, let’s fix the reported errors so the pipeline run finishes successfully.
Customize Kubernetes Deployment with Quarkus
We will analyze the report generated by the Datree task issue by issue. But before we do that, let’s see the Deployment manifest generated by Quarkus during the build.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: 130b6957fa6b21fac87fe65a40f4dee75e0e39c3
    app.quarkus.io/build-timestamp: 2022-02-17 - 12:12:10 +0000
  labels:
    app.kubernetes.io/name: person-service
    app.kubernetes.io/version: "1.0"
  name: person-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: person-service
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      annotations:
        app.quarkus.io/commit-id: 130b6957fa6b21fac87fe65a40f4dee75e0e39c3
        app.quarkus.io/build-timestamp: 2022-02-17 - 12:12:10 +0000
      labels:
        app.kubernetes.io/name: person-service
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: person-db
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: person-db
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: person-db
          image: piomin/person-service:1.0
          imagePullPolicy: Always
          name: person-service
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
Let’s take a look at the report generated by the Datree task after it analyzed the manifest shown above.
(1) Ensure each container has a configured memory / CPU request – to fix that, we will add the following two properties to the application.properties file:
quarkus.kubernetes.resources.requests.memory=64Mi
quarkus.kubernetes.resources.requests.cpu=250m
(2) Ensure each container has a configured memory / CPU limit – that’s a very similar situation to point 1, but this time related to memory and CPU limits. We need to add the following properties to the application.properties file:
quarkus.kubernetes.resources.limits.memory=512Mi
quarkus.kubernetes.resources.limits.cpu=500m
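After rebuilding the application, the container spec in the generated kubernetes.yml should contain a resources fragment reflecting the properties above, similar to this sketch:
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 64Mi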
(3) Ensure each container has a configured liveness and readiness probe – we just need to add a single dependency to the Maven pom.xml to enable health checks with Quarkus:
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
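With this extension on the classpath, Quarkus exposes the health endpoints and the Kubernetes extension generates the corresponding probes (you can see them in the final manifest at the end of this article). Assuming the application runs locally on the default port, you can verify the endpoints with:
$ curl http://localhost:8080/q/health/live
$ curl http://localhost:8080/q/health/ready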
(4) Ensure Deployment has more than one replica configured – let’s set 2 replicas for our application:
quarkus.kubernetes.replicas=2
(5) Ensure workload has a configured owner label – we can also use the Quarkus Kubernetes module to add the label to the Deployment manifest:
quarkus.kubernetes.labels.owner=piotr.minkowski
(6) Ensure the Prometheus labels are set – finally, our custom rule that checks the Prometheus annotations. Let’s add the module that automatically exposes the Prometheus endpoint for the Quarkus application; with it on the classpath, Quarkus also adds the scraping annotations to the Deployment manifest:
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>
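After rebuilding, you can verify locally that the metrics endpoint is exposed under the same path as in the generated Prometheus annotations:
$ curl http://localhost:8080/q/metrics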
If you run the pipeline once again, it should finish successfully.
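Since the PipelineRun manifest uses generateName, you can simply create a new run from the same file:
$ kubectl create -f .tekton/pipeline-run.yaml -n datree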
Here’s the final version of the Kubernetes Deployment analyzed with Datree.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: 65b6ffded40ecac3297ae77c63c148920efedf0f
    app.quarkus.io/build-timestamp: 2022-02-18 - 07:57:34 +0000
    prometheus.io/scrape: "true"
    prometheus.io/path: /q/metrics
    prometheus.io/port: "8080"
    prometheus.io/scheme: http
  labels:
    owner: piotr.minkowski
    app.kubernetes.io/name: person-service
    app.kubernetes.io/version: "1.0"
  name: person-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: person-service
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      annotations:
        app.quarkus.io/commit-id: 65b6ffded40ecac3297ae77c63c148920efedf0f
        app.quarkus.io/build-timestamp: 2022-02-18 - 07:57:34 +0000
        prometheus.io/scrape: "true"
        prometheus.io/path: /q/metrics
        prometheus.io/port: "8080"
        prometheus.io/scheme: http
      labels:
        owner: piotr.minkowski
        app.kubernetes.io/name: person-service
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: person-db
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: person-db
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: person-db
          image: pminkows/person-service:1.0
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/live
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          name: person-service
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/ready
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 64Mi