Testing GitOps on Virtual Kubernetes Clusters with ArgoCD
In this article, you will learn how to test and verify the GitOps configuration managed by ArgoCD on virtual Kubernetes clusters. Assuming that we fully manage the cluster the GitOps way, it is crucial to verify each change in the Git repository before applying it to the target cluster. In order to test a change, we need to provision a new Kubernetes cluster on demand. Fortunately, we can take advantage of virtual clusters using the Loft vcluster solution. In this approach, we just “simulate” another cluster on top of the existing Kubernetes cluster. Once we are done with the tests, we can remove it. I have already introduced vcluster in one of my previous articles about multicluster management with ArgoCD, available here.
Source Code
If you would like to try it yourself, you can always take a look at my source code. In order to do that, clone my GitHub repository. Then, just follow my instructions.
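Assuming the repository in question is the configuration repository referenced later in the manifests, cloning it looks like this:
$ git clone https://github.com/piomin/kubernetes-config-argocd.git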
Architecture
There are several branches in the Git configuration repository, but ArgoCD automatically applies the configuration only from the master branch. Before merging other branches, we want to test the current state of the repository on a newly created cluster. Thanks to that, we can be sure that we won’t break anything on the main cluster. Moreover, we verify that it is possible to apply the current configuration to any new, real cluster.
The virtual cluster creation process is handled by Loft. Once we create a new virtual Kubernetes cluster, Loft adds it as a managed cluster to ArgoCD (thanks to the integration between Loft and ArgoCD). The name of the virtual cluster should be the same as the name of the tested branch in the configuration repository. When ArgoCD detects a new cluster, it automatically creates the Application to manage it. This is possible thanks to the ApplicationSet and its cluster generator. Then the Application automatically synchronizes the Git repository from the selected branch into the target vcluster. Here’s the diagram that illustrates our architecture.
Install ArgoCD on Kubernetes
In the first step, we are going to install ArgoCD on the management Kubernetes cluster. We can do it using the latest version of the argo-cd Helm chart. Assuming you have the Helm CLI installed on your laptop, add the argo-helm repository with the following command:
$ helm repo add argo https://argoproj.github.io/argo-helm
Then you can install ArgoCD in the argocd namespace by executing the following command:
$ helm install argocd argo/argo-cd -n argocd --create-namespace
In order to access the ArgoCD UI outside Kubernetes, we can configure an Ingress object or enable port forwarding as shown below:
$ kubectl port-forward service/argocd-server -n argocd 8080:443
Now, the UI is available at https://localhost:8080. You also need to obtain the admin password:
$ kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d
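If you also use the argocd CLI, you can log in with the same credentials. For example (the --insecure flag skips certificate verification, which is needed here because the port-forwarded endpoint serves a self-signed certificate):
$ argocd login localhost:8080 --username admin --insecure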
Install Loft vcluster on Kubernetes
In the next step, we are going to install Loft vcluster on Kubernetes. If you just want to create and manage virtual clusters, it is enough to install the vcluster CLI on your laptop. Here’s the documentation with the installation instructions. With the loft CLI, we can additionally install the web UI on Kubernetes and take advantage of, e.g., the built-in integration with ArgoCD. Here are the installation instructions. Once you install the CLI on your laptop, you can use the loft command to install Loft on your Kubernetes cluster:
$ loft start
Here’s the result screen after the installation:
If your installation finished successfully, you should have two pods running in the loft namespace:
$ kubectl get po -n loft
NAME                        READY   STATUS    RESTARTS   AGE
loft-77875f8946-xp8v2       1/1     Running   0          5h27m
loft-agent-58c96f88-z6bzw   1/1     Running   0          5h25m
The Loft UI is available at https://localhost:9898. We can log in there and create our first project:
For that project, we have to enable the integration with our instance of ArgoCD. Go to “Project Settings”, and then select the “Argo CD” menu item. Then click “Enable Argo CD Integration”. Our instance of ArgoCD is running in the argocd namespace. Don’t forget to save the changes.
As you can see, we also need to configure the loftHost parameter in the Admin > Config section. It should point to the internal address of the loft Kubernetes Service from the loft namespace.
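For reference, with a default installation that fragment of the Admin > Config YAML might look more or less like this (a sketch assuming the Service is named loft and lives in the loft namespace; adjust it to your environment):
# Admin > Config (fragment)
loftHost: loft.loft.svc.cluster.local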
Creating Configuration for ArgoCD with Helm
In our exercise today, we will install cert-manager on Kubernetes and then use its CRDs to create issuers and certificates. In order to be able to run it on any cluster, we will create simple, parametrized templates with Helm. Here’s the structure of our configuration repository:
├── apps
│ └── cert-manager
│ ├── certificates.yaml
│ └── cluster-issuer.yaml
└── bootstrap
├── Chart.yaml
├── templates
│ ├── cert-manager.yaml
│ └── projects.yaml
└── values.yaml
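The bootstrap directory is a regular Helm chart. Its Chart.yaml is not shown in this article, but a minimal version might look like this (a sketch with an assumed chart name and version):
apiVersion: v2
name: bootstrap
description: Bootstrap applications applied to each cluster
version: 0.1.0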
Let’s analyze the content stored in this configuration repository. Since we create a virtual cluster per repository branch, each branch uses a dedicated ArgoCD Project.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: {{ .Values.project }}
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - name: '*'
      namespace: '*'
      server: '*'
  sourceRepos:
    - '*'
For the next steps, we rely on the ArgoCD “app of apps” pattern. Our first Application installs cert-manager using the official Helm chart (1). The destination Kubernetes cluster (2) and the ArgoCD project (3) are set using Helm parameters.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
  source:
    repoURL: 'https://charts.jetstack.io'
    targetRevision: v1.12.2
    chart: cert-manager # (1)
  destination:
    namespace: cert-manager
    server: {{ .Values.server }} # (2)
  project: {{ .Values.project }} # (3)
Our second ArgoCD Application is responsible for applying the cert-manager CRD objects from the apps directory. This time we parametrize not only the target cluster and the ArgoCD Project, but also the source branch in the configuration repository (1).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: certs-config
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "3"
spec:
  destination:
    namespace: certs
    server: {{ .Values.server }}
  project: {{ .Values.project }}
  source:
    path: apps/cert-manager
    repoURL: https://github.com/piomin/kubernetes-config-argocd.git
    targetRevision: {{ .Values.project }} # (1)
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
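Both templates read .Values.server and .Values.project, which will be overridden per cluster by the ApplicationSet described later. The bootstrap/values.yaml file therefore only needs to hold defaults, for example (a sketch; the actual defaults in the repository may differ):
# bootstrap/values.yaml
server: https://kubernetes.default.svc
project: default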
In the apps directory, we store the cert-manager CRD objects. Here’s the ClusterIssuer object:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ss-clusterissuer
spec:
  selfSigned: {}
Here’s the CRD object responsible for generating a certificate. It refers to the previously shown ClusterIssuer object.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: secure-caller-cert
spec:
  keystores:
    jks:
      passwordSecretRef:
        name: jks-password-secret
        key: password
      create: true
  issuerRef:
    name: ss-clusterissuer
    group: cert-manager.io
    kind: ClusterIssuer
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
    - localhost
    - secure-caller
  secretName: secure-caller-cert
  commonName: localhost
  duration: 1h
  renewBefore: 5m
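Once this configuration is synchronized to a cluster, you can verify that cert-manager has issued the certificate with standard kubectl commands, for example (assuming the certs namespace set as the Application destination):
$ kubectl get certificate,secret -n certs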
Create ArgoCD ApplicationSet for Virtual Clusters
Assuming we have already prepared the configuration in the Git repository, we may proceed to the ArgoCD settings. We will create an ArgoCD ApplicationSet with the cluster generator. It loads all the remote clusters managed by ArgoCD and creates a corresponding Application for each of them (1). It uses the name of the vcluster to generate the name of the ArgoCD Application (2). The name of the virtual cluster is generated by Loft during the creation process: Loft automatically adds some prefixes to the name we set when creating a new virtual Kubernetes cluster in the Loft UI. The proper name, without any prefixes, is stored in the loft.sh/vcluster-instance-name label of the ArgoCD cluster Secret. We will use the value of that label to set the name of the source branch (3) and the Helm parameters passed to the “app of apps” chart.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: config
  namespace: argocd
spec:
  generators:
    - clusters: # (1)
        selector:
          matchLabels:
            argocd.argoproj.io/secret-type: cluster
  template:
    metadata:
      name: '{{name}}-config-test' # (2)
    spec:
      destination:
        namespace: argocd
        server: https://kubernetes.default.svc
      project: default
      source:
        helm:
          parameters:
            - name: server
              value: '{{server}}'
            - name: project
              value: '{{metadata.labels.loft.sh/vcluster-instance-name}}'
        path: bootstrap
        repoURL: https://github.com/piomin/kubernetes-config-argocd.git
        # (3)
        targetRevision: '{{metadata.labels.loft.sh/vcluster-instance-name}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
Let’s assume we are testing the initial version of our configuration in the abc branch before merging it into the master branch. In order to do that, we need to create a virtual cluster with the same name as the branch – abc.
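Besides the Loft UI, the loft CLI can also create virtual clusters. The exact command depends on your Loft version and project setup; with a logged-in CLI it might look roughly like this (a sketch, not a verified command line):
$ loft create vcluster abc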
After creating a cluster, we need to enable integration with ArgoCD.
Then Loft adds the new cluster to the ArgoCD instance configured in our project settings.
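You can confirm that the cluster has been registered by listing the ArgoCD cluster Secrets on the management cluster and checking their labels, for example:
$ kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster --show-labels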
The ArgoCD ApplicationSet detects the new managed cluster and creates a dedicated app for managing it. This app automatically synchronizes the configuration from the abc branch. As a result, there is also the Application responsible for installing cert-manager using the Helm chart and another one for applying the cert-manager CRD objects. As you can see, it doesn’t work properly…
Let’s see what happened. First of all, we didn’t install the CRDs together with cert-manager.
There is also a problem with the ephemeral storage used by cert-manager. With this configuration, a single pod deployed by cert-manager consumes the whole ephemeral storage available on the node.
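If you hit a similar issue, the evictions should be visible in the events of the cert-manager namespace on the affected cluster, for example:
$ kubectl get events -n cert-manager --sort-by=.lastTimestamp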
We will fix those problems in the configuration of the cert-manager Helm chart. In order to install the CRDs, we need to set the installCRDs Helm parameter to true. Since I’m running cert-manager locally, I also have to set limits on the ephemeral storage usage for several components. Here’s the final configuration. You will find this version in the b branch of the Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
  destination:
    namespace: cert-manager
    server: {{ .Values.server }}
  project: {{ .Values.project }}
  source:
    repoURL: 'https://charts.jetstack.io'
    targetRevision: v1.12.2
    chart: cert-manager
    helm:
      parameters:
        - name: installCRDs
          value: 'true'
        - name: webhook.limits.ephemeral-storage
          value: '500Mi'
        - name: cainjector.enabled
          value: 'false'
        - name: startupapicheck.limits.ephemeral-storage
          value: '500Mi'
        - name: resources.limits.ephemeral-storage
          value: '500Mi'
So, now let’s create another virtual Kubernetes cluster named b.
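Once the new cluster is registered, you can quickly check which Applications have been generated for it:
$ kubectl get applications -n argocd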
There were also some other minor problems. However, I tested and fixed them in the b branch, getting immediate verification as ArgoCD synchronized the Git branch to the target virtual Kubernetes cluster.
Finally, I achieved the desired state on the virtual Kubernetes cluster. Now it is safe to merge the changes into the master branch and apply them to the main cluster 🙂
Here we go 🙂
Final Thoughts
If you use the GitOps approach to manage your whole Kubernetes cluster, testing updated configuration becomes very important. With Kubernetes virtual clusters, we can simplify the process of testing the configuration managed by ArgoCD. Loft provides a built-in integration with ArgoCD. We can easily define multiple projects and multiple ArgoCD instances for managing different aspects of the Kubernetes cluster.