Manage Multiple Kubernetes Clusters with ArgoCD

In this article, you will learn how to deploy the same app across multiple Kubernetes clusters with ArgoCD. In order to easily test the solution, we will run several virtual Kubernetes clusters on a single management cluster with the vcluster tool. Since this is the first article where I’m using vcluster, I’m going to do a quick introduction in the next section. As usual, we will use Helm for installing the required components and creating an app template. I will also show you how we can leverage Kyverno in this scenario. But first things first – let’s discuss the architecture for this article.

Introduction

If I want to easily test a scenario with multiple Kubernetes clusters, I usually use kind for that. You can find examples in some of my previous articles. For example, here is the article about Cilium cluster mesh. Or another one about mirroring traffic between multiple clusters with Istio. This time I’m going to try a slightly different solution – vcluster. It allows us to run virtual Kubernetes clusters inside the namespaces of another cluster. Each virtual cluster has a separate API server and a separate data store. We can interact with them just like with “real” clusters through the Kube context on the local machine. The vcluster and all of its workloads are hosted in a single namespace of the underlying host cluster. Once we delete that namespace, we remove the whole virtual cluster together with all of its workloads.

How can vcluster help in our exercise? First of all, it creates all its resources on the “hosting” Kubernetes cluster. There is a dedicated namespace that contains a Secret with a certificate and private key. Based on that Secret we can automatically add a newly created cluster to the clusters managed by Argo CD. I’ll show you how we can leverage a Kyverno ClusterPolicy for that. The policy will trigger on Secret creation in the virtual cluster namespace and then generate a new Secret in the Argo CD namespace containing the cluster details.

Here is the diagram that illustrates our architecture. ArgoCD manages multiple Kubernetes clusters and deploys the app across those clusters using the ApplicationSet object. Once a new cluster is created, it is automatically included in the list of clusters managed by Argo CD. This is possible thanks to a Kyverno policy that generates a new Secret with the argocd.argoproj.io/secret-type: cluster label in the argocd namespace.

[Diagram: architecture – ArgoCD managing multiple virtual Kubernetes clusters]

Prerequisites

Of course, you need to have a Kubernetes cluster. In this exercise, I’m using Kubernetes on Docker Desktop, but you can just as well use any other local distribution like minikube or a cloud-hosted instance. No matter which distribution you choose, you also need to have:

  1. Helm CLI – used to install Argo CD, Kyverno, and vcluster on the “hosting” Kubernetes cluster
  2. vcluster CLI – used to interact with virtual Kubernetes clusters. We can use it to create a virtual cluster, although we can also do that directly with the Helm chart. The vcluster CLI installation instructions are available here (an example is shown below this list).
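
For reference, here is one way to install the vcluster CLI (a minimal sketch for a Linux amd64 machine, downloading the binary from the vcluster GitHub releases page; pick the asset matching your platform):

$ curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
$ chmod +x vcluster
$ sudo mv vcluster /usr/local/bin/vcluster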

Running Virtual Clusters on Kubernetes

Let’s create our first virtual cluster on Kubernetes. We can use the vcluster create command for that. Additionally, we need to sign the cluster certificate for the internal DNS name consisting of the name of the Service and the target namespace. Assuming that the name of the cluster is vc1, the default namespace name is vcluster-vc1. Therefore, the API server certificate should be signed for the vc1.vcluster-vc1 domain. Here is the appropriate values.yaml file that overrides the default chart properties.

syncer:
  extraArgs:
  - --tls-san=vc1.vcluster-vc1

Then, we can install the first virtual cluster in the vcluster-vc1 namespace. By default, vcluster uses the k3s distribution (to decrease resource consumption), so we will switch to vanilla k8s using the distro parameter:

$ vcluster create vc1 --upgrade --connect=false \
  --distro k8s \
  -f values.yaml 

We need to create another two virtual clusters with the names vc2 and vc3. So you should repeat the same steps, using a values.yaml file with the --tls-san entry adjusted for each of them and the dedicated vcluster create command. After completing the required steps we can display a list of running virtual clusters.
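
Here is a minimal sketch of those repeated steps (the values-vc2.yaml and values-vc3.yaml file names are my assumption; they should set --tls-san=vc2.vcluster-vc2 and --tls-san=vc3.vcluster-vc3 respectively), followed by the vcluster list command that produces the list shown below:

$ vcluster create vc2 --upgrade --connect=false \
  --distro k8s \
  -f values-vc2.yaml
$ vcluster create vc3 --upgrade --connect=false \
  --distro k8s \
  -f values-vc3.yaml
$ vcluster list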

[Screenshot: vcluster list output showing the running virtual clusters]

Each cluster has a dedicated namespace that contains all the pods required for the k8s distribution.

$ kubectl get pod -n vcluster-vc1
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-586cbcd49f-pkn5q-x-kube-system-x-vc1   1/1     Running   0          20m
vc1-7985c794d6-7pqln                           1/1     Running   0          21m
vc1-api-6564bf7bbf-lqqxv                       1/1     Running   0          39s
vc1-controller-9f98c7f9c-87tqb                 1/1     Running   0          23s
vc1-etcd-0                                     1/1     Running   0          21m

Now, we can switch to the newly created Kube context using the vcluster connect command. Under the hood, vcluster creates a Kube context with the vcluster_vc1_vcluster-vc1_docker-desktop name and exposes the API outside of the cluster using a NodePort Service.
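
To connect to the vc1 cluster and verify the active context, we can run:

$ vcluster connect vc1
$ kubectl config current-context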

For example, we can display a list of namespaces. As you can see, it is different from the list on the “hosting” cluster.

$ kubectl get ns   
NAME              STATUS   AGE
default           Active   25m
kube-node-lease   Active   25m
kube-public       Active   25m
kube-system       Active   25m

In order to switch back to the “hosting” cluster just run the following command:

$ vcluster disconnect

Installing Argo CD on Kubernetes

In the next step, we will install Argo CD on Kubernetes. To do that, we will use an official Argo CD Helm chart. First, let’s add the following Helm repo:

$ helm repo add argo https://argoproj.github.io/argo-helm

Then we can install the latest version of Argo CD in the selected namespace. For me, it is the argocd namespace.

$ helm install argocd argo/argo-cd -n argocd --create-namespace

After a while, Argo CD should be installed. We will then use the UI dashboard to interact with Argo CD. Therefore, let’s expose it outside the cluster using the port-forward command for the argocd-server Service. After that, we can access the dashboard on local port 8080:

$ kubectl port-forward svc/argocd-server 8080:80 -n argocd

The default username is admin. The ArgoCD Helm chart generates the password automatically during the installation. You will find it inside the argocd-initial-admin-secret Secret.

$ kubectl get secret argocd-initial-admin-secret \
  --template={{.data.password}} \
  -n argocd | base64 -D

Automatically Adding Argo CD Clusters with Kyverno

The main goal here is to automatically add a newly created virtual Kubernetes cluster to the clusters managed by Argo CD. Argo CD stores the details about each managed cluster inside a Kubernetes Secret labeled with argocd.argoproj.io/secret-type: cluster. On the other hand, vcluster stores cluster credentials in a Secret inside the namespace dedicated to the particular cluster. The name of that Secret is the name of the cluster prefixed with vc-. For example, the Secret name for the vc1 cluster is vc-vc1.
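
You can inspect that source Secret to see the fields referenced later in the policy (certificate-authority, client-certificate, and client-key):

$ kubectl get secret vc-vc1 -n vcluster-vc1 -o yaml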

There are probably several ways to achieve the goal described above. However, for me, the simplest way is through a Kyverno ClusterPolicy. Kyverno is able not only to validate resources, but it can also generate additional resources when a resource is created or updated. Before we start, we need to install Kyverno on Kubernetes. As usual, we will use the Helm chart for that. First, let’s add the required Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install it for example in the kyverno namespace with the following command:

$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace

That’s all – we may create our Kyverno policy. Let’s discuss the ClusterPolicy fields step by step. By default, the policy will not be applied to existing resources when it is installed. To change this behavior, we need to set the generateExistingOnPolicyUpdate parameter to true (1). Now it will also apply to existing resources (our virtual clusters are already running). The policy triggers for any existing or newly created Secret with a name starting with vc- (2). It sets several variables using the context field (3).

The policy has access to the source Secret fields, so it is able to get the API server CA (4), the client certificate (5), and the private key (6). Finally, it generates a new Secret with the same name as the cluster (8). We can derive the name of the cluster from the namespace of the source Secret (7). The generated Secret should contain the label argocd.argoproj.io/secret-type: cluster (10) and should be placed in the argocd namespace (9). We fill all the required fields of the Secret using variables (11). ArgoCD can access the vcluster internally using the Kubernetes Service with the same name as the vcluster (12).

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret
spec:
  generateExistingOnPolicyUpdate: true # (1)
  rules:
  - name: sync-secret
    match:
      any:
      - resources: # (2)
          names:
          - "vc-*"
          kinds:
          - Secret
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    context: # (3)
    - name: namespace
      variable:
        value: "{{ request.object.metadata.namespace }}"
    - name: name
      variable:
        value: "{{ request.object.metadata.name }}"
    - name: ca # (4)
      variable: 
        value: "{{ request.object.data.\"certificate-authority\" }}"
    - name: cert # (5)
      variable: 
        value: "{{ request.object.data.\"client-certificate\" }}"
    - name: key # (6)
      variable: 
        value: "{{ request.object.data.\"client-key\" }}"
    - name: vclusterName # (7)
      variable:
        value: "{{ replace_all(namespace, 'vcluster-', '') }}"
        jmesPath: 'to_string(@)'
    generate:
      kind: Secret
      apiVersion: v1
      name: "{{ vclusterName }}" # (8)
      namespace: argocd # (9)
      synchronize: true
      data:
        kind: Secret
        metadata:
          labels:
            argocd.argoproj.io/secret-type: cluster # (10)
        stringData: # (11)
          name: "{{ vclusterName }}"
          server: "https://{{ vclusterName }}.{{ namespace }}:443" # (12)
          config: |
            {
              "tlsClientConfig": {
                "insecure": false,
                "caData": "{{ ca }}",
                "certData": "{{ cert }}",
                "keyData": "{{ key }}"
              }
            }
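
Save the policy to a file and apply it with kubectl (assuming the file is named sync-secret-policy.yaml):

$ kubectl apply -f sync-secret-policy.yaml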

Once you have created the policy, you can display its status with the following command:

$ kubectl get clusterpolicy
NAME          BACKGROUND   VALIDATE ACTION   READY
sync-secret   true         audit             true

Finally, you should see three Secrets, one per virtual cluster, inside the argocd namespace.
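
You can list them using the label selector recognized by Argo CD:

$ kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster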

Deploy the App Across Multiple Kubernetes Clusters with ArgoCD

We can easily deploy the same app across multiple Kubernetes clusters with the ArgoCD ApplicationSet object. The ApplicationSet controller is automatically installed by the ArgoCD Helm chart, so we don’t have to do anything additional to use it. ApplicationSet does a very simple thing: based on the defined criteria, it generates several ArgoCD Applications. There are several types of criteria (generators) available. One of them is the list of Kubernetes clusters managed by ArgoCD.

In order to create an Application per managed cluster, we need to use the “Cluster Generator”. The ApplicationSet visible below automatically uses all clusters managed by ArgoCD (1). It provides several parameter values to the Application template. We can use them to generate a unique name (2) or set the target cluster server address (4). In this exercise, we will deploy a simple Spring Boot app that exposes some endpoints over HTTP. The configuration is stored in the following GitHub repo inside the apps/simple path (3). The target namespace name is demo (5). The app is synchronized automatically with the configuration stored in the Git repo (6).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-spring-boot
  namespace: argocd
spec:
  generators:
  - clusters: {} # (1)
  template:
    metadata:
      name: '{{name}}-sample-spring-boot' # (2)
    spec:
      project: default
      source: # (3)
        repoURL: https://github.com/piomin/openshift-cluster-config.git
        targetRevision: HEAD
        path: apps/simple
      destination:
        server: '{{server}}' # (4)
        namespace: demo # (5)
      syncPolicy: # (6)
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
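
As before, save the manifest to a file and apply it (assuming the file is named applicationset.yaml):

$ kubectl apply -f applicationset.yaml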

Let’s switch to the ArgoCD dashboard. We have four clusters managed by ArgoCD: three virtual clusters and the single “real” cluster named in-cluster.

[Screenshot: clusters managed by ArgoCD]

Therefore, you should have four ArgoCD applications generated and automatically synchronized. It means that our Spring Boot app is currently running on all the clusters.

[Screenshot: ArgoCD applications generated for each cluster]

Let’s connect with the vc1 virtual cluster:

$ vcluster connect vc1

We can display a list of running pods inside the demo namespace. Of course, you can repeat the same steps for the other two virtual clusters.
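
For example:

$ kubectl get pod -n demo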

We can access the app over HTTP through the Kubernetes Service just by running the following command:

$ kubectl port-forward svc/sample-spring-kotlin 8080:8080 -n demo

The app exposes Swagger UI with the list of available endpoints. You can access it under the /swagger-ui.html path.
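
To quickly verify the app responds, you can fetch the OpenAPI definition (assuming a standard springdoc configuration in the sample app; the exact path may differ):

$ curl http://localhost:8080/v3/api-docs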

Final Thoughts

In this article, I focused on simplifying deployment across multiple Kubernetes clusters as much as possible. We deployed our sample app across all running clusters using a single ApplicationSet object. We were able to automatically add managed clusters with a Kyverno policy. Finally, we performed the whole exercise using a single “real” cluster, which hosted several virtual Kubernetes clusters with the vcluster tool. There is also a very interesting solution dedicated to a similar challenge based on OpenShift GitOps and Advanced Cluster Management for Kubernetes. You can read more about it in my previous article.
