Kubernetes Multicluster Load Balancing with Skupper

In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create the clusters locally with Kind and then connect them using Skupper.

Skupper cluster interconnection works at Layer 7 (the application layer). This means there is no need to create any VPNs or special firewall rules. Skupper follows the Virtual Application Network (VAN) approach, thanks to which it can connect different Kubernetes clusters and guarantee communication between services without exposing them to the Internet. You can read more about the concept behind it in the Skupper docs.

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time we will do almost everything with a command-line tool (the skupper CLI). The repository contains just a sample Spring Boot app with Kubernetes Deployment manifests and a Skaffold config. You will find instructions there on how to deploy the app with Skaffold, but you can just as well use another tool. As always, follow my instructions for the details 🙂

Create Kubernetes clusters with Kind

In the first step, we will create three Kubernetes clusters with Kind. We need to give them different names: c1, c2 and c3. Accordingly, they are available under the context names: kind-c1, kind-c2 and kind-c3.

$ kind create cluster --name c1
$ kind create cluster --name c2
$ kind create cluster --name c3

In this exercise, we will switch between the clusters a few times. Personally, I'm using kubectx to switch between different Kubernetes contexts and kubens to switch between namespaces.

By default, Skupper exposes itself as a Kubernetes LoadBalancer Service. Therefore, we need to enable the load balancer on Kind. In order to do that, we can install MetalLB. You can find the full installation instructions in the Kind docs here. Firstly, let’s switch to the c1 cluster:

$ kubectx kind-c1

Then, we have to apply the following YAML manifest:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
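
We need the same MetalLB installation on the other two clusters, c2 and c3. Here's a minimal sketch that avoids switching contexts by using kubectl's --context flag, and waits for MetalLB to become ready before we create the address pools:

# install MetalLB on the remaining clusters without changing the current context
$ kubectl --context kind-c2 apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
$ kubectl --context kind-c3 apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
# wait until the MetalLB pods are ready before applying the address pools
$ kubectl --context kind-c2 wait -n metallb-system pod --selector=app=metallb --for=condition=ready --timeout=90s
$ kubectl --context kind-c3 wait -n metallb-system pod --selector=app=metallb --for=condition=ready --timeout=90s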

However, installing MetalLB on all three clusters is not all. We also need to set up the address pool used by the load balancers. To do that, let's first check the range of IP addresses on the Docker network used by Kind. For me, it is 172.19.0.0/16 with the gateway 172.19.0.1.

$ docker network inspect -f '{{.IPAM.Config}}' kind

According to the results, we need to choose non-overlapping IP address ranges for all three Kind clusters. Then we have to create the IPAddressPool object, which contains the IP range. Here's the YAML manifest for the c1 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Here's the pool configuration for the c2 cluster. It is important that the address range does not conflict with the ranges in the other two Kind clusters.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.150-172.19.255.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, the configuration for the c3 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.100-172.19.255.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, we have to apply all three YAML manifests with the kubectl apply -f command before proceeding to the next section.
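
Here's a minimal sketch of that step, assuming you saved the three pool definitions under the hypothetical file names metallb-pool-c1.yaml, metallb-pool-c2.yaml and metallb-pool-c3.yaml:

$ kubectl --context kind-c1 apply -f metallb-pool-c1.yaml
$ kubectl --context kind-c2 apply -f metallb-pool-c2.yaml
$ kubectl --context kind-c3 apply -f metallb-pool-c3.yaml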

Install Skupper on Kubernetes

We can install and manage Skupper on Kubernetes in two different ways: with the CLI or through YAML manifests. Most of the examples in the Skupper documentation use the CLI, so I guess it is the preferred approach. Consequently, before we start with Kubernetes, we need to install the CLI. You can find the installation instructions in the Skupper docs here. Once you install it, just verify that it works with the following command:

$ skupper version

After that, we can proceed with the Kubernetes clusters. We will create the same namespace, interconnect, inside all three clusters. To simplify our upcoming exercise, we can also set a default namespace for each context (alternatively, you can do it with the kubectl config set-context --current --namespace interconnect command).

$ kubectl create ns interconnect
$ kubens interconnect
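
If you prefer to prepare all three clusters in one go, here's a minimal sketch using plain kubectl (no kubectx/kubens needed):

$ for ctx in kind-c1 kind-c2 kind-c3; do
    # create the namespace in each cluster
    kubectl --context "$ctx" create namespace interconnect
    # make it the default namespace of that context
    kubectl config set-context "$ctx" --namespace interconnect
  done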

Then, let’s switch to the kind-c1 cluster. We will stay in this context until the end of our exercise 🙂

$ kubectx kind-c1

Finally, we will install Skupper on our Kubernetes clusters. In order to do that, we have to execute the skupper init command. Fortunately, it allows us to set the target Kubernetes context with the -c parameter. In the kind-c1 cluster, we will also enable the Skupper UI dashboard (the --enable-console parameter, together with --enable-flow-collector, which the console relies on). With the Skupper console, we can e.g. visualize the traffic volume for all targets in the Skupper network.

$ skupper init --enable-console --enable-flow-collector
$ skupper init -c kind-c2
$ skupper init -c kind-c3

Let’s verify the status of the Skupper installation:

$ skupper status
$ skupper status -c kind-c2
$ skupper status -c kind-c3

Here’s the status for Skupper running in the kind-c1 cluster:

[Image: Skupper status output for the kind-c1 cluster]

We can also display a list of running Skupper pods in the interconnect namespace:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
skupper-prometheus-867f57b89-dc4lq            1/1     Running   0          3m36s
skupper-router-55bbb99b87-k4qn5               2/2     Running   0          3m40s
skupper-service-controller-6bf57595dd-45hvw   2/2     Running   0          3m37s

Now, our goal is to connect both the c2 and c3 Kind clusters with the c1 cluster. In Skupper nomenclature, we have to create a link between the namespaces in the source and target clusters. Before we create a link, we need to generate a secret token that grants permission to create it. The token also carries the link details. We generate two tokens on the target cluster, each stored as a YAML file: the first for the kind-c2 cluster (skupper-c2-token.yaml) and the second for the kind-c3 cluster (skupper-c3-token.yaml).

$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml

We will consider several scenarios where we create a link using different parameters. Before that, let’s deploy our sample app on the kind-c2 and kind-c3 clusters.

Running the sample app on Kubernetes with Skaffold

After cloning the sample app repository, go to its main directory. You can easily build and deploy the app to both kind-c2 and kind-c3 with the following commands (since skaffold dev keeps running in the foreground, use a separate terminal for each cluster):

$ skaffold dev --kube-context=kind-c2
$ skaffold dev --kube-context=kind-c3

After deploying the app, Skaffold automatically streams the application logs. This will be helpful in the next steps of our exercise.

Our app is deployed under the sample-spring-kotlin-microservice name.
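
You can quickly verify that the Deployment is present in both clusters (a sketch, assuming the interconnect namespace we set as the default earlier):

$ kubectl get deploy sample-spring-kotlin-microservice \
  --context kind-c2 -n interconnect
$ kubectl get deploy sample-spring-kotlin-microservice \
  --context kind-c3 -n interconnect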

Load balancing with Skupper – scenarios

Scenario 1: the same number of pods and link cost

Let's start with the simplest scenario. We have a single pod of our app running on each of the kind-c2 and kind-c3 clusters. In Skupper, we can also assign a cost to each link to influence the traffic flow. By default, the cost of a new link is set to 1. In a service network, the routing algorithm attempts to use the path with the lowest total cost from the client to the target server. For now, we will leave the default value. Here's a visualization of the first scenario.

Let’s create links to the c1 Kind cluster using the previously generated tokens.

$ skupper link create skupper-c2-token.yaml -c kind-c2
$ skupper link create skupper-c3-token.yaml -c kind-c3

If everything goes fine, you should see a message confirming that the link was created.

We can also verify the status of links by executing the following commands:

$ skupper link status -c kind-c2
$ skupper link status -c kind-c3

It means that the c2 and c3 Kind clusters are now part of the same Skupper network as the c1 cluster. The next step is to expose the app running in both the c2 and c3 clusters to the c1 cluster. Skupper works at Layer 7, and by default it doesn't connect apps unless we explicitly enable that feature for a particular app. In order to expose our apps to the c1 cluster, we need to run the following commands against both the c2 and c3 clusters.

$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c2
$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c3

Let's take a look at what happened in the target (kind-c1) cluster. If we list the Services there, we will see that Skupper created the sample-spring-kotlin-microservice Kubernetes Service, which forwards traffic to the skupper-router pod. The Skupper router is responsible for load-balancing requests across the pods that are part of the Skupper network.
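
Here's a quick way to check it against the kind-c1 context (a sketch, assuming the interconnect namespace):

$ kubectl get svc sample-spring-kotlin-microservice -n interconnect
$ kubectl get endpoints sample-spring-kotlin-microservice -n interconnect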

To simplify our exercise, we will enable port forwarding for that Service.

$ kubectl port-forward svc/sample-spring-kotlin-microservice 8080:8080

Thanks to that, we don't have to configure a Kubernetes Ingress to call the service. Now, we can send some test requests over localhost, e.g. with siege.

$ siege -r 200 -c 5 http://localhost:8080/persons/1

We can easily verify that the traffic reaches the pods running on kind-c2 and kind-c3 by looking at their logs (see the commands sketched below). Alternatively, we can go to the Skupper console and see the traffic visualization:

[Image: Skupper console traffic diagram for Scenario 1]
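
Here's a sketch of how to follow the application logs on both clusters (assuming the Deployment name from the repository and the interconnect namespace):

$ kubectl logs deployment/sample-spring-kotlin-microservice \
  --context kind-c2 -n interconnect -f
$ kubectl logs deployment/sample-spring-kotlin-microservice \
  --context kind-c3 -n interconnect -f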

Scenario 2: different number of pods and same link cost

In the next scenario, we won't change anything in the Skupper network configuration. We will just run a second pod of the app in the kind-c3 cluster. So now, there is a single pod running in the kind-c2 cluster and two pods running in the kind-c3 cluster. Here's our architecture.
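
Scaling the app up in the kind-c3 cluster is a single command (a sketch, assuming the Deployment name and the interconnect namespace used so far):

$ kubectl scale deployment/sample-spring-kotlin-microservice \
  --replicas=2 --context kind-c3 -n interconnect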

Once again, we can send some requests to the previously tested Kubernetes Service with the siege command:

$ siege -r 200 -c 5 http://localhost:8080/persons/2

Let’s take a look at traffic visualization in the Skupper dashboard. We can switch between all available pods. Here’s the diagram for the pod running in the kind-c2 cluster.

kubernetes-skupper-diagram

Here's the same diagram for a pod running in the kind-c3 cluster. As you can see, it receives only ~50% of the traffic received by the pod in the kind-c2 cluster (or even less, depending on which pod we visualize). That's because there are two pods running in the kind-c3 cluster, while Skupper still balances requests equally across the clusters.

Scenario 3: only one pod and different link costs

In the current scenario, there is a single pod of the app running on the c2 Kind cluster. At the same time, there are no pods on the c3 cluster (the Deployment exists, but it has been scaled down to zero instances; see the command sketched below). Here's the visualization of our scenario.

[Image: architecture of Scenario 3]
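
Scaling the Deployment in the kind-c3 cluster down to zero can be done like this (a sketch, same assumptions as before):

$ kubectl scale deployment/sample-spring-kotlin-microservice \
  --replicas=0 --context kind-c3 -n interconnect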

The important thing here is that the c3 cluster is preferred by Skupper, since the link to it has a lower cost (2) than the link to the c2 cluster (4). So now, we need to remove the previous links and then create new ones with the following commands:

$ skupper link create skupper-c2-token.yaml --cost 4 -c kind-c2
$ skupper link create skupper-c3-token.yaml --cost 2 -c kind-c3

In order to create a Skupper link once again, you first need to delete the previous one with the skupper link delete link1 command. Then you have to generate new tokens with the skupper token create command, as we did before.
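
Here's a sketch of that sequence, assuming the default link name link1 on both clusters:

# delete the existing links on the c2 and c3 clusters
$ skupper link delete link1 -c kind-c2
$ skupper link delete link1 -c kind-c3
# generate fresh tokens on the kind-c1 cluster (the current context)
$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml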

Let’s take a look at the Skupper network status:

[Image: Skupper network status]

Let's send some test requests to the exposed service. It works without any errors. Since there is only a single running pod, all of the traffic goes there.

Scenario 4: more pods in one cluster and different link costs

Finally, the last scenario in our exercise. We will use the same Skupper configuration as in Scenario 3. However, this time we will run two pods in the kind-c3 cluster.

[Image: architecture of Scenario 4]

We can switch once again to the Skupper dashboard. Now, as you can see, all the pods receive a very similar amount of traffic. Here's the diagram for the pod running on the kind-c2 cluster.

[Image: Skupper console showing an equal traffic split]

Here's a similar diagram for a pod running on the kind-c3 cluster. After setting the link costs according to the number of pods running on each cluster, I was able to split the traffic equally between all the pods across both clusters. It works. However, it is not a perfect way of load balancing. I would expect at least an option to enable round-robin balancing between all the pods working in the same Skupper network. The solution presented in this scenario works as expected only as long as we don't enable auto-scaling for the app.

Final Thoughts

Skupper introduces an interesting approach to Kubernetes multicluster connectivity based fully on Layer 7. You can compare it to other solutions based on different layers, like Submariner or Cilium Cluster Mesh. I described both of them in my previous articles. If you want to read more about Submariner, visit the following post. If you are interested in Cilium, read that article.
