Kubernetes Multicluster with Kind and Cilium
In this article, you will learn how to configure a Kubernetes multicluster environment locally with Kind and Cilium. If you are looking for other articles about local Kubernetes multicluster setups, you should also read Kubernetes Multicluster with Kind and Submariner and Multicluster Traffic Mirroring with Istio and Kind.
Cilium can act as a CNI plugin on your Kubernetes cluster. It uses a Linux kernel technology called eBPF, which enables the dynamic insertion of security, visibility, and control logic within the Linux kernel. It provides distributed load balancing for pod-to-pod traffic and an identity-based implementation of the NetworkPolicy resource. However, in this article we will focus on its Cluster Mesh feature, which allows setting up direct networking across multiple Kubernetes clusters.
Prerequisites
Before starting this exercise, you need to install some tools on your local machine. Of course, you need kubectl to interact with your Kubernetes clusters. Besides that, you will also have to install:
- Kind CLI – in order to run multiple Kubernetes clusters locally. For more details refer here
- Cilium CLI – in order to manage and inspect the state of a Cilium installation. For more details you may refer here
- Skaffold CLI (optional) – if you would like to build and run the applications directly from the code. Otherwise, you may just use my images published on Docker Hub. If you decide to build directly from the source code, you also need JDK and Maven
- Helm CLI – we will use Helm to install Cilium on Kubernetes. Alternatively, we could use the Cilium CLI for that, but with the Helm chart we can easily enable some additional required Cilium features. One possible way to install all of these tools is shown right after this list.
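For example, if you use Homebrew on macOS (or Linux), you could install everything with the commands below. This is just an illustrative sketch; any installation method described in the official documentation of each project will work just as well:

$ brew install kubectl kind cilium-cli helm
# optional, only if you want to build the applications from source
$ brew install skaffold openjdk maven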
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository and then follow my instructions.
You can also use the application images instead of building them directly from the code. The image of the callme-service application is available here: https://hub.docker.com/r/piomin/callme-service, while the image of the caller-service application is available here: https://hub.docker.com/r/piomin/caller-service.
Run Kubernetes clusters locally with Kind
When creating a new Kind cluster, we first need to disable the default CNI plugin, which is based on kindnetd. Of course, we will use Cilium instead. Moreover, the pod CIDR ranges in both clusters must be non-conflicting, so that pods get unique IP addresses. That's why we also set the podSubnet and serviceSubnet fields in the Kind cluster configuration manifest. Here's the configuration file for our first cluster:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.1.0.0/16"
The name of that cluster is c1, so the name of the Kubernetes context is kind-c1. In order to create a new Kind cluster using the YAML manifest shown above, we should run the following command:
$ kind create cluster --name c1 --config kind-c1-config.yaml
Here’s a configuration file for our second cluster. It has different pod and service CIDRs than the first cluster:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.2.0.0/16"
  serviceSubnet: "10.3.0.0/16"
As before, we need to run the kind create cluster command, this time with the name c2 and the YAML manifest for the second cluster:
$ kind create cluster --name c2 --config kind-c2-config.yaml
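At this point, both Kind clusters should be up and running. Just to be sure, we can list them and check that the corresponding kubectl contexts exist:

$ kind get clusters
$ kubectl config get-contexts

The first command should print c1 and c2, while the second should include the kind-c1 and kind-c2 contexts.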
Install Cilium CNI on Kubernetes
Once we have successfully created two local Kubernetes clusters with Kind, we may proceed to the Cilium installation. Firstly, let's switch to the context of the kind-c1 cluster:
$ kubectl config use-context kind-c1
We will install Cilium using the Helm chart. To do that, we should add a new Helm repository.
$ helm repo add cilium https://helm.cilium.io/
For the Cluster Mesh option, we need to enable some Cilium features that are disabled by default. It is also important to set the cluster.name and cluster.id parameters:
$ helm install cilium cilium/cilium --version 1.10.5 \
--namespace kube-system \
--set nodeinit.enabled=true \
--set kubeProxyReplacement=partial \
--set hostServices.enabled=false \
--set externalIPs.enabled=true \
--set nodePort.enabled=true \
--set hostPort.enabled=true \
--set cluster.name=c1 \
--set cluster.id=1
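Before installing Cilium on the second cluster, we can optionally check that the Cilium agent pods are starting on all nodes of c1 (k8s-app=cilium is the label set on the agent DaemonSet by the Cilium Helm chart):

$ kubectl get pods -n kube-system -l k8s-app=cilium -o wide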
Now, let's switch to the context of the kind-c2 cluster:
$ kubectl config use-context kind-c2
We need to set different values for the cluster.name and cluster.id parameters in the Helm installation command:
$ helm install cilium cilium/cilium --version 1.10.5 \
--namespace kube-system \
--set nodeinit.enabled=true \
--set kubeProxyReplacement=partial \
--set hostServices.enabled=false \
--set externalIPs.enabled=true \
--set nodePort.enabled=true \
--set hostPort.enabled=true \
--set cluster.name=c2 \
--set cluster.id=2
After installing Cilium, you can easily verify the installation by running the cilium status command on both clusters. Just to clarify: Cilium Cluster Mesh is not enabled yet.
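For example, the commands below should eventually report the Cilium agents and operator as OK on each cluster (the --wait flag tells the CLI to wait until the status stabilizes):

$ cilium status --wait --context kind-c1
$ cilium status --wait --context kind-c2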
Install MetalLB on Kind
When deploying Cluster Mesh, Cilium attempts to auto-detect the best service type for exposing the Cluster Mesh control plane to other clusters. The default and recommended option is a LoadBalancer IP (NodePort and ClusterIP are also available). That's why we need to enable external IP support on Kind, which is not provided by default (on macOS and Windows). Fortunately, we may install MetalLB on our clusters. MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols. The steps below have to be performed on both clusters. Firstly, let's create a namespace for the MetalLB components:
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml
Then we have to create a secret.
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Let's install the MetalLB components in the metallb-system namespace:
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml
If everything works fine, you should have the following pods in the metallb-system namespace:
$ kubectl get pod -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-6cc57c4567-dk8fw 1/1 Running 1 12h
speaker-26g67 1/1 Running 2 12h
speaker-2dhzf 1/1 Running 1 12h
speaker-4fn7t 1/1 Running 2 12h
speaker-djbtq 1/1 Running 1 12h
Now, we need to set up address pools for the LoadBalancer services. We should first check the address range of the kind network in Docker:
$ docker network inspect -f '{{.IPAM.Config}}' kind
For me, the address pool of the kind network is 172.20.0.0/16. Let's say I'll configure a pool of around 50 IPs starting from 172.20.255.200. Then we should create a manifest with the external IP address pool:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250
The pool for the second cluster must be different to avoid address conflicts. I'll also configure 50 IPs, this time starting from 172.20.255.150:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199
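These two ConfigMaps still have to be applied to the right clusters. Assuming we saved them as metallb-config-c1.yaml and metallb-config-c2.yaml (the file names are just an example), we can apply them like this:

$ kubectl apply -f metallb-config-c1.yaml --context kind-c1
$ kubectl apply -f metallb-config-c2.yaml --context kind-c2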
After that, we are ready to enable Cilium Cluster Mesh.
Enable Cilium Multicluster on Kubernetes
With the command shown below, we enable Cilium Cluster Mesh on the first Kubernetes cluster. As you can see, we choose the LoadBalancer service type to connect both clusters. Compared to the Cilium documentation, I also had to set the --create-ca option, since no CA is generated when installing Cilium with Helm:
$ cilium clustermesh enable --context kind-c1 \
--service-type LoadBalancer \
--create-ca
Then, we may verify it works successfully:
$ cilium clustermesh status --context kind-c1 --wait
Now, let’s do the same thing for the second cluster:
$ cilium clustermesh enable --context kind-c2 \
--service-type LoadBalancer \
--create-ca
Then, we can verify the status of a cluster mesh on the second cluster:
$ cilium clustermesh status --context kind-c2 --wait
Finally, we can connect both clusters together. This step needs to be done in only one direction; according to the Cilium documentation, the connection will automatically be established in both directions:
$ cilium clustermesh connect --context kind-c1 \
--destination-context kind-c2
If everything goes fine, the command should confirm that both clusters have been connected.
After that, let’s verify the status of the Cilium cluster mesh once again:
$ cilium clustermesh status --context kind-c1 --wait
If everything goes fine, the status should now show an established connection to the second cluster.
You can also verify the Kubernetes Service that exposes the Cluster Mesh control plane:
$ kubectl get svc -A | grep clustermesh
kube-system clustermesh-apiserver LoadBalancer 10.1.150.156 172.20.255.200 2379:32323/TCP 13h
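The same Service should also exist on the second cluster, with its own external IP taken from the MetalLB pool configured there:

$ kubectl get svc clustermesh-apiserver -n kube-system --context kind-c2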
You can validate the connectivity by running the connectivity test in multi-cluster mode:
$ cilium connectivity test --context kind-c1 --multi-cluster kind-c2
To be honest, all these tests failed on my Kind clusters 🙂 I was quite concerned, and I'm not entirely sure how those tests work internally. However, that didn't stop me from deploying my applications on both Kubernetes clusters to test Cilium multicluster myself.
Testing Cilium Kubernetes Multicluster with Java apps
Establishing load balancing between clusters is achieved by defining a Kubernetes Service with an identical name and namespace in each cluster and adding the annotation io.cilium/global-service: "true" to declare it global. Cilium will then automatically load-balance traffic across the pods in both clusters. Here's the Kubernetes Service definition for the callme-service application. As you can see, it is not exposed outside the cluster, since it has the ClusterIP type.
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
  annotations:
    io.cilium/global-service: "true"
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service
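Once both clusters run a Service with this name and annotation, you can peek at what the Cilium agent sees. One way (a sketch based on the Cilium agent's built-in CLI) is to list services from inside one of the agent pods; after the mesh is connected and the application is deployed, the callme-service entry should include backends from the remote cluster as well:

$ kubectl exec -n kube-system ds/cilium -- cilium service list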
Here's the deployment manifest of the callme-service application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v1"
Our test case is very simple. I'm going to deploy the caller-service application on the c1 cluster. The caller-service calls a REST endpoint exposed by the callme-service, and the callme-service application is running on the c2 cluster. I'm not changing anything in the implementation compared to a version running on a single cluster. It means that the caller-service calls the callme-service endpoint using the Kubernetes Service name and HTTP port (http://callme-service:8080).
Ok, so now let's just deploy the callme-service on the c2 cluster using Skaffold. Before running the command, go to the callme-service directory. Of course, you can also deploy a ready image with kubectl, as shown right after the Skaffold command below. The deployment manifests are available in the callme-service/k8s directory.
$ skaffold run --kube-context kind-c2
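If you prefer not to use Skaffold, the same manifests can be applied directly with kubectl (assuming the callme-service/k8s directory mentioned above contains all of them):

$ kubectl apply -f callme-service/k8s --context kind-c2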
The caller-service application exposes a single HTTP endpoint that prints information about its version. It also calls a similar endpoint exposed by the callme-service. As I mentioned before, it uses the Kubernetes Service name in that communication.
@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    RestTemplate restTemplate;

    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }

}
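Since RestTemplate is not auto-configured as a bean by Spring Boot, it has to be declared somewhere in the application. A minimal sketch of such a declaration (the class name is illustrative) could look like this:

@SpringBootApplication
public class CallerApplication {

    public static void main(String[] args) {
        SpringApplication.run(CallerApplication.class, args);
    }

    // the RestTemplate injected into CallerController
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

}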
Here's the Deployment manifest for the caller-service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v1"
Also, we need to create a Kubernetes Service for the caller-service:
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: caller-service
Don't forget to also apply the callme-service Service on the c1 cluster, even though there are no running instances of that application there. Then I can deploy the caller-service with Skaffold, which applies all the required manifests automatically. Before running the following command, go to the caller-service directory.
$ skaffold dev --port-forward --kube-context kind-c1
Thanks to the port-forward option, I can simply test the caller-service on localhost. It is worth mentioning that Skaffold supports Kind out of the box, so you don't have to take any additional steps to deploy applications there. Finally, let's test the communication by calling the HTTP endpoint exposed by the caller-service:
$ curl http://localhost:8080/caller/ping
Final Thoughts
It was not really hard to configure a Kubernetes multicluster with Cilium and Kind. My only problem was with the connectivity test tool, which didn't work for my cluster mesh. However, my simple test with two applications running on different Kubernetes clusters and communicating over HTTP was successful.