Kubernetes Multicluster with Kind and Submariner
In this article, you will learn how to create multiple Kubernetes clusters locally and establish direct communication between them with Kind and Submariner. Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker containers. Each Kubernetes node is a separate Docker container, and all these containers run in the same Docker network called kind.
Our goal in this article is to establish direct communication between pods running in two different Kubernetes clusters created with Kind. This is not possible by default, since we should treat such clusters as two Kubernetes clusters running in different networks. This is where Submariner comes in. It is a tool originally created by Rancher that enables direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud.
Let's take a quick look at our architecture. We have two applications: caller-service and callme-service. There are also two Kubernetes clusters, c1 and c2, created using Kind. The caller-service application runs on the c1 cluster, while the callme-service application runs on the c2 cluster. The caller-service application communicates with the callme-service application directly, without using Kubernetes Ingress.
Architecture – Submariner on Kubernetes
Let me say a few words about Submariner. Since it is a relatively new tool, you may not have come across it yet. It runs a single, central broker, and then several members join this broker. A member is simply a Kubernetes cluster that is part of the Submariner cluster. All the members can communicate directly with each other. The Broker component facilitates the exchange of metadata between the Submariner gateways deployed in the participating Kubernetes clusters.
The architecture of our example system is shown below. We run the Submariner Broker on the c1 cluster. Then we run Submariner "agents" on both clusters. Service discovery is based on the Lighthouse project, which provides DNS discovery for Kubernetes clusters connected by Submariner. You may read more details about it here.
Source Code
If you would like to try it out yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository and then follow my instructions.
Both applications are configured to be deployed with Skaffold, so you just need to download the Skaffold CLI following the instructions available here. Of course, you also need Java and Maven available on your machine.
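For example, on Linux you can install the standalone Skaffold binary as follows (this mirrors the installation method from the Skaffold documentation; check it for macOS and Windows instructions).
$ curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
$ chmod +x skaffold
$ sudo mv skaffold /usr/local/bin/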
If you are interested in learning more about using Skaffold to build and deploy Java applications, you can read my article Local Java Development on Kubernetes.
Create Kubernetes clusters with Kind
Firstly, let's create two Kubernetes clusters using Kind. Each cluster consists of a control plane and a worker node. Since we are going to install Calico as the networking plugin on Kubernetes, we will disable the default CNI plugin in Kind. Finally, we need to configure CIDRs for pods and services. The IP pools must be unique across both clusters. Here's the Kind configuration manifest for the first cluster. It is available in the project repository under the path k8s/kind-cluster-c1.yaml.
kind: Cluster
name: c1
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
networking:
  podSubnet: 10.240.0.0/16
  serviceSubnet: 10.110.0.0/16
  disableDefaultCNI: true
Then, let’s create the first cluster using the configuration manifest visible above.
$ kind create cluster --config k8s/kind-cluster-c1.yaml
We have a similar configuration manifest for the second cluster. The only differences are the name of the cluster and the CIDRs for Kubernetes pods and services. It is available in the project repository under the path k8s/kind-cluster-c2.yaml.
kind: Cluster
name: c2
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
networking:
  podSubnet: 10.241.0.0/16
  serviceSubnet: 10.111.0.0/16
  disableDefaultCNI: true
After that, let’s create the second cluster using the configuration manifest visible above.
$ kind create cluster --config k8s/kind-cluster-c2.yaml
Once the clusters have been successfully created we can verify them using the following command.
$ kind get clusters
c1
c2
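Each node of those clusters is just a Docker container attached to the kind network, which you can verify with plain Docker commands (a quick check, assuming the default Kind container and network names).
$ docker ps --format '{{.Names}}'
$ docker network inspect kind --format '{{range .Containers}}{{.Name}} {{end}}'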
Kind automatically creates two Kubernetes contexts for those clusters. We can switch between the kind-c1 and kind-c2 contexts, as shown below.
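If you prefer plain kubectl over additional tooling, you can list and switch between those contexts as follows.
$ kubectl config get-contexts -o name
$ kubectl config use-context kind-c1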
Install Calico on Kubernetes
We will use the Tigera operator to install Calico as the default CNI on Kubernetes. There are other installation methods, but the operator-based one is the simplest. Firstly, let's switch to the kind-c1 context.
$ kubectx kind-c1
I'm using the kubectx tool for switching between Kubernetes contexts and namespaces. You can download the latest version of this tool from the following site: https://github.com/ahmetb/kubectx/releases.
In the first step, we install the Tigera operator on the cluster.
$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
After that, we need to create the Installation CRD object responsible for installing Calico on Kubernetes. We can configure all the required parameters inside a single file. It is important to set the same CIDR as the pod CIDR defined in the Kind configuration file. Here's the manifest for the first cluster.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.240.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
The manifest is available in the repository as the k8s/tigera-c1.yaml file. Let's apply it.
$ kubectl apply -f k8s/tigera-c1.yaml
Then, we may switch to the kind-c2 context and create a similar manifest for the Calico installation.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.241.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
Finally, let's apply it to the second cluster using the k8s/tigera-c2.yaml file.
$ kubectl apply -f k8s/tigera-c2.yaml
We may verify the installation of Calico by listing the pods running in the calico-system namespace. Here's the result on my local Kubernetes cluster.
$ kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-696ffc7f48-86rfz   1/1     Running   0          75s
calico-node-nhkn5                          1/1     Running   0          76s
calico-node-qkkqk                          1/1     Running   0          76s
calico-typha-6d6c85c77b-ffmt5              1/1     Running   0          70s
calico-typha-6d6c85c77b-w8x6t              1/1     Running   0          76s
By default, Kind uses a simple networking implementation – Kindnetd. However, this CNI plugin is not tested with Submariner. Therefore, we need to change it to one of the already supported ones, like Calico.
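Before installing Submariner, it is also worth checking that the nodes of both clusters have reached the Ready state, since without a working CNI plugin they stay NotReady.
$ kubectl get nodes
$ kubectl get nodes --context kind-c2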
Install Submariner on Kubernetes
In order to install Submariner on our Kind clusters, we first need to download the CLI.
$ curl -Ls https://get.submariner.io | bash
$ export PATH=$PATH:~/.local/bin
The Submariner subctl CLI requires xz-utils. So, first, you need to install this package by executing the following command: apt update -y && apt install xz-utils -y.
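Once the subctl binary is on your PATH, a quick sanity check is to print its version.
$ subctl version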
After that, we can use the subctl binary to deploy the Submariner Broker. If you use Docker on Mac or Windows (like me), you need to perform these operations inside the container with the Kind control plane. So first, let's get inside the control plane container. Kind automatically sets the name of that container as a concatenation of the cluster name and the -control-plane suffix.
$ docker exec -it c1-control-plane /bin/bash
That container already has kubectl installed. The only thing we need to do is to add the context of the second Kubernetes cluster, kind-c2. I just copied it from my local Kube config file, which contains the right data added by Kind during cluster creation. You can check the location of the Kubernetes config inside the c1-control-plane container by displaying the KUBECONFIG environment variable.
$ echo $KUBECONFIG
/etc/kubernetes/admin.conf
If you are copying data from your local Kube config file, you just need to change the address of your Kubernetes cluster. Instead of the external IP and port, you have to set the internal IP and port of the Docker container, which is the address used for internal communication between both clusters.
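One way to check that internal address (assuming the default kind Docker network created by Kind) is to inspect the control plane container of the second cluster; inside that network, the API server typically listens on port 6443.
$ docker inspect c2-control-plane \
    --format '{{.NetworkSettings.Networks.kind.IPAddress}}'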
Now, we can deploy the Submariner Broker on the c1 cluster. After running the following command, Submariner installs an operator on Kubernetes and generates the broker-info.subm file. That file is then used to join members to the Submariner cluster.
$ subctl deploy-broker
Enable direct communication between Kubernetes clusters with Submariner
Let's clarify some things before proceeding. We have already created the Submariner Broker on the c1 cluster. To simplify the process, I'm using the same Kubernetes cluster as both a Submariner Broker and a Member. We also use the subctl CLI to add members to the Submariner cluster. One of the essential components that has to be installed is the Submariner Gateway Engine. It is deployed as a DaemonSet configured to run on nodes labelled with submariner.io/gateway=true. So, in the first step, we will set this label on the worker nodes of both the c1 and c2 clusters.
$ kubectl label node c1-worker submariner.io/gateway=true
$ kubectl label node c2-worker submariner.io/gateway=true --context kind-c2
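To make sure the label has been applied on both clusters, you can filter the nodes by it.
$ kubectl get nodes -l submariner.io/gateway=true
$ kubectl get nodes -l submariner.io/gateway=true --context kind-c2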
Just to remind you – we are still inside the c1-control-plane container. Now we can add the first member to our Submariner cluster. To do that, we again use the subctl CLI as shown below. With the join command, we need to pass the broker-info.subm file generated earlier by the deploy-broker command. We will also disable NAT traversal for IPsec.
$ subctl join broker-info.subm --natt=false --clusterid kind-c1
After that, we may add a second member to our cluster.
$ subctl join broker-info.subm --natt=false --clusterid kind-c2 --kubecontext kind-c2
The Submariner operator creates several deployments in the submariner-operator namespace. Let's display the list of pods running there.
$ kubectl get pod -n submariner-operator
NAME                                             READY   STATUS    RESTARTS   AGE
submariner-gateway-kd6zs                         1/1     Running   0          5m50s
submariner-lighthouse-agent-b798b8987-f6zvl      1/1     Running   0          5m48s
submariner-lighthouse-coredns-845c9cdf6f-8qhrj   1/1     Running   0          5m46s
submariner-lighthouse-coredns-845c9cdf6f-xmd6q   1/1     Running   0          5m46s
submariner-operator-586cb56578-qgwh6             1/1     Running   1          6m17s
submariner-routeagent-fcptn                      1/1     Running   0          5m49s
submariner-routeagent-pn54f                      1/1     Running   0          5m49s
We can also use some subctl commands. Let's display the list of Submariner gateways.
$ subctl show gateways
Showing information for cluster "kind-c2":
NODE        HA STATUS   SUMMARY
c2-worker   active      All connections (1) are established
Showing information for cluster "c1":
NODE        HA STATUS   SUMMARY
c1-worker   active      All connections (1) are established
Or a list of Submariner connections.
$ subctl show connections
Showing information for cluster "c1":
GATEWAY     CLUSTER   REMOTE IP    NAT   CABLE DRIVER   SUBNETS                        STATUS      RTT avg.
c2-worker   kind-c2   172.20.0.5   no    libreswan      10.111.0.0/16, 10.241.0.0/16   connected   384.957µs
Showing information for cluster "kind-c2":
GATEWAY     CLUSTER   REMOTE IP    NAT   CABLE DRIVER   SUBNETS                        STATUS      RTT avg.
c1-worker   kind-c1   172.20.0.2   no    libreswan      10.110.0.0/16, 10.240.0.0/16   connected   592.728µs
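You can also print the versions of the Submariner components deployed on the connected clusters.
$ subctl show versions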
Deploy applications on Kubernetes and expose them with Submariner
Since we have already installed Submariner on both clusters, we can deploy our sample applications. Let's begin with caller-service. Make sure you are in the kind-c1 context. Then go to the caller-service directory and deploy the application using Skaffold, as shown below.
$ cd caller-service
$ skaffold dev --port-forward
Then, you should switch to the kind-c2 context and deploy the callme-service application.
$ cd callme-service
$ skaffold run
In the next step, we need to expose our service to Submariner. To do that, you have to execute the following command with subctl.
$ subctl export service --namespace default callme-service
Submariner exposes services on the clusterset.local domain. So, our service is now available under the URL callme-service.default.svc.clusterset.local. Here's the part of the caller-service code responsible for communicating with callme-service through the Submariner DNS.
@GetMapping("/ping")
public String ping() {
LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
String response = restTemplate
.getForObject("http://callme-service.default.svc.clusterset.local:8080/callme/ping", String.class);
LOGGER.info("Calling: response={}", response);
return "I'm caller-service " + version + ". Calling... " + response;
}
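If you would like to verify the cross-cluster DNS name itself, you can also resolve it from any pod running on the kind-c1 cluster, for example with a temporary busybox pod (the image and this ad-hoc approach are just one option).
$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup callme-service.default.svc.clusterset.local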
In order to analyze what happened, let's display some CRD objects created by Submariner. Firstly, it created a ServiceExport object on the cluster with the exposed service. In our case, it is the kind-c2 cluster.
$ kubectl get ServiceExport
NAME             AGE
callme-service   15s
Once we have exported the service, it is automatically imported on the second cluster. We need to switch to the kind-c1 cluster and then display the ServiceImport object.
$ kubectl get ServiceImport -n submariner-operator
NAME                             TYPE           IP                  AGE
callme-service-default-kind-c2   ClusterSetIP   ["10.111.176.50"]   4m55s
The ServiceImport object stores the IP address of the callme-service Kubernetes Service.
$ kubectl get svc --context kind-c2
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
callme-service   ClusterIP   10.111.176.50   <none>        8080/TCP   31m
kubernetes       ClusterIP   10.111.0.1      <none>        443/TCP    74m
Finally, we may test the connection between the clusters by calling the following endpoint. The caller-service application calls the GET /callme/ping endpoint exposed by callme-service. Thanks to enabling the port-forward option in the Skaffold command, we may access caller-service locally on port 8080.
$ curl http://localhost:8080/caller/ping