Multicluster Traffic Mirroring with Istio and Kind
In this article, you will learn how to create an Istio mesh with traffic mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters and then mirror the traffic between those clusters. When might such a scenario be useful?
Let’s assume we have two Kubernetes clusters. The first of them is a production cluster, while the second is a test cluster. There is heavy incoming traffic to the production cluster, but no traffic to the test cluster. What can we do in such a situation? We can simply mirror a portion of the production traffic to the test cluster. With Istio, you can also mirror internal traffic, e.g. between microservices.
To simulate the scenario described above, we will create two Kubernetes clusters locally with Kind. Then, we will install the Istio mesh in multi-primary mode on different networks. The Kubernetes API server and the Istio gateway need to be accessible by pods running in the other cluster. We have two applications. The caller-service application is running on the c1 cluster. It calls the callme-service application. The v1 version of callme-service is deployed on the c1 cluster, while the v2 version is deployed on the c2 cluster. We will mirror 50% of the traffic coming from the v1 version of the application to the v2 version running on the other cluster. The following picture illustrates our architecture.
Source Code
If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository and then follow my instructions.
Both applications are configured to be deployed with Skaffold, so you just need to download the Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your machine.
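For example, a deployment of the callme-service application to the first cluster could look like the command below. This is only a sketch: it assumes a skaffold.yaml is present in the application directory, as is typical for Skaffold-based projects.
$ cd callme-service
$ skaffold run --kube-context kind-c1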
Create Kubernetes clusters with Kind
Firstly, let’s create two Kubernetes clusters using Kind. We don’t have to override any default settings, so we can just use the following commands to create the clusters.
$ kind create cluster --name c1
$ kind create cluster --name c2
Kind automatically creates a Kubernetes context and adds it to the config file. Just to verify, let’s display a list of running clusters.
$ kind get clusters
c1
c2
Also, we can display a list of contexts created by Kind.
$ kubectx | grep kind
kind-c1
kind-c2
Install MetalLB on Kubernetes clusters
To establish a connection between multiple clusters locally, we need to expose some services as LoadBalancer. That’s why we need to install MetalLB, a load-balancer implementation for bare-metal Kubernetes clusters. Firstly, we have to create the metallb-system namespace. These operations should be performed on both clusters.
$ kubectl apply -f \
https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml
Then, we are going to create the memberlist secret required by MetalLB.
$ kubectl create secret generic -n metallb-system memberlist \
--from-literal=secretkey="$(openssl rand -base64 128)"
Finally, let’s install MetalLB.
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml
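As a simple sanity check (not part of the original instructions), you can verify that the MetalLB controller and speaker pods are running in both clusters.
$ kubectl get pods -n metallb-system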
In order to complete the configuration, we need to provide a range of IP addresses that MetalLB controls. We want this range to be on the Docker kind network.
$ docker network inspect -f '{{.IPAM.Config}}' kind
For me it is CIDR 172.20.0.0/16. Based on it, we can configure the MetalLB IP pool for each cluster. For the first cluster c1, I’m setting addresses starting from 172.20.255.200 and ending with 172.20.255.250.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250
We need to apply the configuration to the first cluster.
$ kubectl apply -f k8s/metallb-c1.yaml --context kind-c1
For the second cluster c2, I’m setting addresses starting from 172.20.255.150 and ending with 172.20.255.199.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199
Finally, we can apply the configuration to the second cluster.
$ kubectl apply -f k8s/metallb-c2.yaml --context kind-c2
Install Istio on Kubernetes in multicluster mode
A multicluster service mesh deployment requires establishing trust between all clusters in the mesh. In order to do that, we should configure the Istio certificate authority (CA) with a root certificate, signing certificate, and key. We can easily do it using Istio tools. First, go to the Istio installation directory on your machine. After that, we may use the Makefile.selfsigned.mk script available inside the tools/certs directory.
$ cd $ISTIO_HOME/tools/certs/
The following command generates the root certificate and key.
$ make -f Makefile.selfsigned.mk root-ca
The following commands generate an intermediate certificate and key for the Istio CA of each cluster. The required files are placed in a directory named after the cluster.
$ make -f Makefile.selfsigned.mk kind-c1-cacerts
$ make -f Makefile.selfsigned.mk kind-c2-cacerts
Then we may create a Kubernetes Secret based on the generated certificates. The same operation should be performed for the second cluster kind-c2.
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
--from-file=kind-c1/ca-cert.pem \
--from-file=kind-c1/ca-key.pem \
--from-file=kind-c1/root-cert.pem \
--from-file=kind-c1/cert-chain.pem
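For completeness, the analogous commands for the kind-c2 cluster could look like this, assuming you target the kind-c2 context and use the certificates generated in the kind-c2 directory:
$ kubectl create namespace istio-system --context kind-c2
$ kubectl create secret generic cacerts -n istio-system --context kind-c2 \
--from-file=kind-c2/ca-cert.pem \
--from-file=kind-c2/ca-key.pem \
--from-file=kind-c2/root-cert.pem \
--from-file=kind-c2/cert-chain.pem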
We are going to install Istio using the operator. It is important to set the same meshID for both clusters, but different network names. We also need to create an Istio Gateway for communication between the two clusters inside a single mesh. It should be labeled with topology.istio.io/network=network1. The Gateway definition also contains two environment variables: ISTIO_META_ROUTER_MODE and ISTIO_META_REQUESTED_NETWORK_VIEW. The first variable sets the sni-dnat router mode, which adds the clusters required for AUTO_PASSTHROUGH mode. The second one limits the gateway to exposing endpoints in the given network.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c1
      network: network1
  components:
    ingressGateways:
    - name: istio-eastwestgateway
      label:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: network1
      enabled: true
      k8s:
        env:
        - name: ISTIO_META_ROUTER_MODE
          value: "sni-dnat"
        - name: ISTIO_META_REQUESTED_NETWORK_VIEW
          value: network1
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: tls
            port: 15443
            targetPort: 15443
          - name: tls-istiod
            port: 15012
            targetPort: 15012
          - name: tls-webhook
            port: 15017
            targetPort: 15017
Before installing Istio, we should label the istio-system namespace with topology.istio.io/network=network1. The Istio installation manifest is available in the repository as the k8s/istio-c1.yaml file.
$ kubectl --context kind-c1 label namespace istio-system \
topology.istio.io/network=network1
$ istioctl install -f k8s/istio-c1.yaml \
--context kind-c1
There is a similar IstioOperator definition for the second cluster. The only differences are the name of the network, which is now network2, and the name of the cluster.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c2
      network: network2
  components:
    ingressGateways:
    - name: istio-eastwestgateway
      label:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: network2
      enabled: true
      k8s:
        env:
        - name: ISTIO_META_ROUTER_MODE
          value: "sni-dnat"
        - name: ISTIO_META_REQUESTED_NETWORK_VIEW
          value: network2
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: tls
            port: 15443
            targetPort: 15443
          - name: tls-istiod
            port: 15012
            targetPort: 15012
          - name: tls-webhook
            port: 15017
            targetPort: 15017
Just as for the first cluster, let’s label the istio-system namespace with topology.istio.io/network and install Istio using the operator manifest.
$ kubectl --context kind-c2 label namespace istio-system \
topology.istio.io/network=network2
$ istioctl install -f k8s/istio-c2.yaml \
--context kind-c2
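At this point, it is worth checking that istiod and both gateways came up correctly on each cluster, for example with a simple listing of the istio-system pods:
$ kubectl get pods -n istio-system --context kind-c1
$ kubectl get pods -n istio-system --context kind-c2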
Configure multicluster connectivity
Since the clusters are on separate networks, we need to expose all local services on the gateway in both clusters. Services behind that gateway can be accessed only by services with a trusted TLS certificate and workload ID. The definition of the cross-network gateway is exactly the same for both clusters. You can find that manifest in the repository as k8s/istio-cross-gateway.yaml.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
Let’s apply the Gateway object to both clusters.
$ kubectl apply -f k8s/istio-cross-gateway.yaml \
--context kind-c1
$ kubectl apply -f k8s/istio-cross-gateway.yaml \
--context kind-c2
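Since the east-west gateways are exposed as LoadBalancer services, they should now receive external IPs from the MetalLB pools configured earlier. You can verify that with the commands below:
$ kubectl get svc istio-eastwestgateway -n istio-system --context kind-c1
$ kubectl get svc istio-eastwestgateway -n istio-system --context kind-c2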
In the last step of this scenario, we enable endpoint discovery between the Kubernetes clusters. To do that, we have to install a remote secret in the kind-c2 cluster that provides access to the kind-c1 API server, and vice versa. Fortunately, Istio provides an experimental feature for generating remote secrets.
$ istioctl x create-remote-secret --context=kind-c1 --name=kind-c1
$ istioctl x create-remote-secret --context=kind-c2 --name=kind-c2
Before applying the generated secrets, we need to change the address of the cluster. Instead of localhost and a dynamically generated port, we have to use c1-control-plane:6443 for the first cluster and, respectively, c2-control-plane:6443 for the second cluster. The remote secrets generated for my clusters are committed to the project repository as k8s/secret1.yaml and k8s/secret2.yaml. You can compare them with the secrets generated for your clusters. Replace them with your secrets, but remember to change the address of your clusters.
$ kubectl apply -f k8s/secret1.yaml --context kind-c2
$ kubectl apply -f k8s/secret2.yaml --context kind-c1
Configure Mirroring with Istio
We are going to deploy our sample applications in the default namespace. Therefore, automatic sidecar injection should be enabled for that namespace.
$ kubectl label --context kind-c1 namespace default \
istio-injection=enabled
$ kubectl label --context kind-c2 namespace default \
istio-injection=enabled
Before configuring Istio rules, let’s deploy the v1 version of the callme-service application on the kind-c1 cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v1"
Then, we will deploy the v2 version of the callme-service application on the kind-c2 cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v2"
Of course, we should also create a Kubernetes Service on both clusters.
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service
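If you are not deploying the applications with Skaffold, you could apply the manifests above directly; the file names below are hypothetical, so adjust them to your repository layout. The v1 deployment goes to kind-c1, the v2 deployment to kind-c2, and the Service to both clusters.
$ kubectl apply -f k8s/callme-service-v1.yaml --context kind-c1
$ kubectl apply -f k8s/callme-service-v2.yaml --context kind-c2
$ kubectl apply -f k8s/callme-service-svc.yaml --context kind-c1
$ kubectl apply -f k8s/callme-service-svc.yaml --context kind-c2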
The Istio DestinationRule defines two subsets for callme-service based on the version label.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Finally, we may configure traffic mirroring with Istio. 50% of the traffic coming to the callme-service deployed on the kind-c1 cluster is mirrored to the callme-service deployed on the kind-c2 cluster.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
  - callme-service
  http:
  - route:
    - destination:
        host: callme-service
        subset: v1
      weight: 100
    mirror:
      host: callme-service
      subset: v2
    mirrorPercentage:
      value: 50.0
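We apply the DestinationRule and VirtualService to the kind-c1 cluster, where the mirrored traffic originates. The file names below are hypothetical; use the corresponding manifests from your repository.
$ kubectl apply -f k8s/callme-destination.yaml --context kind-c1
$ kubectl apply -f k8s/callme-routing.yaml --context kind-c1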
We will also deploy the caller-service application on the kind-c1 cluster. It calls the GET /callme/ping endpoint exposed by the callme-service application. Let’s display the list of running pods in the default namespace in the kind-c1 cluster.
$ kubectl get pod --context kind-c1
NAME READY STATUS RESTARTS AGE
caller-service-b9dbbd6c8-q6dpg 2/2 Running 0 1h
callme-service-v1-7b65795f48-w7zlq 2/2 Running 0 1h
Let’s verify the list of running pods in the default namespace in the kind-c2 cluster.
$ kubectl get pod --context kind-c2
NAME READY STATUS RESTARTS AGE
callme-service-v2-665b876579-rsfks 2/2 Running 0 1h
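Before testing, you can also check that the caller-service sidecar has discovered callme-service endpoints in both networks. The command below is a sketch that reuses the caller-service pod name from the listing above:
$ istioctl proxy-config endpoints caller-service-b9dbbd6c8-q6dpg \
--context kind-c1 | grep callme-service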
In order to test Istio mirroring across multiple Kubernetes clusters, we call the GET /caller/ping endpoint exposed by caller-service. As I mentioned before, it calls a similar endpoint exposed by the callme-service application with an HTTP client. The simplest way to test it is to enable port-forwarding. Thanks to that, the caller-service Service is available on the local port 8080.
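A possible port-forwarding invocation is shown below; it assumes the caller-service Service also exposes port 8080.
$ kubectl port-forward --context kind-c1 svc/caller-service 8080:8080
Now, let’s call that endpoint 20 times with siege.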
$ siege -r 20 -c 1 http://localhost:8080/caller/ping
After that, you can verify the logs of the callme-service-v1 and callme-service-v2 deployments.
$ kubectl logs pod/callme-service-v1-7b65795f48-w7zlq --context kind-c1
$ kubectl logs pod/callme-service-v2-665b876579-rsfks --context kind-c2
You should see the following log line 20 times for the kind-c1 cluster.
I'm callme-service v1
And respectively, you should see the following log line 10 times for the kind-c2 cluster, because we mirror 50% of the traffic from v1 to v2.
I'm callme-service v2
Final Thoughts
This article shows how to create an Istio multicluster mesh with traffic mirroring between different networks. If you would like to simulate a similar scenario within a single network, you may use a tool called Submariner. You can find more details about running Submariner on Kubernetes in the article Kubernetes Multicluster with Kind and Submariner.