OpenShift Multicluster with Advanced Cluster Management for Kubernetes and Submariner
This article will teach you how to connect multiple OpenShift clusters with Submariner and Advanced Cluster Management for Kubernetes. Submariner allows us to configure direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud. It operates at layer 3 (L3). It establishes a secure tunnel between two clusters and provides service discovery. I have already described how to install and manage it on Kubernetes, mostly with the subctl CLI, in the following article.
Today we will focus on the integration between Submariner and OpenShift through Advanced Cluster Management for Kubernetes (ACM). ACM is a tool dedicated to OpenShift. It allows us to control clusters and applications from a single console, with built-in security policies. You can find several articles about it on my blog. For example, the following one describes how to use ACM together with Argo CD in the GitOps approach.
Source Code
This time we won’t work much with source code. However, if you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.
Architecture
Our architecture consists of three OpenShift clusters: a single hub cluster and two managed clusters. The hub cluster is responsible for creating the new managed clusters and establishing a secure connection between them using Submariner. So, in the initial state, there is just a hub cluster with the Advanced Cluster Management for Kubernetes (ACM) operator installed on it. With ACM we will create two new OpenShift clusters on the target infrastructure (AWS) and install Submariner on them. Finally, we are going to deploy two sample Spring Boot apps. The callme-service app exposes a single GET /callme/ping endpoint and runs on the ocp2 cluster. We will expose it through Submariner to the ocp1 cluster. On the ocp1 cluster, there is the second app, caller-service, which invokes the endpoint exposed by the callme-service app. Here’s the diagram of our architecture.
Install Advanced Cluster Management on OpenShift
In the first step, we must install the Advanced Cluster Management for Kubernetes (ACM) operator on OpenShift. The default installation namespace is open-cluster-management. We won’t change it.
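If you prefer the CLI to the web console, the operator can also be installed by applying OLM manifests. Here’s a minimal sketch; the channel (release-2.9) is an assumption and should be adjusted to the ACM version available in your catalog:

apiVersion: v1
kind: Namespace
metadata:
  name: open-cluster-management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: open-cluster-management
  namespace: open-cluster-management
spec:
  targetNamespaces:
    - open-cluster-management
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  channel: release-2.9 # assumption: pick the channel matching your catalog
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace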
Once we install the operator, we need to initialize ACM by creating the MultiClusterHub object. Once again, we will use the open-cluster-management namespace for that. Here’s the object declaration. We don’t need to specify any more advanced settings.
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
We can do the same thing graphically in the OpenShift Dashboard. Just click the “Create MultiClusterHub” button and then accept the action on the next page. It will probably take some time to complete the installation since there are several pods to run.
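If you created the MultiClusterHub from the YAML manifest instead, you can watch the installation progress from the CLI. A minimal sketch, assuming the manifest above is saved as multiclusterhub.yaml:

$ oc apply -f multiclusterhub.yaml
$ oc get multiclusterhub -n open-cluster-management
$ oc get pods -n open-cluster-management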
Once the installation is completed, you will see a new menu item at the top of the dashboard allowing you to switch to the “All Clusters” view. Let’s do it. After that, we can proceed to the next step.
Create OpenShift Clusters with ACM
Advanced Cluster Management for Kubernetes allows us to import existing clusters or create new ones on the target infrastructure. In this exercise, you will see how to leverage a cloud provider account for that. Let’s just click the “Connect your cloud provider” tile on the welcome screen.
Provide Cloud Credentials
I’m using my already existing AWS account. ACM will ask us to provide the appropriate credentials for that account. In the first form, we should provide the name and namespace of our secret with credentials and a default base DNS domain.
Then, the ACM wizard will redirect us to the next steps. We have to provide the AWS access key ID and secret, the OpenShift pull secret, and also the SSH private/public keys. Of course, we can create the required Kubernetes Secret without the wizard, just by applying a similar YAML manifest:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws
  namespace: open-cluster-management
  labels:
    cluster.open-cluster-management.io/type: aws
    cluster.open-cluster-management.io/credentials: ""
stringData:
  aws_access_key_id: AKIAXBLSZLXZJWT3KFPM
  aws_secret_access_key: "********************"
  baseDomain: sandbox2746.opentlc.com
  pullSecret: "********************"
  ssh-privatekey: "********************"
  ssh-publickey: "********************"
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
  additionalTrustBundle: ""
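Assuming we saved that manifest as aws-credentials.yaml, applying it on the hub cluster makes the credential visible in the ACM console under Credentials:

$ oc apply -f aws-credentials.yaml
$ oc get secret aws -n open-cluster-management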
Provision the Cluster
After that, we can prepare the ACM cluster set. The cluster set feature allows us to group OpenShift clusters. It is a required prerequisite for the Submariner installation. Here’s the ManagedClusterSet object. The name is arbitrary. We can set it e.g. to submariner.
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: submariner
spec: {}
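The same object can be applied from the CLI on the hub cluster. For example, assuming we saved it as managedclusterset.yaml:

$ oc apply -f managedclusterset.yaml
$ oc get managedclustersets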
Finally, we can create the two OpenShift clusters on AWS from the ACM dashboard. Go to Infrastructure -> Clusters -> Cluster list and click the “Create cluster” button. Then, let’s choose the “Amazon Web Services” tile with the already created credentials.
In the “Cluster Details” form we should set the name (ocp1 and then ocp2 for the second cluster) and the version of the OpenShift cluster (the “Release image” field). We should also assign it to the submariner cluster set.
Let’s take a look at the “Networking” form. We intentionally won’t change anything here. We will set the same IP address ranges for both the ocp1 and ocp2 clusters. In its default settings, Submariner requires non-overlapping Pod and Service CIDRs between the interconnected clusters. This approach prevents routing conflicts. We are going to break those rules, which results in conflicts between the internal IP addresses of the ocp1 and ocp2 clusters. We will see how Submariner helps to resolve such an issue.
It will take around 30-40 minutes to create both clusters. ACM will connect directly to our AWS account and create all the required resources there. As a result, our environment is ready. Let’s take a look at how it looks from the ACM dashboard perspective:
There is a single management (hub) cluster and two managed clusters. Both managed clusters are assigned to the submariner cluster set. If you have the same result as me, you can proceed to the next step.
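We can verify the same thing from the hub cluster CLI. Managed clusters are assigned to a cluster set with the cluster.open-cluster-management.io/clusterset label, so a quick check may look like this:

$ oc get managedclusters
$ oc get managedclusters -l cluster.open-cluster-management.io/clusterset=submariner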
Enable Submariner for OpenShift clusters with ACM
Install in the Target Managed Cluster Set
Submariner is available on OpenShift in the form of an add-on to ACM. As I mentioned before, it requires ACM ManagedClusterSet objects for grouping the clusters that should be connected. In order to enable Submariner for a specific cluster set, we need to view its details and switch to the “Submariner add-ons” tab. Then, we need to click the “Install Submariner add-ons” button. In the installation form, we have to choose the target clusters and enable the “Globalnet” feature to resolve the issue related to the overlapping Pod and Service CIDRs. The default value of the “Globalnet” CIDR is 242.0.0.0/8. If that’s fine for us, we can leave the text field empty and proceed to the next step.
In the next form, we configure the Submariner installation in each OpenShift cluster. We don’t have to change any value there. ACM will create an additional node in each OpenShift cluster using the c5d.large VM type. It will use that node for installing Multus CNI, a plugin for Kubernetes that enables attaching multiple network interfaces to pods. The “Globalnet” feature gives each cluster a subnet from a virtual Global Private Network, configured with the GlobalCIDR parameter. We will run a single instance of the Submariner gateway and leave the default libreswan cable driver.
Of course, we can also provide that configuration as YAML manifests. With that approach, we need to create the ManagedClusterAddOn and SubmarinerConfig objects for both the ocp1 and ocp2 clusters through the ACM engine. The Submariner Broker object has to be created on the hub cluster.
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp2
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp2
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp2-aws-creds
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp1
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp1
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp1-aws-creds
---
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: submariner-broker
  labels:
    cluster.open-cluster-management.io/backup: submariner
spec:
  globalnetEnabled: true
  globalnetCIDRRange: 242.0.0.0/8
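Assuming those manifests are saved in a single file, e.g. submariner-addon.yaml, we apply them on the hub cluster and can then track the add-on status in the per-cluster namespaces. Note that the submariner-broker namespace has to exist before the Broker object can be created there. A minimal sketch:

$ oc apply -f submariner-addon.yaml
$ oc get managedclusteraddon -n ocp1
$ oc get managedclusteraddon -n ocp2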
Verify the Status of Submariner Network
After installing the Submariner add-on in the target cluster set, you should see the same statuses for both the ocp1 and ocp2 clusters.
Assuming that you are logged in to all the clusters with the oc CLI, we can check the detailed status of the Submariner network with the subctl CLI. In order to do that, we should execute the following command:
$ subctl show all
It examines all the clusters one after the other and prints all the key Submariner components installed there. Let’s begin with the command output for the hub cluster. As you see, it runs the Submariner Broker component in the submariner-broker namespace:
Here’s the output for the ocp1 managed cluster. The global CIDR for that cluster is 242.1.0.0/16. This IP range will be used for exposing services to other clusters inside the same Submariner network.
On the other hand, here’s the output for the ocp2 managed cluster. The global CIDR for that cluster is 242.0.0.0/16. The connection between the ocp1 and ocp2 clusters is established. Therefore we can proceed to the last step in our exercise. Let’s run the sample apps on our OpenShift clusters!
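Should any of those statuses look wrong, the subctl CLI also provides diagnostic commands that are worth running before going further, for example:

$ subctl diagnose all
$ subctl show connections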
Export App to the Remote Cluster
Since we have already installed Submariner on both OpenShift clusters, we can deploy our sample applications. Let’s begin with caller-service. We will run it in the demo-apps namespace. Make sure you are in the ocp1 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
        - name: caller-service
          image: piomin/caller-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service
Then go to the caller-service directory and deploy the application using Skaffold as shown below. We can also expose the service outside the cluster using the OpenShift Route object:
$ cd caller-service
$ oc project demo-apps
$ skaffold run
$ oc expose svc/caller-service
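If you don’t want to build and deploy with Skaffold, you can just apply the manifest shown above directly (a sketch, assuming you saved it as caller-service.yaml and that the demo-apps project already exists):

$ oc apply -f caller-service.yaml -n demo-apps
$ oc expose svc/caller-service -n demo-apps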
Let’s switch to the callme-service app. Make sure you are in the ocp2 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our second app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: callme-service
Once again, we can deploy the app on OpenShift using Skaffold.
$ cd callme-service
$ oc project demo-apps
$ skaffold run
This time, instead of exposing the service outside of the cluster, we will export it to the Submariner network. Thanks to that, the caller-service app will be able to call it directly through the IPSec tunnel established between the clusters. We can do it using the following subctl CLI command:
$ subctl export service callme-service
That command creates the ServiceExport CRD object provided by the Submariner operator. We can apply the following YAML definition as well:
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: callme-service
  namespace: demo-apps
We can verify if everything turned out okay by checking the ServiceExport object status:
Submariner creates an additional Kubernetes Service with an IP address from the “Globalnet” CIDR pool to avoid overlapping service IPs.
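On the ocp2 cluster, we can list the objects created by the export. Here’s a sketch of that check; the GlobalIngressIP resource is part of the Globalnet feature and holds the address allocated from the 242.0.0.0/8 pool:

$ oc get serviceexport callme-service -n demo-apps
$ oc get globalingressip -n demo-apps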
Then, let’s switch to the ocp1 cluster. After exporting the Service from the ocp2 cluster, Submariner automatically creates the ServiceImport object on the connected clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: callme-service
  namespace: demo-apps
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
  type: ClusterSetIP
status:
  clusters:
    - cluster: ocp2
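We don’t have to create that object ourselves. To confirm it appeared on the ocp1 cluster, we can simply list it:

$ oc get serviceimport -n demo-apps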
Submariner exposes services on the clusterset.local domain. So, our service is now available at the URL callme-service.demo-apps.svc.clusterset.local. We can verify it by executing the following curl command inside the caller-service container. As you see, it uses the external IP address allocated by Submariner within the “Globalnet” subnet.
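For reference, such a check may look as follows (a sketch, assuming the caller-service Deployment from the previous step and a container image with curl available):

$ oc exec deploy/caller-service -n demo-apps -- \
    curl -s http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping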
Here’s the implementation of the @RestController responsible for handling requests coming to the caller-service app. As you see, it uses the Spring RestTemplate client to call the remote service using the callme-service.demo-apps.svc.clusterset.local URL provided by Submariner.
@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory
            .getLogger(CallerController.class);

    @Autowired
    Optional<BuildProperties> buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.or(Optional::empty), version);
        String response = restTemplate
                .getForObject("http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
}
Let’s just make a final test using the OpenShift caller-service Route and the GET /caller/ping endpoint. As you see, it calls the callme-service app successfully through the Submariner tunnel.
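A command-line version of that test could look like this (a sketch; it reads the Route host from the ocp1 cluster instead of hard-coding it):

$ curl http://$(oc get route caller-service -n demo-apps -o jsonpath='{.spec.host}')/caller/ping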
Final Thoughts
In this article, we analyzed a scenario where we interconnect two OpenShift clusters with overlapping CIDRs. I also showed you how to leverage the ACM dashboard to simplify the installation and configuration of Submariner on the managed clusters. It is worth mentioning that there are some other ways to interconnect multiple OpenShift clusters. For example, we can use Red Hat Service Interconnect, based on the open-source project Skupper, for that. In order to read more about it, you can refer to the following article on my blog.