Migrate from Kubernetes to OpenShift in the GitOps Way
In this article, you will learn how to migrate your apps from Kubernetes to OpenShift in the GitOps way, using tools like Kustomize, Helm, operators, and Argo CD. We will discuss the best practices in that area, which requires us to avoid approaches like running pods in privileged mode. We will focus not just on running your custom apps, but mostly on popular pieces of cloud-native or legacy software, including:
- Argo CD
- Istio
- Apache Kafka
- Postgres
- HashiCorp Vault
- Prometheus
- Redis
- Cert Manager
Finally, we will migrate our sample Spring Boot app. I will also show you how to build such an app on Kubernetes and OpenShift in the same way using the Shipwright tool. However, before we start, let’s discuss some differences between “vanilla” Kubernetes and OpenShift.
Introduction
What are the key differences between Kubernetes and OpenShift? That’s probably the first question you will ask yourself when considering migration from Kubernetes. Today, I will focus only on those aspects that impact running the apps from our list. First of all, OpenShift is built on top of Kubernetes and is fully compatible with Kubernetes APIs and resources. If you can do something on Kubernetes, you can do it on OpenShift in the same way, as long as it doesn’t compromise the security policy. OpenShift comes with additional security policies out of the box. For example, by default, it won’t allow you to run containers as the root user.
Security aside, the mere fact that you can do something doesn’t mean you should do it that way. You can run images from Docker Hub, but Red Hat provides many supported container images built from Red Hat Enterprise Linux. You can find a full list of supported images here. Although you can install popular software on OpenShift using Helm charts, Red Hat provides various supported Kubernetes operators for that purpose. With those operators, you can be sure that the installation will go without any problems, and the solution might be better integrated with OpenShift. We will analyze all those things based on examples from the tools list.
Source Code
If you would like to try it by yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. I will explain the structure of our sample in detail later, so after cloning the Git repository, you can just follow my instructions.
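You can clone the repository with the following command (it’s the same repository referenced later in the Argo CD Application definition):

```shell
$ git clone https://github.com/piomin/kubernetes-to-openshift-argocd.git
$ cd kubernetes-to-openshift-argocd
```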
Install Argo CD
Use Official Helm Chart
In the first step, we will install Argo CD on OpenShift. I’m assuming that on Kubernetes, you’re using the official Helm chart for that. In order to install that chart, we need to add the following Helm repository:
```shell
$ helm repo add argo https://argoproj.github.io/argo-helm
```
Then, we can install Argo CD in the `argocd` namespace on OpenShift with the following command. The Argo CD Helm chart provides some parameters dedicated to OpenShift. We need to enable arbitrary UIDs for the repo server by setting the `openshift.enabled` property to `true`. If we want to access the Argo CD dashboard from outside the cluster, we should expose it as a `Route`. To do that, we need to enable the `server.route.enabled` property and set the hostname using the `server.route.hostname` parameter (`piomin.eastus.aroapp.io` is my OpenShift domain).
```shell
$ helm install argocd argo/argo-cd -n argocd --create-namespace \
  --set openshift.enabled=true \
  --set server.route.enabled=true \
  --set server.route.hostname=argocd.apps.piomin.eastus.aroapp.io
```
After that, we can access the Argo CD dashboard using the `Route` address as shown below. The `admin` user password may be taken from the `argocd-initial-admin-secret` `Secret` generated by the Helm chart.
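For example, we can decode it with a command like this (assuming you are logged in with the `oc` CLI):

```shell
$ oc get secret argocd-initial-admin-secret -n argocd \
    -o jsonpath='{.data.password}' | base64 -d
```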
Use the OpenShift GitOps Operator (Recommended Way)
The solution presented in the previous section works fine. However, it is not the optimal approach for OpenShift. A better idea in that case is to use the OpenShift GitOps operator. Firstly, we should find the “Red Hat OpenShift GitOps” operator in the “Operator Hub” section of the OpenShift Console. Then, we have to install the operator.
During the installation, the operator automatically creates an Argo CD instance in the `openshift-gitops` namespace. The OpenShift GitOps operator automatically exposes the Argo CD dashboard through a `Route`. It is also integrated with OpenShift auth, so we can use cluster credentials to sign in there.
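For example, assuming the default route name created by the operator (`openshift-gitops-server`), we can find the dashboard address as follows:

```shell
$ oc get route openshift-gitops-server -n openshift-gitops \
    -o jsonpath='{.spec.host}'
```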
Install Redis, Postgres and Apache Kafka
OpenShift Support in Bitnami Helm Charts
Firstly, let’s assume that we use Bitnami Helm charts to install all three tools from the chapter title (Redis, Postgres, Kafka) on Kubernetes. Fortunately, the latest versions of Bitnami Helm charts provide out-of-the-box compatibility with the OpenShift platform. Let’s analyze what it means.
Beginning with version `4.11`, OpenShift introduced a new Security Context Constraint (SCC) called `restricted-v2`. In OpenShift, security context constraints allow us to control the permissions assigned to pods. The `restricted-v2` SCC includes the minimal set of privileges usually required for a generic workload to run. It is the most restrictive policy that matches the current pod security standards. As I mentioned before, the latest versions of the most popular Bitnami Helm charts support the `restricted-v2` SCC. We can check which charts support that feature by verifying whether they provide the `global.compatibility.openshift.adaptSecurityContext` parameter. The default value of that parameter is `auto`, which means the adaptation is applied only if the detected running cluster is OpenShift.
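In a values file, it looks as follows (a minimal sketch; besides `auto`, the chart should also accept values that force or disable the adaptation):

```yaml
global:
  compatibility:
    openshift:
      # auto: adapt security contexts only when an OpenShift cluster is detected
      adaptSecurityContext: auto
```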
So, in short, we don’t have to change anything in the Helm chart configuration used on Kubernetes to make it work on OpenShift as well. However, that doesn’t mean we won’t change the configuration at all. Let’s analyze it tool by tool.
Install Redis on OpenShift with Helm Chart
In the first step, let’s add the Bitnami Helm repository with the following command:
```shell
$ helm repo add bitnami https://charts.bitnami.com/bitnami
```
Then, we can install a Redis cluster with a single master node and three replicas in the `redis` namespace using the following command:
```shell
$ helm install redis bitnami/redis -n redis --create-namespace
```
After installing the chart, we can display the list of pods running in the `redis` namespace:
```shell
$ oc get po -n redis
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          5m31s
redis-replicas-0   1/1     Running   0          5m31s
redis-replicas-1   1/1     Running   0          4m44s
redis-replicas-2   1/1     Running   0          4m3s
```
Let’s take a look at the `securityContext` section inside one of the Redis cluster pods. It contains fields characteristic of the `restricted-v2` SCC, which removes `runAsUser`, `runAsGroup`, and `fsGroup`, and lets the platform assign the allowed default IDs.
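We can also check which SCC was assigned to a pod by looking at the `openshift.io/scc` annotation, for example:

```shell
$ oc get pod redis-master-0 -n redis \
    -o jsonpath='{.metadata.annotations.openshift\.io/scc}'
restricted-v2
```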
However, let’s stop for a moment to analyze the current situation. We installed Redis on OpenShift using the Bitnami Helm chart. By default, this chart is based on the Debian-based Redis image provided by Bitnami on Docker Hub. On the other hand, Red Hat provides its own build of the Redis image based on RHEL 9. Consequently, this image is more suitable for running on OpenShift.
In order to use a different Redis image with the Bitnami Helm chart, we need to override the `registry`, `repository`, and `tag` fields in the `image` section. The full address of the current latest Red Hat Redis image is `registry.redhat.io/rhel9/redis-7:1-16`. To make the Bitnami chart work with that image, we also need to override the default data path to `/var/lib/redis/data` and disable the read-only root filesystem in the container security context of the replica pods.
```yaml
image:
  tag: 1-16
  registry: registry.redhat.io
  repository: rhel9/redis-7
master:
  persistence:
    path: /var/lib/redis/data
replica:
  persistence:
    path: /var/lib/redis/data
  containerSecurityContext:
    readOnlyRootFilesystem: false
```
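Assuming we save these values in the `values-redis.yaml` file (the same file name used later in the OpenShift overlay), the installation command becomes:

```shell
$ helm install redis bitnami/redis -n redis --create-namespace \
    -f values-redis.yaml
```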
Install Postgres on OpenShift with Helm Chart
With Postgres, the situation is very similar to what we saw with Redis. The Bitnami Helm chart also supports the OpenShift `restricted-v2` SCC, and Red Hat provides a Postgres image based on RHEL 9. Once again, we need to override some chart parameters to adapt it to a different image than the default one provided by Bitnami.
```yaml
image:
  tag: 1-54
  registry: registry.redhat.io
  repository: rhel9/postgresql-15
primary:
  containerSecurityContext:
    readOnlyRootFilesystem: false
  persistence:
    mountPath: /var/lib/pgsql
  extraEnvVars:
    - name: POSTGRESQL_ADMIN_PASSWORD
      value: postgresql123
postgresqlDataDir: /var/lib/pgsql/data
```
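Similarly, with the values stored in `values-postgres.yaml` (as in the OpenShift overlay shown later), we can install the chart as follows:

```shell
$ helm install postgresql bitnami/postgresql -n postgresql --create-namespace \
    -f values-postgres.yaml
```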
Of course, we can also consider switching to one of the available Postgres operators. From the “Operator Hub” section, we can install Postgres using, for example, the Crunchy or EDB operators. However, these are not operators provided by Red Hat. You can use them on “vanilla” Kubernetes as well, so in that case, the migration to OpenShift also won’t be complicated.
Install Kafka on OpenShift with the Strimzi Operator
The situation is slightly different in the case of Apache Kafka. Of course, we can use the Kafka Helm chart provided by Bitnami. However, Red Hat provides a supported version of Kafka through the Strimzi operator. This operator is a part of the Red Hat product ecosystem and is available commercially as AMQ Streams. In order to install Kafka with AMQ Streams on OpenShift, we need to install the operator first.
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```
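Once the `Subscription` is applied, we can verify that the operator has been installed by listing the ClusterServiceVersions in the target namespace:

```shell
$ oc get csv -n openshift-operators
```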
Once we install the operator with the Strimzi CRDs, we can provision a Kafka instance on OpenShift. In order to do that, we need to define the `Kafka` object. The name of the cluster is `my-cluster`. We should install it after a successful installation of the operator CRDs, so we set a higher value of the Argo CD `sync-wave` parameter than for the `amq-streams` `Subscription` object. Argo CD should also ignore missing CRDs installed by the operator during sync, thanks to the `SkipDryRunOnMissingResource` option.
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.6'
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.6.0
    replicas: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3
```
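Since the `entityOperator` section enables the Topic Operator, we can also manage topics declaratively. Here’s a minimal, hypothetical `KafkaTopic` for our `my-cluster` instance:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    # this label tells the Topic Operator which Kafka cluster owns the topic
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
```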
GitOps Strategy for Kubernetes and OpenShift
In this section, we will focus on comparing the differences in the GitOps manifests between Kubernetes and OpenShift. We will use Kustomize to configure two overlays: `openshift` and `kubernetes`. Here’s the structure of our configuration repository:
```shell
.
├── base
│   ├── kustomization.yaml
│   └── namespaces.yaml
└── overlays
    ├── kubernetes
    │   ├── kustomization.yaml
    │   ├── namespaces.yaml
    │   ├── values-cert-manager.yaml
    │   └── values-vault.yaml
    └── openshift
        ├── cert-manager-operator.yaml
        ├── kafka-operator.yaml
        ├── kustomization.yaml
        ├── service-mesh-operator.yaml
        ├── values-postgres.yaml
        ├── values-redis.yaml
        └── values-vault.yaml
```
Configuration for Kubernetes
In addition to the previously discussed tools, we will also install “cert-manager”, Prometheus, and Vault using Helm charts. Kustomize allows us to define a list of managed charts in the `helmCharts` section. Here’s the `kustomization.yaml` file containing the full set of installed charts:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - namespaces.yaml
helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
  - name: kafka
    repo: https://charts.bitnami.com/bitnami
    releaseName: kafka
    namespace: kafka
  - name: cert-manager
    repo: https://charts.jetstack.io
    releaseName: cert-manager
    namespace: cert-manager
    valuesFile: values-cert-manager.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
  - name: prometheus
    repo: https://prometheus-community.github.io/helm-charts
    releaseName: prometheus
    namespace: prometheus
  - name: istiod
    repo: https://istio-release.storage.googleapis.com/charts
    releaseName: istio
    namespace: istio-system
```
overlays/kubernetes/kustomization.yaml

For some of the charts, we need to override the default Helm parameters. Here’s the `values-vault.yaml` file with the parameters for Vault, where we enable the development mode and the UI dashboard:
```yaml
server:
  dev:
    enabled: true
  ui:
    enabled: true
```
overlays/kubernetes/values-vault.yaml

Let’s also customize the default behavior of the “cert-manager” chart with the following values:
```yaml
installCRDs: true
startupapicheck:
  enabled: false
```
overlays/kubernetes/values-cert-manager.yaml

Configuration for OpenShift
Then, we can switch to the configuration for OpenShift. Vault still has to be installed with the Helm chart, but for “cert-manager” we can use the operator provided by Red Hat. Since OpenShift comes with built-in Prometheus, we don’t need to install it. We will also replace the Istio Helm chart with the Red Hat-supported OpenShift Service Mesh operator. Here’s the `kustomization.yaml` for OpenShift:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - kafka-operator.yaml
  - cert-manager-operator.yaml
  - service-mesh-operator.yaml
helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
    valuesFile: values-redis.yaml
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
    valuesFile: values-postgres.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
```
overlays/openshift/kustomization.yaml

For Vault, we should enable the integration with OpenShift and the support for the `Route` object. Red Hat provides a Vault image based on UBI in the `registry.connect.redhat.com/hashicorp/vault` registry. Here’s the `values-vault.yaml` file for OpenShift:
```yaml
server:
  dev:
    enabled: true
  route:
    enabled: true
    host: ""
    tls: null
  image:
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.16.1-ubi"
global:
  openshift: true
injector:
  enabled: false
```
overlays/openshift/values-vault.yaml

In order to install the operators, we need to define at least the `Subscription` object. Here’s the subscription for the OpenShift Service Mesh operator. After installing the operator, we can create a control plane in the `istio-system` namespace using the `ServiceMeshControlPlane` CRD object. To apply that object right after the operator installation, we once again need the Argo CD sync waves and the `SkipDryRunOnMissingResource` option:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  tracing:
    type: None
    sampling: 10000
  policy:
    type: Istiod
  addons:
    grafana:
      enabled: false
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: false
    prometheus:
      enabled: false
  telemetry:
    type: Istiod
  version: v2.5
```
overlays/openshift/service-mesh-operator.yaml

Since the “cert-manager” operator is installed in a different namespace than `openshift-operators`, we also need to define the `OperatorGroup` object.
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: openshift-cert-manager-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  targetNamespaces:
    - cert-manager
```
overlays/openshift/cert-manager-operator.yaml

Finally, there is nothing to add for Prometheus, since OpenShift comes with the built-in monitoring stack.
Apply the Configuration with Argo CD
Here’s the Argo CD `Application` responsible for installing our sample configuration on OpenShift. We should create it in the `openshift-gitops` namespace.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: install
  namespace: openshift-gitops
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: overlays/openshift
    repoURL: 'https://github.com/piomin/kubernetes-to-openshift-argocd.git'
    targetRevision: HEAD
```
Before that, we need to enable the Helm chart inflation generator for Kustomize in Argo CD. In order to do that, we can add the `kustomizeBuildOptions` parameter to the `openshift-gitops` `ArgoCD` object, as shown below.
```yaml
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  # ...
  kustomizeBuildOptions: '--enable-helm'
```
After creating the Argo CD `Application` and triggering the sync process, the installation starts on OpenShift.
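We can also render the same overlay locally to verify what Argo CD will apply, for example:

```shell
$ kustomize build overlays/openshift --enable-helm
```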
Build App Images
We installed several software solutions, including the most popular databases, message brokers, and security tools. However, now we want to build and run our own apps. How do we migrate them from Kubernetes to OpenShift? Of course, we can run the app images exactly the same way as on Kubernetes. On the other hand, we can build them on OpenShift using the Shipwright project. We can install it on OpenShift using the “Builds for Red Hat OpenShift Operator”.
After that, we need to create the `ShipwrightBuild` object. It has to contain the name of the target namespace for running Shipwright in the `targetNamespace` field. In my case, the target namespace is `builds-demo`. For a detailed description of Shipwright builds, you can refer to another article on my blog.
```yaml
apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: openshift-builds
spec:
  targetNamespace: builds-demo
```
With Shipwright, we can easily switch between multiple build strategies on Kubernetes as well as on OpenShift. For example, on OpenShift we can use the built-in source-to-image (S2I) strategy, while on Kubernetes we can choose, e.g., Kaniko or Cloud Native Buildpacks.
```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-spring-kotlin-build
  namespace: builds-demo
spec:
  output:
    image: quay.io/pminkows/sample-kotlin-spring:1.0-shipwright
    pushSecret: pminkows-piomin-pull-secret
  source:
    git:
      url: https://github.com/piomin/sample-spring-kotlin-microservice.git
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
```
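The `Build` object only defines the build recipe. To actually trigger a build, we create a `BuildRun` referencing it. Here’s a minimal sketch (in the `v1beta1` API, the run points at the build through the `spec.build.name` field):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  # generateName allows triggering multiple runs of the same Build
  generateName: sample-spring-kotlin-build-run-
  namespace: builds-demo
spec:
  build:
    name: sample-spring-kotlin-build
```

Since it uses `generateName`, we should submit it with `oc create -f` rather than `oc apply -f`.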
Final Thoughts
Migration from Kubernetes to OpenShift is not a painful process. Many popular Helm charts support the OpenShift `restricted-v2` SCC. Thanks to that, in some cases, you don’t need to change anything. However, sometimes it’s worth switching to the version of a particular tool supported by Red Hat.