Manage OpenShift with Terraform

This article will teach you how to create and manage OpenShift clusters with Terraform. For the purpose of this exercise, we will run OpenShift on Azure using the managed service called ARO (Azure Red Hat OpenShift). Cluster creation is the first part of the exercise. After that, we are going to install several operators on OpenShift and some apps that use the features provided by those operators. Of course, our main goal is to perform all the required steps with a single Terraform command.

Let me clarify some things before we begin. In this article, I’m not promoting or recommending Terraform as the best tool for managing OpenShift or Kubernetes clusters at scale. Usually, I prefer the GitOps approach for that. If you are interested in how to leverage tools like ACM (Advanced Cluster Management for Kubernetes) and Argo CD for managing multiple clusters with the GitOps approach, read that article. It describes the idea of continuous cluster management. From my perspective, Terraform is a better fit for one-time actions, for example, creating and configuring OpenShift for a demo or PoC and then removing it. We can also use Terraform to install Argo CD and then delegate all the next steps to it.

Anyway, let’s focus on our scenario. We will rely heavily on two Terraform providers: Azure and Kubernetes. So, it is worth at least taking a look at their documentation to familiarize yourself with the basics.

Prerequisites

Of course, you don’t have to perform this exercise on Azure with ARO. If you already have OpenShift running, you can skip the part related to cluster creation and just run the Terraform scripts responsible for installing the operators and apps. For the whole exercise, you need to install:

  1. Azure CLI (instructions) – once installed, log in to your Azure account and create a subscription. To check that everything works, run the following command: az account show (see the snippet after this list)
  2. Terraform CLI (instructions) – once you install the Terraform CLI, you can verify it with the following command: terraform version
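
Here’s a minimal sketch of these verification steps (assuming you log in interactively and use your default subscription):

$ az login
$ az account show
$ terraform version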

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should follow my instructions 🙂

Terraform Providers

The Terraform scripts for cluster creation are available inside the aro directory, while the scripts for cluster configuration live inside the servicemesh directory. Here’s the structure of our repository:
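
Roughly, it looks like this (a sketch based on the paths used throughout this article; the exact names of the *.tf files may differ):

.
├── aro                       # cluster creation (azurerm, azapi, random, local)
│   ├── *.tf
│   └── kubeconfig            # generated once the cluster is up
├── servicemesh               # operators, Istio configuration, demo apps
│   ├── *.tf
│   └── manifests             # Helm values for the Postgres charts
└── aro-with-servicemesh.sh   # runs both parts in sequence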

Firstly, let’s take a look at the list of Terraform providers used in our exercise. In general, we need providers to interact with Azure and with OpenShift through the Kubernetes API. In most cases, the official Hashicorp Azure Provider for Azure Resource Manager is enough (1). However, in a few cases, we will have to interact directly with the Azure REST API (for example, to create the OpenShift cluster object) through the azapi provider (2). The Hashicorp Random Provider will be used to generate a random domain name for our cluster (3). The rest of the providers allow us to interact with OpenShift. Once again, the official Hashicorp Kubernetes Provider is sufficient in most cases (4). We will also use the kubectl provider (5) and the Helm provider for installing the Postgres databases (6) used by the sample apps.

terraform {
  required_version = ">= 1.0"
  required_providers {
    // (1)
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.3.0"
    }
    // (2)
    azapi = {
      source  = "Azure/azapi"
      version = ">=1.0.0"
    }
    // (3)
    random = {
      source  = "hashicorp/random"
      version = "3.5.1"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {
}

provider "random" {}
provider "local" {}

Here’s the list of providers used in the Red Hat Service Mesh installation:

terraform {
  required_version = ">= 1.0"
  required_providers {
    // (4)
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.23.0"
    }
    // (5)
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.0"
    }
    // (6)
    helm = {
      source  = "hashicorp/helm"
      version = "2.11.0"
    }
  }
}

provider "kubernetes" {
  config_path = "aro/kubeconfig"
  config_context = var.cluster-context
}

provider "kubectl" {
  config_path = "aro/kubeconfig"
  config_context = var.cluster-context
}

provider "helm" {
  kubernetes {
    config_path = "aro/kubeconfig"
    config_context = var.cluster-context
  }
}

In order to install providers, we need to run the following command (you don’t have to do it now):

$ terraform init

Create Azure Red Hat OpenShift Cluster with Terraform

Unfortunately, there is no dedicated, official Terraform provider for creating Azure Red Hat OpenShift (ARO) clusters. There are some discussions about such a feature (you can find them here), but they haven’t led to anything yet. Maybe it will change in the future. However, creating an ARO cluster is not that complicated, since we can use the existing providers listed in the previous section. You can find an interesting guide in the Microsoft docs here. It was also the starting point for my work. I improved several things there, for example, to avoid using the az CLI in the scripts and to keep the full configuration in Terraform HCL.

Let’s analyze our Terraform manifest step by step. Here’s a list of the most important elements we need to place in the HCL file:

  1. We have to read some configuration data from the Azure client.
  2. I have an existing resource group with the openenv prefix, but you can use any name you want. That’s our main resource group.
  3. ARO requires a separate resource group, different from the main one.
  4. We need to create a virtual network for OpenShift. There is a dedicated subnet for the master nodes and another one for the worker nodes. All the parameters visible there are required. You can change the IP address ranges as long as the master and worker subnets don’t conflict with each other.
  5. ARO requires a dedicated service principal to create a cluster. Let’s create the Azure application, and then the service principal with a password. The password is auto-generated by Azure.
  6. The newly created service principal requires some privileges. Let’s assign it the “User Access Administrator” role and the network “Contributor” role. Then, we need to look up the service principal created by Azure under the “Azure Red Hat OpenShift RP” name and also assign the network “Contributor” role to it.
  7. All the required objects have now been created. There is no dedicated azurerm resource for the ARO cluster, so in order to define the cluster resource we need to leverage the azapi provider.
  8. The definition of the OpenShift cluster is available inside the body section. All the fields you see there are required to successfully create the cluster.

// (1)
data "azurerm_client_config" "current" {}
data "azuread_client_config" "current" {}

// (2)
data "azurerm_resource_group" "my_group" {
  name = "openenv-${var.guid}"
}

resource "random_string" "random" {
  length           = 10
  numeric          = false
  special          = false
  upper            = false
}

// (3)
locals {
  resource_group_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/aro-${random_string.random.result}-${data.azurerm_resource_group.my_group.location}"
  domain            = random_string.random.result
}

// (4)
resource "azurerm_virtual_network" "virtual_network" {
  name                = "aro-vnet-${var.guid}"
  address_space       = ["10.0.0.0/22"]
  location            = data.azurerm_resource_group.my_group.location
  resource_group_name = data.azurerm_resource_group.my_group.name
}
resource "azurerm_subnet" "master_subnet" {
  name                 = "master_subnet"
  resource_group_name  = data.azurerm_resource_group.my_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.0.0.0/23"]
  service_endpoints    = ["Microsoft.ContainerRegistry"]
  private_link_service_network_policies_enabled  = false
  depends_on = [azurerm_virtual_network.virtual_network]
}
resource "azurerm_subnet" "worker_subnet" {
  name                 = "worker_subnet"
  resource_group_name  = data.azurerm_resource_group.my_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.0.2.0/23"]
  service_endpoints    = ["Microsoft.ContainerRegistry"]
  depends_on = [azurerm_virtual_network.virtual_network]
}

// (5)
resource "azuread_application" "aro_app" {
  display_name = "aro_app"
  owners       = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal" "aro_app" {
  application_id               = azuread_application.aro_app.application_id
  app_role_assignment_required = false
  owners                       = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal_password" "aro_app" {
  service_principal_id = azuread_service_principal.aro_app.object_id
}

// (6)
resource "azurerm_role_assignment" "aro_cluster_service_principal_uaa" {
  scope                = data.azurerm_resource_group.my_group.id
  role_definition_name = "User Access Administrator"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "aro_cluster_service_principal_network_contributor_pre" {
  scope                = data.azurerm_resource_group.my_group.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "aro_cluster_service_principal_network_contributor" {
  scope                = azurerm_virtual_network.virtual_network.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
data "azuread_service_principal" "aro_app" {
  display_name = "Azure Red Hat OpenShift RP"
  depends_on = [azuread_service_principal.aro_app]
}
resource "azurerm_role_assignment" "aro_resource_provider_service_principal_network_contributor" {
  scope                = azurerm_virtual_network.virtual_network.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}

// (7)
resource "azapi_resource" "aro_cluster" {
  name      = "aro-cluster-${var.guid}"
  parent_id = data.azurerm_resource_group.my_group.id
  type      = "Microsoft.RedHatOpenShift/openShiftClusters@2023-07-01-preview"
  location  = data.azurerm_resource_group.my_group.location
  timeouts {
    create = "75m"
  }
  // (8)
  body = jsonencode({
    properties = {
      clusterProfile = {
        resourceGroupId      = local.resource_group_id
        pullSecret           = file("~/Downloads/pull-secret-latest.txt")
        domain               = local.domain
        fipsValidatedModules = "Disabled"
        version              = "4.12.25"
      }
      networkProfile = {
        podCidr              = "10.128.0.0/14"
        serviceCidr          = "172.30.0.0/16"
      }
      servicePrincipalProfile = {
        clientId             = azuread_service_principal.aro_app.application_id
        clientSecret         = azuread_service_principal_password.aro_app.value
      }
      masterProfile = {
        vmSize               = "Standard_D8s_v3"
        subnetId             = azurerm_subnet.master_subnet.id
        encryptionAtHost     = "Disabled"
      }
      workerProfiles = [
        {
          name               = "worker"
          vmSize             = "Standard_D8s_v3"
          diskSizeGB         = 128
          subnetId           = azurerm_subnet.worker_subnet.id
          count              = 3
          encryptionAtHost   = "Disabled"
        }
      ]
      apiserverProfile = {
        visibility           = "Public"
      }
      ingressProfiles = [
        {
          name               = "default"
          visibility         = "Public"
        }
      ]
    }
  })
  depends_on = [
    azurerm_subnet.worker_subnet,
    azurerm_subnet.master_subnet,
    azuread_service_principal_password.aro_app,
    azurerm_role_assignment.aro_resource_provider_service_principal_network_contributor
  ]
}

output "domain" {
  value = local.domain
}

Save Kubeconfig

Once we successfully create the OpenShift cluster, we need to obtain and save the kubeconfig file. It will allow Terraform to interact with the cluster through the Kubernetes API. In order to get the kubeconfig content, we need to call the Azure listAdminCredentials REST endpoint. It is the same as calling the az aro get-admin-kubeconfig command with the CLI. The endpoint returns JSON with base64-encoded content. After decoding it from JSON and Base64, we save the content inside the kubeconfig file in the current directory.
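
For reference, the equivalent CLI call (with the cluster and resource group names used in this article) would be:

$ az aro get-admin-kubeconfig -n aro-cluster-<guid> -g openenv-<guid> -f kubeconfig

The azapi_resource_action resource below performs the same API call, so the whole flow stays in Terraform.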

resource "azapi_resource_action" "test" {
  type        = "Microsoft.RedHatOpenShift/openShiftClusters@2023-07-01-preview"
  resource_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/openenv-${var.guid}/providers/Microsoft.RedHatOpenShift/openShiftClusters/aro-cluster-${var.guid}"
  action      = "listAdminCredentials"
  method      = "POST"
  response_export_values = ["*"]
}

output "kubeconfig" {
  value = base64decode(jsondecode(azapi_resource_action.test.output).kubeconfig)
}

resource "local_file" "kubeconfig" {
  content  =  base64decode(jsondecode(azapi_resource_action.test.output).kubeconfig)
  filename = "kubeconfig"
  depends_on = [azapi_resource_action.test]
}
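
Once the file exists, you can optionally run a quick sanity check from the repository root (assuming the oc CLI is installed and the cluster API is reachable):

$ oc get nodes --kubeconfig aro/kubeconfig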

Install OpenShift Operators with Terraform

Finally, we can interact with the existing OpenShift cluster via the kubeconfig file. In the first step, we will deploy some operators. In OpenShift, operators are the preferred way of installing more advanced apps (for example, those consisting of several Deployments). Red Hat provides a set of supported operators that allow us to extend OpenShift’s functionality. It can be, for example, a service mesh, a clustered database, or a message broker.

Let’s imagine we want to install a service mesh on OpenShift. There are dedicated operators for that. The OpenShift Service Mesh operator is built on top of the open-source Istio project. We will also install the OpenShift Distributed Tracing (Jaeger) and Kiali operators. In order to do that, we need to define the Subscription CRD object. Also, if we install an operator in a namespace other than openshift-operators, we have to create the OperatorGroup CRD object. Here’s the Terraform HCL script that installs our operators.

// (1)
resource "kubernetes_namespace" "openshift-distributed-tracing" {
  metadata {
    name = "openshift-distributed-tracing"
  }
}
resource "kubernetes_manifest" "tracing-group" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1"
    "kind"       = "OperatorGroup"
    "metadata"   = {
      "name"      = "openshift-distributed-tracing"
      "namespace" = "openshift-distributed-tracing"
    }
    "spec" = {
      "upgradeStrategy" = "Default"
    }
  }
}
resource "kubernetes_manifest" "tracing" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata" = {
      "name"      = "jaeger-product"
      "namespace" = "openshift-distributed-tracing"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "jaeger-product"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (2)
resource "kubernetes_manifest" "kiali" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata" = {
      "name"      = "kiali-ossm"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "kiali-ossm"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (3)
resource "kubernetes_manifest" "ossm" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata"   = {
      "name"      = "servicemeshoperator"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "servicemeshoperator"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (4)
resource "kubernetes_manifest" "ossmconsole" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata"   = {
      "name"      = "ossmconsole"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "candidate"
      "installPlanApproval" = "Automatic"
      "name"                = "ossmconsole"
      "source"              = "community-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

After installing the operators, we wait a moment for them to become ready (1) and then proceed to the service mesh configuration. We need to use the CRD objects installed by the operators. The Kubernetes Terraform provider isn’t a perfect choice for that, since it validates manifests against the cluster API at plan time, so the CRDs would have to exist before applying the whole script. Therefore, we will switch to the kubectl provider, which just applies the object without any initial verification. We need to create an Istio control plane using the ServiceMeshControlPlane object (2). As you can see, it also enables distributed tracing with Jaeger and a dashboard with Kiali. Once the control plane is ready, we may proceed to the next steps. We will create all the objects responsible for the Istio configuration, including VirtualService, DestinationRule, and Gateway (3).

resource "kubernetes_namespace" "istio" {
  metadata {
    name = "istio"
  }
}

// (1)
resource "time_sleep" "wait_120_seconds" {
  depends_on = [kubernetes_manifest.ossm]

  create_duration = "120s"
}

// (2)
resource "kubectl_manifest" "basic" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio]
  yaml_body = <<YAML
kind: ServiceMeshControlPlane
apiVersion: maistra.io/v2
metadata:
  name: basic
  namespace: istio
spec:
  version: v2.4
  tracing:
    type: Jaeger
    sampling: 10000
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  addons:
    jaeger:
      install:
        storage:
          type: Memory
    prometheus:
      enabled: true
    kiali:
      enabled: true
    grafana:
      enabled: true
YAML
}

resource "kubectl_manifest" "console" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio]
  yaml_body = <<YAML
kind: OSSMConsole
apiVersion: kiali.io/v1alpha1
metadata:
  name: ossmconsole
  namespace: istio
spec:
  kiali:
    serviceName: ''
    serviceNamespace: ''
    servicePort: 0
    url: ''
YAML
}

resource "time_sleep" "wait_60_seconds_2" {
  depends_on = [kubectl_manifest.basic]

  create_duration = "60s"
}

// (3)
resource "kubectl_manifest" "access" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio
spec:
  members:
    - demo-apps
YAML
}

resource "kubectl_manifest" "gateway" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
  namespace: demo-apps
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - quarkus-insurance-app.apps.${var.domain}
        - quarkus-person-app.apps.${var.domain}
YAML
}

resource "kubectl_manifest" "quarkus-insurance-app-vs" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-insurance-app-vs
  namespace: demo-apps
spec:
  hosts:
    - quarkus-insurance-app.apps.${var.domain}
  gateways:
    - microservices-gateway
  http:
    - match:
        - uri:
            prefix: "/insurance"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-insurance-app
          weight: 100
YAML
}

resource "kubectl_manifest" "quarkus-person-app-dr" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: quarkus-person-app-dr
  namespace: demo-apps
spec:
  host: quarkus-person-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
YAML
}

resource "kubectl_manifest" "quarkus-person-app-vs-via-gw" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs-via-gw
  namespace: demo-apps
spec:
  hosts:
    - quarkus-person-app.apps.${var.domain}
  gateways:
    - microservices-gateway
  http:
    - match:
      - uri:
          prefix: "/person"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 100
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 0
YAML
}

resource "kubectl_manifest" "quarkus-person-app-vs" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs
  namespace: demo-apps
spec:
  hosts:
    - quarkus-person-app
  http:
    - route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 100
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 0
YAML
}

Finally, we will run our sample Quarkus apps, which communicate through the Istio mesh and connect to Postgres databases. The script is quite large. All the apps run in the demo-apps namespace (1). They connect to the Postgres databases installed with the Terraform Helm provider from the Bitnami chart (2). Finally, we create the Deployments for two apps: person-service and insurance-service (3). There are two versions of the person microservice. Don’t focus on the features of the apps. They are here just to show the subsequent layers of the installation process: we start with the operators and CRDs, then move to the Istio configuration, and finally install our custom apps.

// (1)
resource "kubernetes_namespace" "demo-apps" {
  metadata {
    name = "demo-apps"
  }
}

resource "kubernetes_secret" "person-db-secret" {
  depends_on = [kubernetes_namespace.demo-apps]
  metadata {
    name      = "person-db"
    namespace = "demo-apps"
  }
  data = {
    postgres-password = "123456"
    password          = "123456"
    database-user     = "person-db"
    database-name     = "person-db"
  }
}

resource "kubernetes_secret" "insurance-db-secret" {
  depends_on = [kubernetes_namespace.demo-apps]
  metadata {
    name      = "insurance-db"
    namespace = "demo-apps"
  }
  data = {
    postgres-password = "123456"
    password          = "123456"
    database-user     = "insurance-db"
    database-name     = "insurance-db"
  }
}

// (2)
resource "helm_release" "person-db" {
  depends_on = [kubernetes_namespace.demo-apps]
  chart            = "postgresql"
  name             = "person-db"
  namespace        = "demo-apps"
  repository       = "https://charts.bitnami.com/bitnami"

  values = [
    file("manifests/person-db-values.yaml")
  ]
}
resource "helm_release" "insurance-db" {
  depends_on = [kubernetes_namespace.demo-apps]
  chart            = "postgresql"
  name             = "insurance-db"
  namespace        = "demo-apps"
  repository       = "https://charts.bitnami.com/bitnami"

  values = [
    file("manifests/insurance-db-values.yaml")
  ]
}

// (3)
resource "kubernetes_deployment" "quarkus-insurance-app" {
  depends_on = [helm_release.insurance-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-insurance-app"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-insurance-app"
        version = "v1"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-insurance-app"
          version = "v1"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-insurance-app"
          image = "piomin/quarkus-insurance-app:v1"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "insurance-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "insurance-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "insurance-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "quarkus-insurance-app" {
  depends_on = [helm_release.insurance-db, time_sleep.wait_60_seconds_2]
  metadata {
    name = "quarkus-insurance-app"
    namespace = "demo-apps"
    labels = {
      app = "quarkus-insurance-app"
    }
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "quarkus-insurance-app"
    }
    port {
      port = 8080
      name = "http"
    }
  }
}

resource "kubernetes_deployment" "quarkus-person-app-v1" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-person-app-v1"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-person-app"
        version = "v1"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-person-app"
          version = "v1"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-person-app"
          image = "piomin/quarkus-person-app:v1"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "person-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_deployment" "quarkus-person-app-v2" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-person-app-v2"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-person-app"
        version = "v2"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-person-app"
          version = "v2"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-person-app"
          image = "piomin/quarkus-person-app:v2"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "person-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "quarkus-person-app" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name = "quarkus-person-app"
    namespace = "demo-apps"
    labels = {
      app = "quarkus-person-app"
    }
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "quarkus-person-app"
    }
    port {
      port = 8080
      name = "http"
    }
  }
}

Applying Terraform Scripts

Finally, we can apply the whole Terraform configuration described in this article. Here’s the aro-with-servicemesh.sh script responsible for running the required Terraform commands. It is placed in the repository root directory. In the first step, we go to the aro directory to apply the scripts responsible for creating the OpenShift cluster. The domain name is automatically generated by Terraform, so we export it using the terraform output command. After that, we apply the scripts with the operators and the Istio configuration. In order to do everything automatically, we pass the location of the kubeconfig file and the generated domain name as variables.

#! /bin/bash

cd aro
terraform init
terraform apply -auto-approve
domain="apps.$(terraform output -raw domain).eastus.aroapp.io"

cd ../servicemesh
terraform init
terraform apply -auto-approve -var kubeconfig=../aro/kubeconfig -var domain=$domain

Let’s run the aro-with-servicemesh.sh script. Once you do, you should see output similar to the one below. In the beginning, Terraform creates several objects required by the ARO cluster, like the virtual network or the service principal. Once those resources are ready, it starts the main part – the ARO installation.

Let’s switch to the Azure Portal. As you can see, the installation is in progress. There are several other newly created resources. Of course, there is also the resource representing the OpenShift cluster.

[Screenshot: openshift-terraform-azure-portal]

Now, arm yourself with patience. You can easily go get a coffee…

You can verify the progress, e.g. by displaying a list of virtual machines. If you see all three master and three worker VMs running, it means that we are slowly approaching the end.

[Screenshot: openshift-terraform-virtual-machines]
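
Here’s how such a check might look from the CLI, using the ARO-managed resource group created earlier (aro-<random-suffix>-<location>):

$ az vm list -d -g aro-<random-suffix>-<location> -o table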

It may take even more than 40 minutes. That’s why I overrode the default timeout for the azapi resource, setting it to 75 minutes. Once the cluster is ready, Terraform will connect to the OpenShift instance to install the operators there. In the meantime, we can switch to the Azure Portal and see the details of the ARO cluster. It displays, among others, the OpenShift Console URL. Let’s log in to the console.

In order to obtain the admin password, we need to run the following command (for my cluster and resource group names):

$ az aro list-credentials -n aro-cluster-p2pvg -g openenv-p2pvg

Here’s our OpenShift console:

Let’s get back to the installation process. The first part has just finished. Now, the script executes the terraform commands in the servicemesh directory. As you can see, it installed our operators.

Let’s check out how it looks in the OpenShift Console. Go to the Operators -> Installed Operators menu item.

[Screenshot: openshift-terraform-operators]
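
You can verify the same thing from the command line with the generated kubeconfig (csv is the short name for the ClusterServiceVersion objects created by OLM):

$ export KUBECONFIG=aro/kubeconfig
$ oc get csv -n openshift-operators
$ oc get csv -n openshift-distributed-tracing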

Of course, the installation continues in the background. After installing the operators, Terraform creates the Istio control plane using the CRD object.

Let’s switch to the OpenShift Console once again. Go to the istio project. In the list of installed operators, find Red Hat OpenShift Service Mesh and then go to the Istio Service Mesh Control Plane tab. You should see the basic object. As you can see, all 9 required components, including the Istio, Kiali, and Jaeger instances, have been successfully installed.

[Screenshot: openshift-terraform-istio]
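
The same status is visible from the CLI (smcp is the short name for the ServiceMeshControlPlane resource):

$ oc get smcp -n istio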

And finally, the last part of our exercise. The installation is finished. Terraform applied the Deployments with our Postgres databases and the Quarkus apps.

In order to see the list of apps, we can go to the Topology view in the Developer perspective. All the pods are running. As you can see, there is also a Kiali console available. We can click that link.

[Screenshot: openshift-terraform-apps]

In the Kiali dashboard, we can see a detailed view of our service mesh. For example, there is a graph visualizing the traffic between the services.

Final Thoughts

If you use Terraform for managing your cloud infrastructure, this article is for you. Have you ever doubted whether it is possible to easily create and configure an OpenShift cluster with Terraform? This article should dispel those doubts. You can also easily create your own ARO cluster just by cloning this repository and running a single script on your cloud account. Enjoy 🙂
