Azure DevOps with OpenShift

This article will teach you how to integrate Azure DevOps with an OpenShift cluster to build and deploy your app there. You will learn how to run Azure Pipelines self-hosted agents on OpenShift and use the oc client from your pipelines. If you are interested in Azure DevOps, you can read my previous article, where that platform and Terraform are used together to prepare the environment and run a Spring Boot app on the Azure cloud.

Before we begin, let me clarify some things and explain my decisions. If you have searched for information about Azure DevOps and OpenShift integration, you have probably come across several articles about the Red Hat Azure DevOps extension for OpenShift. I won’t use that extension. In my opinion, it is not actively developed right now, and it imposes some limitations that may complicate our integration process. On the other hand, it doesn’t offer many features we would miss, so we can do just as well without it.

You will also find articles that show how to prepare a self-hosted agent image based on the Red Hat dotnet-runtime base image (e.g. this one). I won’t use that approach either. Instead, I’m going to leverage the image built on top of UBI9 provided by the tool called Blue Agent (formerly Azure Pipelines Agent). It is a self-hosted Azure Pipelines agent for Kubernetes: easy to run, secure, and auto-scaled. We will have to modify that image slightly, but more details later.

Prerequisites

In order to proceed with the exercise, we need an active subscription to the Azure cloud and an instance of Azure DevOps. We also need a running OpenShift cluster that is accessible from our pipelines on Azure DevOps. I’m running that cluster on the Azure cloud as well, using the Azure Red Hat OpenShift (ARO) managed service. The details of Azure DevOps creation or OpenShift installation are out of the scope of this article.

Source Code

If you would like to try this exercise by yourself, you can always take a look at my source code. Today you will have to clone two sample Git repositories. The first one contains the sample Spring Boot app used in our exercise. We will build that app with Azure Pipelines and deploy it on OpenShift. The second repository is a fork of the official Blue Agent repository. It contains a new version of the Dockerfile for our self-hosted agent image based on Red Hat UBI9. Once you clone both of these repositories, you just need to follow my instructions.
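
For convenience, here are the clone commands. The first URL also appears later in the build template; the location of the Blue Agent fork is my assumption, so adjust it to wherever the fork lives:

$ git clone https://github.com/piomin/sample-spring-boot-web.git
$ git clone https://github.com/piomin/blue-agent.git
ShellSession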

Azure DevOps Self-Hosted Agent on OpenShift with Blue Agent

Build the Agent Image

In this section, we will build the image of the self-hosted agent based on UBI9. Then, we will run it on the OpenShift cluster. We need to open the Dockerfile located at the src/docker/Dockerfile-ubi9 path in the repository. We don’t need to change much inside that file. It already installs several clients, e.g. for AWS or Azure interaction. We will add the following line, which installs the oc client that allows us to interact with OpenShift.

RUN curl -s https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz -o - | tar zxvf - -C /usr/bin/
Dockerfile

After that, we need to build a new image. You can also completely skip this step and pull the final image published on my Docker account under the piomin/blue-agent:ubi9 tag.

$ docker build -t piomin/blue-agent:ubi9 -f Dockerfile-ubi9 \
  --build-arg JQ_VERSION=1.6 \
  --build-arg AZURE_CLI_VERSION=2.63.0 \
  --build-arg AWS_CLI_VERSION=2.17.42 \
  --build-arg GCLOUD_CLI_VERSION=490.0.0 \
  --build-arg POWERSHELL_VERSION=7.2.23 \
  --build-arg TINI_VERSION=0.19.0 \
  --build-arg BUILDKIT_VERSION=0.15.2  \
  --build-arg AZP_AGENT_VERSION=3.243.1 \
  --build-arg ROOTLESSKIT_VERSION=2.3.1 \
  --build-arg GO_VERSION=1.22.7 \
  --build-arg YQ_VERSION=4.44.3 .
ShellSession
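
If you build the image yourself, remember that the cluster has to be able to pull it. For example, push it to a registry reachable from OpenShift (here my Docker Hub account; replace the tag with your own):

$ docker push piomin/blue-agent:ubi9
ShellSession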

We can use a Helm chart to install Blue Agent on Kubernetes. The project is still under active development, and I could not prepare a chart suitable for OpenShift with input parameters alone. So, I just set some of the parameters inside the values.yaml file:

pipelines:
  organizationURL: https://dev.azure.com/pminkows
  personalAccessToken: <AZURE_DEVOPS_PERSONAL_ACCESS_TOKEN>
  poolName: Default

image:
  repository: piomin/blue-agent
  version: 2
  flavor: ubi9
YAML

Deploy Agent on OpenShift

We can generate the YAML manifests for the defined parameters, without installing anything, with the following command:

$ helm template agent . -f blue-agent-values.yaml
ShellSession

In order to run it on OpenShift, we need to customize the Deployment object. First of all, I had to run the container as privileged (which requires the privileged SCC, granted to the ServiceAccount later) and remove some fields from the securityContext section. Here’s our Deployment object. It refers to the objects previously generated by the Helm chart: the agent-blue-agent ServiceAccount and the Secret with the same name.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: agent-blue-agent    
  labels:
    app.kubernetes.io/component: agent
    app.kubernetes.io/instance: agent
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: blue-agent
    app.kubernetes.io/part-of: blue-agent
    app.kubernetes.io/version: 3.243.1
    helm.sh/chart: blue-agent-7.0.3
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: agent
      app.kubernetes.io/name: blue-agent
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: agent
        app.kubernetes.io/name: blue-agent
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      serviceAccountName: agent-blue-agent
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 3600
      securityContext: {}
      containers:
        - resources:
            limits:
              cpu: '2'
              ephemeral-storage: 8Gi
              memory: 4Gi
            requests:
              cpu: '1'
              ephemeral-storage: 2Gi
              memory: 2Gi
          terminationMessagePath: /dev/termination-log
          lifecycle:
            preStop:
              exec:
                command:
                  - bash
                  - '-c'
                  - |
                    rm -rf ${AZP_WORK};
                    rm -rf ${TMPDIR};
          name: azp-agent
          env:
            - name: AGENT_DIAGLOGPATH
              value: /app-root/azp-logs
            - name: VSO_AGENT_IGNORE
              value: AZP_TOKEN
            - name: AGENT_ALLOW_RUNASROOT
              value: '1'
            - name: AZP_AGENT_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: agent-blue-agent
                  key: organizationURL
            - name: AZP_POOL
              value: Default
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: agent-blue-agent
                  key: personalAccessToken
            - name: flavor_ubi9
            - name: version_7.0.3
          securityContext:
            privileged: true
          imagePullPolicy: Always
          volumeMounts:
            - name: azp-logs
              mountPath: /app-root/azp-logs
            - name: azp-work
              mountPath: /app-root/azp-work
            - name: local-tmp
              mountPath: /app-root/.local/tmp
          terminationMessagePolicy: File
          image: 'piomin/blue-agent:ubi9'
      serviceAccount: agent-blue-agent
      volumes:
        - name: azp-logs
          emptyDir:
            sizeLimit: 1Gi
        - name: azp-work
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 10Gi
                storageClassName: managed-csi
                volumeMode: Filesystem
        - name: local-tmp
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi
                storageClassName: managed-csi
                volumeMode: Filesystem
      dnsPolicy: ClusterFirst
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 50%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
YAML

To actually install the solution on OpenShift, we replace helm template with helm install. It’s worth noting that there were some problems with the Deployment object generated by the Helm chart, so we will apply our customized version of it directly later.
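
A minimal sketch of the install command, assuming we run it from the chart directory of the cloned repository and use agent as the release name (which matches the agent-blue-agent resource names shown above):

$ helm install agent . -f blue-agent-values.yaml
ShellSession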

[Screenshot: azure-devops-openshift-blue-agent]

Before applying the Deployment, we have to add the privileged SCC to the ServiceAccount used by the agent. By the way, granting the privileged SCC is not the best approach to managing containers on OpenShift. However, the oc client installed in the image creates a directory during the login procedure, which fails under the default restricted SCC.

$ oc adm policy add-scc-to-user privileged -z agent-blue-agent
ShellSession
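
With the SCC in place, we can apply our customized Deployment (assuming the manifest shown earlier is saved e.g. as blue-agent-deployment.yaml):

$ oc apply -f blue-agent-deployment.yaml
ShellSession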

Once we deploy the agent on OpenShift, we should see three running pods. They try to connect to the Default pool defined in Azure DevOps for our self-hosted agents.

$ oc get po
NAME                                READY   STATUS    RESTARTS   AGE
agent-blue-agent-5458b9b76d-2gk2z   1/1     Running   0          13h
agent-blue-agent-5458b9b76d-nwcxh   1/1     Running   0          13h
agent-blue-agent-5458b9b76d-tcpnp   1/1     Running   0          13h
ShellSession

Connect Self-hosted Agent to Azure DevOps

Let’s switch to the Azure DevOps instance and go to our project. After that, we need to open Organization Settings -> Agent pools and create a new agent pool. The pool name must match the name configured for the Blue Agent deployed on OpenShift. In our case, this name is Default.
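
We can also verify the pools from the command line (a sketch, assuming the azure-devops extension for the az CLI is installed; replace the organization URL with your own):

$ az pipelines pool list --organization https://dev.azure.com/pminkows --output table
ShellSession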

Once we create a pool, we should see all three agent instances in the following list.

[Screenshot: azure-devops-openshift-agents]

We can switch to OpenShift once again. Then, let’s take a look at the logs printed out by one of the agents. As you can see, it has started successfully and listens for incoming jobs.
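
For example, using one of the pod names listed above:

$ oc logs agent-blue-agent-5458b9b76d-2gk2z
ShellSession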

If the agents are running on OpenShift and successfully connect to Azure DevOps, we have finished the first part of our exercise. Now, we can create a pipeline for our sample Spring Boot app.

Create Azure DevOps Pipeline for OpenShift

Azure Pipeline Definition

Firstly, we need to go to the sample-spring-boot-web repository. The pipeline is configured in the azure-pipelines.yml file in the repository root directory. Let’s take a look at it. It’s very simple. I’m just using the Azure DevOps Command-Line task to interact with OpenShift through the oc client. Of course, it has to define the target agent pool used for running its jobs (1). In the first step, we log in to the OpenShift cluster (2). We can use the internal address of the Kubernetes Service, since the agent is running inside the cluster. All the actions are performed inside our sample myapp project (3). We create the BuildConfig and Deployment objects using the template from the repository (4). The pipeline uses its BuildId number to tag the output image. Finally, it starts the build configured by the previously applied BuildConfig object (5).

trigger:
- master

# (1)
pool:
  name: Default

steps:
# (2)
- task: CmdLine@2
  inputs:
    script: 'oc login https://172.30.0.1:443 -u kubeadmin -p $(OCP_PASSWORD) --insecure-skip-tls-verify=true'
  displayName: Login to OpenShift
# (3)
- task: CmdLine@2
  inputs:
    script: 'oc project myapp'
  displayName: Switch to project
# (4)
- task: CmdLine@2
  inputs:
    script: 'oc process -f ocp/openshift.yaml -o yaml -p IMAGE_TAG=v1.0-$(Build.BuildId) -p NAMESPACE=myapp | oc apply -f -'
  displayName: Create build
# (5)
- task: CmdLine@2
  inputs:
    script: 'oc start-build sample-spring-boot-web-bc -w'
    failOnStderr: true
  displayName: Start build
  timeoutInMinutes: 5
- task: CmdLine@2
  inputs:
    script: 'oc status'
  displayName: Check status
YAML
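
The pipeline assumes that the myapp project referenced in step (3) already exists on the cluster. If it does not, we can create it beforehand:

$ oc new-project myapp
ShellSession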

Integrate Azure Pipelines with OpenShift

We use the OpenShift Template object to define the YAML manifest with the Deployment and BuildConfig. The BuildConfig object manages the image build process on OpenShift. In order to build the image directly from the source code, it uses the Source-to-Image (S2I) tool. We need to set at least three parameters to configure the process properly. The first of them is the address of the output image (1). We can use the internal OpenShift registry available at image-registry.openshift-image-registry.svc:5000 or any external registry like Quay or Docker Hub. We should also set the name of the builder image (2). Our app requires at least Java 21. Of course, the process requires the source code repository as the input (3). At the same time, we define the Deployment object. It uses the image previously built by the BuildConfig (4). The whole template takes two input parameters: IMAGE_TAG and NAMESPACE (5).

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: sample-spring-boot-web-tmpl
objects:
  - kind: BuildConfig
    apiVersion: build.openshift.io/v1
    metadata:
      name: sample-spring-boot-web-bc
      labels:
        build: sample-spring-boot-web-bc
    spec:
      # (1)
      output:
        to:
          kind: DockerImage
          name: 'image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/sample-spring-boot-web:${IMAGE_TAG}'
      # (2)
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            namespace: openshift
            name: 'openjdk-21:stable'
      # (3)
      source:
        type: Git
        git:
          uri: 'https://github.com/piomin/sample-spring-boot-web.git'
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: sample-spring-boot-web
      labels:
        app: sample-spring-boot-web
        app.kubernetes.io/component: sample-spring-boot-web
        app.kubernetes.io/instance: sample-spring-boot-web
    spec:
      replicas: 1
      selector:
        matchLabels:
          deployment: sample-spring-boot-web
      template:
        metadata:
          labels:
            deployment: sample-spring-boot-web
        spec:
          containers:
            - name: sample-spring-boot-web
              # (4)
              image: 'image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/sample-spring-boot-web:${IMAGE_TAG}'
              ports:
                - containerPort: 8080
                  protocol: TCP
                - containerPort: 8443
                  protocol: TCP
# (5)
parameters:
  - name: IMAGE_TAG
    displayName: Image tag
    description: The output image tag
    value: v1.0
    required: true
  - name: NAMESPACE
    displayName: Namespace
    description: The OpenShift Namespace where the ImageStream resides
    value: openshift
YAML

Currently, OpenShift doesn’t provide an OpenJDK 21 image by default. So we need to import it manually into our cluster before running the pipeline.

$ oc import-image openjdk-21:stable \
  --from=registry.access.redhat.com/ubi9/openjdk-21:1.20-2.1725851045 \
  --confirm
ShellSession
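
Since the BuildConfig refers to the ImageStreamTag in the openshift namespace, make sure the image stream lands there (e.g. by adding -n openshift to the command above). We can then verify the import:

$ oc get istag openjdk-21:stable -n openshift
ShellSession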

Now, we can create a pipeline in Azure DevOps. In order to do it, we need to go to the Pipelines section, and then click the New pipeline button. After that, we just need to pass the address of our repository with the pipeline definition.

Our pipeline requires the OCP_PASSWORD variable with the OpenShift admin user password. We can set it as a pipeline secret variable. In order to do that, we need to edit the pipeline and then click the Variables button.
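
Alternatively, here is a sketch of setting the same variable from the CLI (assuming the azure-devops extension for the az CLI and that the pipeline already exists; the pipeline name is my assumption):

$ az pipelines variable create --pipeline-name sample-spring-boot-web \
  --name OCP_PASSWORD --secret true --value <OPENSHIFT_ADMIN_PASSWORD>
ShellSession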

Run the Pipeline

Finally, we can run our pipeline. If everything finishes successfully, the status of the job is Success. It takes around one minute to execute all the steps defined in the pipeline.
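
We can also trigger a run from the CLI instead of the UI (again, the pipeline name is my assumption):

$ az pipelines run --name sample-spring-boot-web
ShellSession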

We can see detailed logs for each step.

[Screenshot: azure-devops-openshift-pipeline-logs]

Each time we run the pipeline, a new build starts on OpenShift. Note that the pipeline updates the BuildConfig object with the new version of the output image.

[Screenshot: azure-devops-openshift-builds]

We can take a look at the detailed logs of each build. Within such a build, OpenShift starts a new pod, which performs the whole process. It uses the S2I approach to build the image from the source code and push it to the internal OpenShift registry.
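
For example, to follow the logs of the latest build from the command line:

$ oc logs -f bc/sample-spring-boot-web-bc
ShellSession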

Finally, let’s take a look at the Deployment. As you can see, it has been reloaded with the latest version of the image and works fine.

$ oc describe deploy sample-spring-boot-web -n myapp
Name:                   sample-spring-boot-web
Namespace:              myapp
CreationTimestamp:      Wed, 11 Sep 2024 16:09:11 +0200
Labels:                 app=sample-spring-boot-web
                        app.kubernetes.io/component=sample-spring-boot-web
                        app.kubernetes.io/instance=sample-spring-boot-web
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               deployment=sample-spring-boot-web
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  deployment=sample-spring-boot-web
  Containers:
   sample-spring-boot-web:
    Image:        image-registry.openshift-image-registry.svc:5000/myapp/sample-spring-boot-web:v1.0-134
    Ports:        8080/TCP, 8443/TCP
    Host Ports:   0/TCP, 0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  sample-spring-boot-web-78559697c9 (0/0 replicas created), sample-spring-boot-web-7985d6b844 (0/0 replicas created)
NewReplicaSet:   sample-spring-boot-web-5584447f5d (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  19m   deployment-controller  Scaled up replica set sample-spring-boot-web-5584447f5d to 1
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled down replica set sample-spring-boot-web-7985d6b844 to 0 from 1
ShellSession

By the way, Azure Pipelines jobs are “load-balanced” between the agents, so consecutive pipeline runs may be handled by different agent instances.

There are three instances of the agent running on OpenShift, so Azure DevOps can process at most three pipeline runs in parallel. By the way, we could enable autoscaling for Blue Agent based on KEDA.

[Screenshot: azure-devops-openshift-jobs]
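
Blue Agent advertises KEDA-based auto-scaling out of the box, but just to illustrate the mechanism, here is a minimal sketch of a KEDA ScaledObject using the azure-pipelines scaler (assuming KEDA is installed on the cluster; it reuses the AZP_URL and AZP_TOKEN environment variables already defined in the agent container):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: agent-blue-agent
spec:
  scaleTargetRef:
    # scale the Deployment created for the agent
    name: agent-blue-agent
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: azure-pipelines
      metadata:
        # the agent pool watched for queued jobs
        poolName: Default
        # take the organization URL and PAT from the container environment
        organizationURLFromEnv: AZP_URL
        personalAccessTokenFromEnv: AZP_TOKEN
YAML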

Final Thoughts

In this article, I showed the simplest and most OpenShift-native approach to building CI/CD pipelines on Azure DevOps. We just need the oc client in the agent image and the OpenShift BuildConfig object to orchestrate the whole process of building and deploying the Spring Boot app on the cluster. Of course, we could implement the same process in several different ways. For example, we could leverage the plugins for Kubernetes and completely omit the tools provided by OpenShift. On the other hand, it is possible to use Argo CD for the delivery phase and Azure Pipelines only for the integration phase.
