IDP on OpenShift with Red Hat Developer Hub
This article will teach you how to build an Internal Developer Platform (IDP) on an OpenShift cluster with Red Hat Developer Hub. Red Hat Developer Hub is a developer portal built on top of the Backstage project. It simplifies the installation and configuration of Backstage in a Kubernetes-native environment through an operator and dynamic plugins. You can compare that process with the open-source Backstage installation on Kubernetes described in my previous article. If you need a quick intro to the Backstage platform, you can also read my article Getting Started with Backstage.
A platform team manages an Internal Developer Platform (IDP) to build golden paths and enable developer self-service in the organization. It may consist of many different tools and solutions. An Internal Developer Portal, in turn, serves as the graphical interface through which developers discover and access the platform’s capabilities. In the context of OpenShift, Red Hat Developer Hub simplifies the adoption of several cluster services for developers (e.g. Kubernetes-native CI/CD tools). Today, you will learn how to integrate Developer Hub with OpenShift Pipelines (Tekton) and OpenShift GitOps (Argo CD). Let’s begin!
Source Code
If you would like to try it yourself, you can always take a look at my source code. Our sample GitHub repository contains software templates for the Backstage Scaffolder. In this article, we will analyze a template dedicated to OpenShift, available in the templates/spring-boot-basic-on-openshift directory. After cloning this repository, you should just follow my instructions.
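For example, you can clone the repository and switch to its directory with:
$ git clone https://github.com/piomin/backstage-templates.git
$ cd backstage-templates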
Here’s the structure of our repository. It is pretty similar to the template for Spring Boot on Kubernetes described in my previous article about Backstage. Besides the template, it also contains the Argo CD and Tekton templates with YAML deployment manifests to apply on OpenShift.
.
├── README.md
├── backstage-templates.iml
├── skeletons
│ └── argocd
│ ├── argocd
│ │ └── app.yaml
│ └── manifests
│ ├── deployment.yaml
│ ├── pipeline.yaml
│ ├── service.yaml
│ ├── tasks.yaml
│ └── trigger.yaml
├── templates
│ └── spring-boot-basic-on-openshift
│ ├── skeleton
│ │ ├── README.md
│ │ ├── catalog-info.yaml
│ │ ├── devfile.yaml
│ │ ├── k8s
│ │ │ └── deployment.yaml
│ │ ├── pom.xml
│ │ ├── renovate.json
│ │ ├── skaffold.yaml
│ │ └── src
│ │ ├── main
│ │ │ ├── java
│ │ │ │ └── ${{values.javaPackage}}
│ │ │ │ ├── Application.java
│ │ │ │ ├── controller
│ │ │ │ │ └── ${{values.domainName}}Controller.java
│ │ │ │ └── domain
│ │ │ │ └── ${{values.domainName}}.java
│ │ │ └── resources
│ │ │ └── application.yml
│ │ └── test
│ │ ├── java
│ │ │ └── ${{values.javaPackage}}
│ │ │ └── ${{values.domainName}}ControllerTests.java
│ │ └── resources
│ │ └── k6
│ │ └── load-tests-add.js
│ └── template.yaml
└── templates.yaml
Prerequisites
Before we start the exercise, we need to prepare our OpenShift cluster. We have to install three operators: OpenShift GitOps (Argo CD), OpenShift Pipelines (Tekton), and of course, Red Hat Developer Hub.
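All three operators can be installed from the OperatorHub UI. If you prefer a declarative approach, a Subscription manifest per operator is enough. Here is a minimal sketch for the Red Hat Developer Hub operator; the package name, channel, and namespace are assumptions that may differ between catalog versions:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhdh
  namespace: openshift-operators
spec:
  # assumed package name and channel from the redhat-operators catalog
  name: rhdh
  channel: fast
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic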
Once we install OpenShift GitOps, it automatically creates an instance of Argo CD in the openshift-gitops namespace. That instance is managed by the openshift-gitops ArgoCD CRD object. We need to override some default configuration settings there. First, we add a new backstage user with privileges for creating applications and projects and for generating API keys. Then, we change the default TLS termination method for the Argo CD Route to reencrypt, which is required to integrate with the Backstage Argo CD plugin. We also add the demo namespace as an additional namespace in which Argo CD applications can be placed.
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
name: openshift-gitops
namespace: openshift-gitops
spec:
sourceNamespaces:
- demo
server:
...
route:
enabled: true
tls:
termination: reencrypt
...
rbac:
defaultPolicy: ''
policy: |
g, system:cluster-admins, role:admin
g, cluster-admins, role:admin
p, backstage, applications, *, */*, allow
p, backstage, projects, *, *, allow
scopes: '[groups]'
extraConfig:
accounts.backstage: 'apiKey, login'
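You can apply these changes directly to the existing object, for example by editing it in place:
$ oc edit argocd openshift-gitops -n openshift-gitops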
In order to generate the apiKey for the backstage user, we need to sign in to Argo CD with the argocd CLI as the admin user. Then, we should run the following command for the backstage account and export the generated token as the ARGOCD_TOKEN environment variable:
$ export ARGOCD_TOKEN=$(argocd account generate-token --account backstage)
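Note that the command above requires an authenticated session. If you have not signed in yet, the admin login against the openshift-gitops-server Route might look like this (the password is a placeholder for your cluster’s value):
$ argocd login openshift-gitops-server-openshift-gitops.apps.piomin.eastus.aroapp.io \
    --username admin --password <ADMIN_PASSWORD> --grpc-web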
Finally, let’s obtain the long-lived API token for Kubernetes by creating a secret:
apiVersion: v1
kind: Secret
metadata:
name: default-token
namespace: backstage
annotations:
kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
Then, we can copy and export it as the OPENSHIFT_TOKEN environment variable with the following command:
$ export OPENSHIFT_TOKEN=$(kubectl get secret default-token -n backstage -o go-template='{{.data.token | base64decode}}')
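Note that the Secret above lives in the backstage namespace, so the project must already exist. If it does not, create it first:
$ oc new-project backstage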
Let’s bind the view ClusterRole to Backstage’s default ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: default-view-backstage
subjects:
- kind: ServiceAccount
name: default
namespace: backstage
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
Configure Red Hat Developer Hub on OpenShift
After installing the Red Hat Developer Hub operator on OpenShift, we can use the Backstage CRD to create and configure a new instance of the portal. Firstly, we will override some default settings using the app-config-rhdh ConfigMap (1). Then, we will provide additional secrets, such as tokens for third-party tools, in the app-secrets-rhdh Secret (2). Finally, we will install and configure several useful plugins with the dynamic-plugins-rhdh ConfigMap (3). Here is the required configuration in the Backstage CRD.
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
name: developer-hub
namespace: backstage
spec:
application:
appConfig:
configMaps:
# (1)
- name: app-config-rhdh
mountPath: /opt/app-root/src
# (3)
dynamicPluginsConfigMapName: dynamic-plugins-rhdh
extraEnvs:
secrets:
# (2)
- name: app-secrets-rhdh
extraFiles:
mountPath: /opt/app-root/src
replicas: 1
route:
enabled: true
database:
enableLocalDb: true
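Once the operator reconciles the Backstage object, you can verify the deployment and read the portal address from the generated Route; the Route name should follow the backstage-<CR name> pattern:
$ oc get pods -n backstage
$ oc get route backstage-developer-hub -n backstage -o jsonpath='{.spec.host}'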
Override Default Configuration Settings
The instance of Backstage will be deployed in the backstage namespace. Since OpenShift exposes it as a Route, the address of the portal on my cluster is https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io (1). Firstly, we need to override that address in the app settings. Next, we enable authentication through GitHub OAuth with the GitHub Red Hat Developer Hub application (2). Then, we set the proxy endpoint used to integrate with Sonarqube through the HTTP Request Action plugin (3). Our instance of Backstage should also read templates from the particular URL location (4) and be able to create repositories in GitHub (5).
kind: ConfigMap
apiVersion: v1
metadata:
name: app-config-rhdh
namespace: backstage
data:
app-config-rhdh.yaml: |
# (1)
app:
baseUrl: https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io
backend:
baseUrl: https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io
# (2)
auth:
environment: development
providers:
github:
development:
clientId: ${GITHUB_CLIENT_ID}
clientSecret: ${GITHUB_CLIENT_SECRET}
# (3)
proxy:
endpoints:
/sonarqube:
target: ${SONARQUBE_URL}/api
allowedMethods: ['GET', 'POST']
auth: "${SONARQUBE_TOKEN}:"
# (4)
catalog:
rules:
- allow: [Component, System, API, Resource, Location, Template]
locations:
- type: url
target: https://github.com/piomin/backstage-templates/blob/master/templates.yaml
# (5)
integrations:
github:
- host: github.com
token: ${GITHUB_TOKEN}
sonarqube:
baseUrl: https://sonarcloud.io
apiKey: ${SONARQUBE_TOKEN}
Integrate with GitHub
In order to use GitHub auth, we need to register a new app there. Go to “Settings > Developer Settings > New GitHub App” in your GitHub account. Then, put the address of your Developer Hub instance in the “Homepage URL” field and the callback address in the “Callback URL” field (the base URL + /api/auth/github/handler/frame).
Then, let’s edit our GitHub app to generate a new secret as shown below.
The client ID and secret should be saved as environment variables for future use. Note that we also need to generate a new personal access token (“Settings > Developer Settings > Personal Access Tokens”).
export GITHUB_CLIENT_ID=<YOUR_GITHUB_CLIENT_ID>
export GITHUB_CLIENT_SECRET=<YOUR_GITHUB_CLIENT_SECRET>
export GITHUB_TOKEN=<YOUR_GITHUB_TOKEN>
We already have a full set of required tokens and access keys, so we can create the app-secrets-rhdh Secret to store them on our OpenShift cluster.
$ oc create secret generic app-secrets-rhdh -n backstage \
--from-literal=GITHUB_CLIENT_ID=${GITHUB_CLIENT_ID} \
--from-literal=GITHUB_CLIENT_SECRET=${GITHUB_CLIENT_SECRET} \
--from-literal=GITHUB_TOKEN=${GITHUB_TOKEN} \
--from-literal=SONARQUBE_TOKEN=${SONARQUBE_TOKEN} \
--from-literal=SONARQUBE_URL=https://sonarcloud.io \
--from-literal=ARGOCD_TOKEN=${ARGOCD_TOKEN}
Install and Configure Plugins
Finally, we can proceed to the plugin installation. Do you remember how we did it with open-source Backstage on Kubernetes? I described it in my previous article. Red Hat Developer Hub drastically simplifies that process with the idea of dynamic plugins. This approach is based on the Janus IDP project. Developer Hub on OpenShift comes with ~60 preinstalled plugins that allow us to integrate various third-party tools, including Sonarqube, Argo CD, Tekton, Kubernetes, and GitHub. Some of them are enabled by default, while others are installed but disabled. We can easily verify this in the “Administration” section after signing in to the Backstage UI:
Let’s take a look at the ConfigMap which contains the list of plugins to activate. It is pretty huge, since we also provide the configuration for the frontend plugins. Some plugins are optional. For the goal of our exercise, we need to activate at least the following plugins:
janus-idp-backstage-plugin-argocd – to view the status of Argo CD synchronization in the UI
janus-idp-backstage-plugin-tekton – to view the status of Tekton pipelines in the UI
backstage-plugin-kubernetes-backend-dynamic – to integrate with the Kubernetes cluster
backstage-plugin-kubernetes – to view the Kubernetes app pods in the UI
backstage-plugin-sonarqube – to view the status of the Sonarqube scan in the UI
roadiehq-backstage-plugin-argo-cd-backend-dynamic – to create the Argo CD Application from the template
kind: ConfigMap
apiVersion: v1
metadata:
name: dynamic-plugins-rhdh
namespace: backstage
data:
dynamic-plugins.yaml: |
includes:
- dynamic-plugins.default.yaml
plugins:
- package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-pull-requests
disabled: true
pluginConfig:
dynamicPlugins:
frontend:
roadiehq.backstage-plugin-github-pull-requests:
mountPoints:
- mountPoint: entity.page.overview/cards
importName: EntityGithubPullRequestsOverviewCard
config:
layout:
gridColumnEnd:
lg: "span 4"
md: "span 6"
xs: "span 12"
if:
allOf:
- isGithubPullRequestsAvailable
- mountPoint: entity.page.pull-requests/cards
importName: EntityGithubPullRequestsContent
config:
layout:
gridColumn: "1 / -1"
if:
allOf:
- isGithubPullRequestsAvailable
- package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic'
disabled: false
pluginConfig: {}
- package: './dynamic-plugins/dist/janus-idp-backstage-plugin-argocd'
disabled: false
pluginConfig:
dynamicPlugins:
frontend:
janus-idp.backstage-plugin-argocd:
mountPoints:
- mountPoint: entity.page.overview/cards
importName: ArgocdDeploymentSummary
config:
layout:
gridColumnEnd:
lg: "span 8"
xs: "span 12"
if:
allOf:
- isArgocdConfigured
- mountPoint: entity.page.cd/cards
importName: ArgocdDeploymentLifecycle
config:
layout:
gridColumn: '1 / -1'
if:
allOf:
- isArgocdConfigured
- package: './dynamic-plugins/dist/janus-idp-backstage-plugin-tekton'
disabled: false
pluginConfig:
dynamicPlugins:
frontend:
janus-idp.backstage-plugin-tekton:
mountPoints:
- mountPoint: entity.page.ci/cards
importName: TektonCI
config:
layout:
gridColumn: "1 / -1"
if:
allOf:
- isTektonCIAvailable
- package: './dynamic-plugins/dist/janus-idp-backstage-plugin-topology'
disabled: false
pluginConfig:
dynamicPlugins:
frontend:
janus-idp.backstage-plugin-topology:
mountPoints:
- mountPoint: entity.page.topology/cards
importName: TopologyPage
config:
layout:
gridColumn: "1 / -1"
height: 75vh
if:
anyOf:
- hasAnnotation: backstage.io/kubernetes-id
- hasAnnotation: backstage.io/kubernetes-namespace
- package: './dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-sonarqube-dynamic'
disabled: false
pluginConfig: {}
- package: './dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic'
disabled: false
pluginConfig:
kubernetes:
customResources:
- group: 'tekton.dev'
apiVersion: 'v1beta1'
plural: 'pipelines'
- group: 'tekton.dev'
apiVersion: 'v1beta1'
plural: 'pipelineruns'
- group: 'tekton.dev'
apiVersion: 'v1beta1'
plural: 'taskruns'
- group: 'route.openshift.io'
apiVersion: 'v1'
plural: 'routes'
serviceLocatorMethod:
type: 'multiTenant'
clusterLocatorMethods:
- type: 'config'
clusters:
- name: ocp
url: https://api.piomin.eastus.aroapp.io:6443
authProvider: 'serviceAccount'
skipTLSVerify: true
skipMetricsLookup: true
serviceAccountToken: ${OPENSHIFT_TOKEN}
- package: './dynamic-plugins/dist/backstage-plugin-kubernetes'
disabled: false
pluginConfig:
dynamicPlugins:
frontend:
backstage.plugin-kubernetes:
mountPoints:
- mountPoint: entity.page.kubernetes/cards
importName: EntityKubernetesContent
config:
layout:
gridColumn: "1 / -1"
if:
anyOf:
- hasAnnotation: backstage.io/kubernetes-id
- hasAnnotation: backstage.io/kubernetes-namespace
- package: './dynamic-plugins/dist/backstage-plugin-sonarqube'
disabled: false
pluginConfig:
dynamicPlugins:
frontend:
backstage.plugin-sonarqube:
mountPoints:
- mountPoint: entity.page.overview/cards
importName: EntitySonarQubeCard
config:
layout:
gridColumnEnd:
lg: "span 4"
md: "span 6"
xs: "span 12"
if:
allOf:
- isSonarQubeAvailable
- package: './dynamic-plugins/dist/backstage-plugin-sonarqube-backend-dynamic'
disabled: false
pluginConfig: {}
- package: './dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd'
disabled: false
pluginConfig:
dynamicPlugins:
frontend:
roadiehq.backstage-plugin-argo-cd:
mountPoints:
- mountPoint: entity.page.overview/cards
importName: EntityArgoCDOverviewCard
config:
layout:
gridColumnEnd:
lg: "span 8"
xs: "span 12"
if:
allOf:
- isArgocdAvailable
- mountPoint: entity.page.cd/cards
importName: EntityArgoCDHistoryCard
config:
layout:
gridColumn: "1 / -1"
if:
allOf:
- isArgocdAvailable
- package: './dynamic-plugins/dist/roadiehq-scaffolder-backend-argocd-dynamic'
disabled: false
pluginConfig: {}
- package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic
disabled: false
pluginConfig:
argocd:
appLocatorMethods:
- type: 'config'
instances:
- name: main
url: "https://openshift-gitops-server-openshift-gitops.apps.piomin.eastus.aroapp.io"
token: "${ARGOCD_TOKEN}"
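If one of the plugins does not show up in the UI after applying this ConfigMap, the logs of the dynamic plugin installer init container are the first place to look. Assuming the default Deployment and container names created by the operator, something like this should work:
$ oc logs deployment/backstage-developer-hub -c install-dynamic-plugins -n backstage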
Once we provide the whole configuration described above, we are ready to proceed with our Scaffolder template for the sample Spring Boot app.
Prepare Backstage Template for OpenShift
Our template consists of several steps. Firstly, we generate the app source code and push it to the app repository. Then, we register the component in the Backstage catalog and create a configuration repository for Argo CD. It contains the app deployment manifests and the definitions of the Tekton pipeline and trigger. The trigger is exposed as a Route and can be called from the GitHub repository through a webhook. Finally, we create the project in Sonarcloud, create the application in Argo CD, and register the webhook in the GitHub app repository. Here’s our Scaffolder template.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
name: spring-boot-basic-on-openshift-template
title: Create a Spring Boot app for OpenShift
description: Create a Spring Boot app for OpenShift
tags:
- spring-boot
- java
- maven
- tekton
- renovate
- sonarqube
- openshift
- argocd
spec:
owner: piomin
system: microservices
type: service
parameters:
- title: Provide information about the new component
required:
- orgName
- appName
- domainName
- repoBranchName
- groupId
- javaPackage
- apiPath
- namespace
- description
- registryUrl
- clusterDomain
properties:
orgName:
title: Organization name
type: string
default: piomin
appName:
title: App name
type: string
default: sample-spring-boot-app-openshift
domainName:
title: Name of the domain object
type: string
default: Person
repoBranchName:
title: Name of the branch in the Git repository
type: string
default: master
groupId:
title: Maven Group ID
type: string
default: pl.piomin.services
javaPackage:
title: Java package directory
type: string
default: pl/piomin/services
apiPath:
title: REST API path
type: string
default: /api/v1
namespace:
title: The target namespace on Kubernetes
type: string
default: demo
description:
title: Description
type: string
default: Spring Boot App Generated by Backstage
registryUrl:
title: Registry URL
type: string
default: image-registry.openshift-image-registry.svc:5000
clusterDomain:
title: OpenShift Cluster Domain
type: string
default: .apps.piomin.eastus.aroapp.io
steps:
- id: sourceCodeTemplate
name: Generating the Source Code Component
action: fetch:template
input:
url: ./skeleton
values:
orgName: ${{ parameters.orgName }}
appName: ${{ parameters.appName }}
domainName: ${{ parameters.domainName }}
groupId: ${{ parameters.groupId }}
javaPackage: ${{ parameters.javaPackage }}
apiPath: ${{ parameters.apiPath }}
namespace: ${{ parameters.namespace }}
- id: publish
name: Publishing to the Source Code Repository
action: publish:github
input:
allowedHosts: ['github.com']
description: ${{ parameters.description }}
repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}
defaultBranch: ${{ parameters.repoBranchName }}
repoVisibility: public
- id: register
name: Registering the Catalog Info Component
action: catalog:register
input:
repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
catalogInfoPath: /catalog-info.yaml
- id: configCodeTemplate
name: Generating the Config Code Component
action: fetch:template
input:
url: ../../skeletons/argocd
values:
orgName: ${{ parameters.orgName }}
appName: ${{ parameters.appName }}
registryUrl: ${{ parameters.registryUrl }}
namespace: ${{ parameters.namespace }}
repoBranchName: ${{ parameters.repoBranchName }}
targetPath: ./gitops
- id: publish
name: Publishing to the Config Code Repository
action: publish:github
input:
allowedHosts: ['github.com']
description: ${{ parameters.description }}
repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}-config
defaultBranch: ${{ parameters.repoBranchName }}
sourcePath: ./gitops
repoVisibility: public
- id: sonarqube
name: Create a new project on Sonarcloud
action: http:backstage:request
input:
method: 'POST'
path: '/proxy/sonarqube/projects/create?name=${{ parameters.appName }}&organization=${{ parameters.orgName }}&project=${{ parameters.orgName }}_${{ parameters.appName }}'
headers:
content-type: 'application/json'
- id: create-argocd-resources
name: Create ArgoCD Resources
action: argocd:create-resources
input:
appName: ${{ parameters.appName }}
argoInstance: main
namespace: ${{ parameters.namespace }}
repoUrl: https://github.com/${{ parameters.orgName }}/${{ parameters.appName }}-config.git
path: 'manifests'
- id: create-webhook
name: Create GitHub Webhook
action: github:webhook
input:
repoUrl: github.com?repo=${{ parameters.appName }}&owner=${{ parameters.orgName }}
webhookUrl: https://el-${{ parameters.appName }}-${{ parameters.namespace }}.${{ parameters.clusterDomain }}
output:
links:
- title: Open the Source Code Repository
url: ${{ steps.publish.output.remoteUrl }}
- title: Open the Catalog Info Component
icon: catalog
entityRef: ${{ steps.register.output.entityRef }}
- title: SonarQube project URL
url: ${{ steps['create-sonar-project'].output.projectUrl }}
Define Templates for OpenShift Pipelines
Compared to the article about Backstage on Kubernetes, we use Tekton instead of CircleCI as the build tool. Let’s take a look at the definition of our pipeline. It consists of five tasks. In the final one, we use the OpenShift S2I mechanism to build the app image and push it to the internal container registry.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: ${{ values.appName }}
labels:
backstage.io/kubernetes-id: ${{ values.appName }}
spec:
params:
- description: branch
name: git-revision
type: string
default: master
tasks:
- name: git-clone
params:
- name: url
value: 'https://github.com/${{ values.orgName }}/${{ values.appName }}.git'
- name: revision
value: $(params.git-revision)
- name: sslVerify
value: 'false'
taskRef:
kind: ClusterTask
name: git-clone
workspaces:
- name: output
workspace: source-dir
- name: maven
params:
- name: GOALS
value:
- test
- name: PROXY_PROTOCOL
value: http
- name: CONTEXT_DIR
value: .
runAfter:
- git-clone
taskRef:
kind: ClusterTask
name: maven
workspaces:
- name: source
workspace: source-dir
- name: maven-settings
workspace: maven-settings
- name: sonarqube
params:
- name: SONAR_HOST_URL
value: 'https://sonarcloud.io'
- name: SONAR_PROJECT_KEY
value: ${{ values.appName }}
runAfter:
- maven
taskRef:
kind: Task
name: sonarqube-scanner
workspaces:
- name: source
workspace: source-dir
- name: sonar-settings
workspace: sonar-settings
- name: get-version
params:
- name: CONTEXT_DIR
value: .
runAfter:
- sonarqube
taskRef:
kind: Task
name: maven-get-project-version
workspaces:
- name: source
workspace: source-dir
- name: s2i-java
params:
- name: PATH_CONTEXT
value: .
- name: TLSVERIFY
value: 'false'
- name: MAVEN_CLEAR_REPO
value: 'false'
- name: IMAGE
value: >-
${{ values.registryUrl }}/${{ values.namespace }}/${{ values.appName }}:$(tasks.get-version.results.version)
runAfter:
- get-version
taskRef:
kind: ClusterTask
name: s2i-java
workspaces:
- name: source
workspace: source-dir
workspaces:
- name: source-dir
- name: maven-settings
- name: sonar-settings
In order to run the pipeline after creating it, we need to apply the PipelineRun object.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: ${{ values.appName }}-init
spec:
params:
- name: git-revision
value: master
pipelineRef:
name: ${{ values.appName }}
serviceAccountName: pipeline
workspaces:
- name: source-dir
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
- name: sonar-settings
secret:
secretName: sonarqube-secret-token
- configMap:
name: maven-settings
name: maven-settings
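Note that the PipelineRun mounts a Secret named sonarqube-secret-token and a ConfigMap named maven-settings, so both must exist in the target namespace before the first run. The exact contents depend on how the maven and sonarqube-scanner tasks consume those workspaces, so treat the commands below purely as a placeholder sketch with assumed file names:
$ oc create configmap maven-settings -n demo --from-file=settings.xml
$ oc create secret generic sonarqube-secret-token -n demo --from-file=sonar-project.properties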
In order to call the pipeline via a webhook from the app source repository, we also need to create the Tekton TriggerTemplate object, together with the EventListener and TriggerBinding that expose it (a sketch of those follows the template below). Once we push a change to the target repository, it triggers a run of the Tekton pipeline on the OpenShift cluster.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
name: ${{ values.appName }}
spec:
params:
- default: ${{ values.repoBranchName }}
description: The git revision
name: git-revision
- description: The git repository url
name: git-repo-url
resourcetemplates:
- apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: ${{ values.appName }}-run-
spec:
params:
- name: git-revision
value: $(tt.params.git-revision)
pipelineRef:
name: ${{ values.appName }}
serviceAccountName: pipeline
workspaces:
- name: source-dir
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
- name: sonar-settings
secret:
secretName: sonarqube-secret-token
- configMap:
name: maven-settings
name: maven-settings
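The TriggerTemplate alone does not receive webhooks. The webhook URL used later in the template (el-<appName>…) points at the Route of a Tekton EventListener, which connects a TriggerBinding with the TriggerTemplate. A minimal sketch of those two objects, assuming a GitHub push payload (the actual trigger.yaml in the repository may differ), could look like this:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: ${{ values.appName }}
spec:
  params:
    # fields taken from the GitHub push event payload
    - name: git-revision
      value: $(body.head_commit.id)
    - name: git-repo-url
      value: $(body.repository.url)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: ${{ values.appName }}
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - ref: ${{ values.appName }}
      template:
        ref: ${{ values.appName }}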
Deploy the app on OpenShift
Here’s the template for the app Deployment
object:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${{ values.appName }}
labels:
app: ${{ values.appName }}
app.kubernetes.io/name: spring-boot
backstage.io/kubernetes-id: ${{ values.appName }}
spec:
selector:
matchLabels:
app: ${{ values.appName }}
template:
metadata:
labels:
app: ${{ values.appName }}
backstage.io/kubernetes-id: ${{ values.appName }}
spec:
containers:
- name: ${{ values.appName }}
image: ${{ values.registryUrl }}/${{ values.namespace }}/${{ values.appName }}:1.0
ports:
- containerPort: 8080
name: http
livenessProbe:
httpGet:
port: 8080
path: /actuator/health/liveness
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
port: 8080
path: /actuator/health/readiness
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
resources:
limits:
memory: 1024Mi
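According to the repository structure, the manifests directory also contains a service.yaml. A minimal Service matching the Deployment’s labels and container port could look like this (a sketch only; the actual file in the repository may differ):
apiVersion: v1
kind: Service
metadata:
  name: ${{ values.appName }}
  labels:
    app: ${{ values.appName }}
    backstage.io/kubernetes-id: ${{ values.appName }}
spec:
  selector:
    app: ${{ values.appName }}
  ports:
    - name: http
      port: 8080
      targetPort: 8080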
Here’s the current version of the catalog-info.yaml file.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ${{ values.appName }}
title: ${{ values.appName }}
annotations:
janus-idp.io/tekton: ${{ values.appName }}
tektonci/build-namespace: ${{ values.namespace }}
github.com/project-slug: ${{ values.orgName }}/${{ values.appName }}
sonarqube.org/project-key: ${{ values.orgName }}_${{ values.appName }}
backstage.io/kubernetes-id: ${{ values.appName }}
argocd/app-name: ${{ values.appName }}
tags:
- spring-boot
- java
- maven
- tekton
- argocd
- renovate
- sonarqube
spec:
type: service
owner: piomin
lifecycle: experimental
Now, let’s create a new component in Red Hat Developer Hub using our template. In the first step, you should choose the “Create a Spring Boot app for OpenShift” template as shown below.
Then, provide all the parameters in the form. You will probably have to override the default organization name with your GitHub account name, as well as the address of your OpenShift cluster. Once you make all the required changes, click the “Review” button, and then the “Create” button on the next screen. After that, Red Hat Developer Hub creates everything we need.
After confirmation, Developer Hub redirects us to a page with progress information. There are 8 action steps defined, and all of them should finish successfully. Then, we can just click the “Open the Catalog Info Component” link.
Viewing Component in Red Hat Developer Hub UI
Our app overview tab contains general information about the component registered in Backstage, the status of the Sonarqube scan, and the status of the Argo CD synchronization process. We can switch between several other available tabs.
In the “CI” tab, we can see the history of the OpenShift Pipelines runs. We can switch to the logs of each pipeline step by clicking on it.
If you are familiar with OpenShift, you will recognize this view as the topology view from the OpenShift Console developer perspective. It visualizes all the deployments in the particular namespace.
In the “CD” tab, we can see the history of Argo CD synchronization operations.
Final Thoughts
Red Hat Developer Hub simplifies the installation and configuration of Backstage in a Kubernetes-native environment. It introduces the idea of dynamic plugins, which can be easily customized in configuration files. You can compare this approach with my previous article about Backstage on Kubernetes.