GitOps on Kubernetes for Postgres and Vault with Argo CD
In this article, you will learn how to set up a GitOps process on Kubernetes for a Postgres database and HashiCorp Vault with Argo CD. I assume you already use Argo CD widely on your Kubernetes clusters for managing standard objects like Deployments, Services, or Secrets. However, the configuration around our apps usually involves several additional tools, such as databases, message brokers, or secrets engines. Today, we will consider how to implement the GitOps approach for such tools.
We will do the same thing as described in one of my previous articles, but this time fully in the GitOps way with Argo CD. The main goal here is to integrate Postgres with the Vault database secrets engine to generate database credentials dynamically and to initialize the DB schema for the sample Spring Boot app. In order to achieve these goals, we are going to install two Kubernetes operators: Atlas and Vault Config. Atlas is a tool for managing the database schema as code. Its Kubernetes operator allows us to define the schema and apply it to our database using CRD objects. The Vault Config Operator, provided by the Red Hat Community of Practice, does a very similar thing but for HashiCorp Vault.
Source Code
If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. I will explain the structure of our sample repository in detail later. So, after cloning the Git repository, you should just follow my instructions 🙂
How It Works
Before we start, let's describe our sample scenario. Thanks to the database secrets engine, Vault integrates with Postgres and generates database credentials dynamically, based on the configured roles. On the other hand, our sample Spring Boot app integrates with Vault and uses its database engine to authenticate against Postgres. All the aspects of that scenario are managed in the GitOps style. Argo CD installs Vault, Postgres, and the additional operators on Kubernetes via their Helm charts. Then, it applies all the required CRD objects to configure both Vault and Postgres. We keep the whole configuration in a single Git repository in the form of YAML manifests.
Argo CD prepares the configuration on Vault and creates a table on Postgres for the sample Spring Boot app. Our app integrates with Vault through the Spring Cloud Vault project. It also uses Spring Data JPA to interact with the database. Here’s the illustration of our scenario.
Install Argo CD on Kubernetes
Traditionally, we need to start our GitOps exercise by installing Argo CD on the Kubernetes cluster. Of course, we can do it using the Helm chart. In the first step, we need to add the following repository:
$ helm repo add argo https://argoproj.github.io/argo-helm
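It is also worth refreshing the local chart index so that the chart version used in the next step is visible:

$ helm repo update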
We will add one parameter to the argocd-cm ConfigMap to ignore the MutatingWebhookConfiguration kind. This step is not necessary, but it allows us to ignore a specific resource generated by one of the Helm charts used in the further steps. Thanks to that, we will have everything in Argo CD in the "green" color 🙂 Here's the Helm values.yaml file with the required configuration:
configs:
  cm:
    resource.exclusions: |
      - apiGroups:
          - admissionregistration.k8s.io
        kinds:
          - MutatingWebhookConfiguration
        clusters:
          - "*"
Now, we can install Argo CD in the argocd namespace using the configuration previously defined in the values.yaml file:
$ helm install argo-cd argo/argo-cd \
    --version 6.7.8 \
    -n argocd \
    --create-namespace \
    -f values.yaml
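Before going further, it's worth confirming that the Argo CD components started correctly. A quick sanity check:

$ kubectl get pods -n argocd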
That's not all. Since the Atlas Operator chart is available in an OCI-type Helm repository, we need to apply the following Secret in the argocd namespace. Argo CD doesn't handle OCI-type repositories by default, so we need to include the enableOCI parameter in the definition.
apiVersion: v1
kind: Secret
metadata:
  name: ghcr-io-helm-oci
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: ariga
  url: ghcr.io/ariga
  enableOCI: "true"
  type: helm
Let's take a look at the list of repositories in the Argo CD UI dashboard. You should see the "Successful" connection status.
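To reach the UI locally, you can port-forward the Argo CD server Service and fetch the auto-generated admin password. This is just a sketch — the Service name depends on the Helm release name, so with the argo-cd release used above it should be argo-cd-argocd-server:

$ kubectl port-forward svc/argo-cd-argocd-server -n argocd 8443:443
$ kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d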
Prepare Configuration Manifests for Argo CD
Config Repository Structure
Let me first explain the structure of our Git config repository. The additional configuration is stored in the apps directory. It includes the CRD objects required to initialize the database schema or the Vault engines. In the bootstrap directory, we keep the values.yaml file for each Helm chart managed by Argo CD. That's all we need. The bootstrap-via-appset/bootstrap.yaml file contains the definition of the Argo CD ApplicationSet we need to apply to the Kubernetes cluster. This ApplicationSet will generate all the Argo CD applications responsible for installing the charts and creating the CRD objects.
.
├── apps
│   ├── postgresql
│   │   ├── database.yaml
│   │   ├── policies.yaml
│   │   ├── roles.yaml
│   │   └── schema.yaml
│   └── vault
│       └── job.yaml
├── bootstrap
│   ├── values
│   │   ├── atlas
│   │   │   └── values.yaml
│   │   ├── cert-manager
│   │   │   └── values.yaml
│   │   ├── postgresql
│   │   │   └── values.yaml
│   │   ├── vault
│   │   │   └── values.yaml
│   │   └── vault-config-operator
│   │       └── values.yaml
└── bootstrap-via-appset
    └── bootstrap.yaml
Bootstrap with the Argo CD ApplicationSet
Let's take a look at the ApplicationSet. It's pretty interesting (I hope :)). I'm using some relatively new Argo CD features here, like multiple sources (Argo CD 2.6) or the application set template patch (Argo CD 2.10). We need to generate an Argo CD Application for each tool we want to install on Kubernetes (1). In the generators section, we define parameters for Vault, PostgreSQL, the Atlas Operator, the Vault Config Operator, and cert-manager (which is required by the Vault Config Operator). In the templatePatch section, we prepare a list of source repositories used by each Argo CD Application (2). There are always two sources: the Helm chart repository and our Git repository containing the dedicated values.yaml files. For the Vault and PostgreSQL charts, we include one more source with CRDs or additional Kubernetes objects. We will discuss it later.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-config
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - list:
        elements:
          - chart: vault
            name: vault
            repo: https://helm.releases.hashicorp.com
            revision: 0.27.0
            namespace: vault
            postInstall: true
          - chart: postgresql
            name: postgresql
            repo: https://charts.bitnami.com/bitnami
            revision: 12.12.10
            namespace: default
            postInstall: true
          - chart: cert-manager
            name: cert-manager
            repo: https://charts.jetstack.io
            revision: v1.14.4
            namespace: cert-manager
            postInstall: false
          - chart: vault-config-operator
            name: vault-config-operator
            repo: https://redhat-cop.github.io/vault-config-operator
            revision: v0.8.25
            namespace: vault-config-operator
            postInstall: false
          - chart: charts/atlas-operator
            name: atlas
            repo: ghcr.io/ariga
            revision: 0.4.2
            namespace: atlas
            postInstall: false
  template:
    metadata:
      name: '{{.name}}'
      annotations:
        argocd.argoproj.io/sync-wave: "1"
    spec:
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
      destination:
        namespace: '{{.namespace}}'
        server: https://kubernetes.default.svc
      project: default
  templatePatch: |
    spec:
      sources:
        - repoURL: '{{ .repo }}'
          chart: '{{ .chart }}'
          targetRevision: '{{ .revision }}'
          helm:
            valueFiles:
              - $values/bootstrap/values/{{ .name }}/values.yaml
        - repoURL: https://github.com/piomin/kubernetes-config-argocd.git
          targetRevision: HEAD
          ref: values
        {{- if .postInstall }}
        - repoURL: https://github.com/piomin/kubernetes-config-argocd.git
          targetRevision: HEAD
          path: apps/{{ .name }}
        {{- end }}
Once we apply the bootstrap-config ApplicationSet to the argocd namespace, all the magic just happens. You should see five applications in the Argo CD UI dashboard. All of them are automatically synchronized (Argo CD autoSync enabled) to the cluster. It does the whole job. Now, let's analyze step by step what we have to put in that configuration.
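Assuming you work from the root of the cloned configuration repository, applying the ApplicationSet and listing the generated applications may look like this:

$ kubectl apply -f bootstrap-via-appset/bootstrap.yaml
$ kubectl get applications -n argocd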
The Argo CD ApplicationSet generates five applications for installing all the required tools. Here's the Application generated for installing Vault with its Helm chart and applying the additional configuration stored in the apps/vault directory.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vault
  namespace: argocd
spec:
  destination:
    namespace: vault
    server: https://kubernetes.default.svc
  project: default
  sources:
    - chart: vault
      helm:
        valueFiles:
          - $values/bootstrap/values/vault/values.yaml
      repoURL: https://helm.releases.hashicorp.com
      targetRevision: 0.27.0
    - ref: values
      repoURL: https://github.com/piomin/kubernetes-config-argocd.git
      targetRevision: HEAD
    - path: apps/vault
      repoURL: https://github.com/piomin/kubernetes-config-argocd.git
      targetRevision: HEAD
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
Configure Vault on Kubernetes
Customize Helm Charts
Let's take a look at the Vault values.yaml file. We run Vault in the development mode (a single, in-memory node with no unseal required). We will also enable the UI dashboard.
server:
  dev:
    enabled: true
ui:
  enabled: true
bootstrap/values/vault/values.yaml
With the parameters visible above, Argo CD installs Vault in the vault namespace. Here's the list of running pods:
$ kubectl get po -n vault
NAME READY STATUS RESTARTS AGE
vault-0 1/1 Running 0 1h
vault-agent-injector-7f7f68d457-fvsd2 1/1 Running 0 1h
It also exposes the Vault API on port 8200 via the vault Kubernetes Service.
$ kubectl get svc -n vault
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vault ClusterIP 10.110.69.159 <none> 8200/TCP,8201/TCP 21h
vault-agent-injector-svc ClusterIP 10.111.24.183 <none> 443/TCP 21h
vault-internal ClusterIP None <none> 8200/TCP,8201/TCP 21h
vault-ui ClusterIP 10.110.160.239 <none> 8200/TCP 21h
For the Vault Config Operator, we need to override the default address of the Vault API to http://vault.vault:8200. In order to do that, we need to set the VAULT_ADDR env variable in the values.yaml file. We also disable Prometheus monitoring and enable the integration with cert-manager. Thanks to cert-manager, we don't need to generate any certificates or keys manually.
enableMonitoring: false
enableCertManager: true
env:
  - name: VAULT_ADDR
    value: http://vault.vault:8200
bootstrap/values/vault-config-operator/values.yaml
Enable Vault Config Operator
The Vault Config Operator needs to authenticate against the Vault API using Kubernetes authentication. So, first we need to configure a root Kubernetes authentication mount point and role. Then we can create more roles or other Vault objects via the operator. Here's the Kubernetes Job responsible for configuring that Kubernetes mount point and role. It uses the Vault image and the vault CLI available inside that image. As you see, it creates the vault-admin role bound to the default ServiceAccount in the default namespace.
apiVersion: batch/v1
kind: Job
metadata:
  name: vault-admin-initializer
  annotations:
    argocd.argoproj.io/sync-wave: "3"
spec:
  template:
    spec:
      containers:
        - name: vault-admin-initializer
          image: hashicorp/vault:1.15.2
          env:
            - name: VAULT_ADDR
              value: http://vault.vault.svc:8200
          command:
            - /bin/sh
            - -c
            - |
              export VAULT_TOKEN=root
              sleep 10
              vault auth enable kubernetes
              vault secrets enable database
              vault write auth/kubernetes/config kubernetes_host=https://kubernetes.default.svc:443 kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
              vault write auth/kubernetes/role/vault-admin bound_service_account_names=default bound_service_account_namespaces=default policies=vault-admin ttl=1h
              vault policy write vault-admin - <<EOF
              path "/*" {
                capabilities = ["create", "read", "update", "delete", "list", "sudo"]
              }
              EOF
      restartPolicy: Never
apps/vault/job.yaml
Argo CD applies such a Job after installing the Vault chart.
$ kubectl get job -n vault
NAME COMPLETIONS DURATION AGE
vault-admin-initializer 1/1 15s 1h
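We can also quickly verify that the Job enabled both the Kubernetes auth method and the database secrets engine. Here's a sketch that runs the vault CLI inside the vault-0 pod and reuses the dev-mode root token:

$ kubectl exec -n vault vault-0 -- sh -c 'VAULT_TOKEN=root vault auth list'
$ kubectl exec -n vault vault-0 -- sh -c 'VAULT_TOKEN=root vault secrets list'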
Configure Vault via CRDs
Once the root Kubernetes authentication is ready, we can proceed to the CRD object creation. In the first step, we create the objects responsible for configuring a connection to the Postgres database. In the DatabaseSecretEngineConfig, we set the connection URL, the root credentials, and the name of the Vault plugin used to interact with the database (postgresql-database-plugin). We also define a list of allowed roles (postgresql-default-role). In the next step, we define the postgresql-default-role DatabaseSecretEngineRole object. Of course, the name of that role has to be the same as the name passed in the allowedRoles list in the previous step. The role defines a target database connection name in Vault and the SQL statements for creating new users with the required privileges.
kind: DatabaseSecretEngineConfig
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: postgresql-database-config
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  allowedRoles:
    - postgresql-default-role
  authentication:
    path: kubernetes
    role: vault-admin
  connectionURL: 'postgresql://{{username}}:{{password}}@postgresql.default:5432?sslmode=disable'
  path: database
  pluginName: postgresql-database-plugin
  rootCredentials:
    passwordKey: postgres-password
    secret:
      name: postgresql
  username: postgres
---
apiVersion: redhatcop.redhat.io/v1alpha1
kind: DatabaseSecretEngineRole
metadata:
  name: postgresql-default-role
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  creationStatements:
    - CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}"; GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO "{{name}}";
  maxTTL: 10m0s
  defaultTTL: 1m0s
  authentication:
    path: kubernetes
    role: vault-admin
  dBName: postgresql-database-config
  path: database
apps/postgresql/database.yaml
Once Argo CD applies both the DatabaseSecretEngineConfig and DatabaseSecretEngineRole objects, we can verify that everything works fine by generating database credentials with the vault read command. We need to pass the name of the previously created role (postgresql-default-role). Our sample app will do the same thing, but through the Spring Cloud Vault module.
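For example, we can execute the command inside the vault-0 pod. It's just a sketch that reuses the dev-mode root token and the database mount path configured before:

$ kubectl exec -n vault vault-0 -- sh -c 'VAULT_TOKEN=root vault read database/creds/postgresql-default-role'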
Finally, we can create a policy and a role for our sample Spring Boot app. The policy requires only the privilege to generate (read) new database credentials:
kind: Policy
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: database-creds-view
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  authentication:
    path: kubernetes
    role: vault-admin
  policy: |
    path "database/creds/postgresql-default-role" {
      capabilities = ["read"]
    }
apps/postgresql/policies.yaml
Now, we have everything to proceed to the last step in this section. We need to create a Vault role with the Kubernetes authentication method dedicated to our sample app. In this role, we set the name and location of the Kubernetes ServiceAccount and the name of the Vault policy created in the previous step.
kind: KubernetesAuthEngineRole
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: database-engine-creds-role
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  authentication:
    path: kubernetes
    role: vault-admin
  path: kubernetes
  policies:
    - database-creds-view
  targetServiceAccounts:
    - default
  targetNamespaces:
    targetNamespaces:
      - default
apps/postgresql/roles.yaml
Managing Postgres Schema with Atlas Operator
Finally, we can proceed to the last step in the configuration part. We will use the AtlasSchema CRD object to configure the database schema for our sample app. The object contains two sections: credentials and schema. In the credentials section, we refer to the PostgreSQL Secret to obtain the password. In the schema section, we create the person table with the id primary key.
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: sample-spring-cloud-vault
  annotations:
    argocd.argoproj.io/sync-wave: "4"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  credentials:
    scheme: postgres
    host: postgresql.default
    user: postgres
    passwordFrom:
      secretKeyRef:
        key: postgres-password
        name: postgresql
    database: postgres
    port: 5432
    parameters:
      sslmode: disable
  schema:
    sql: |
      create table person (
        id serial primary key,
        name varchar(255),
        gender varchar(255),
        age int,
        external_id int
      );
apps/postgresql/schema.yaml
Here's the corresponding @Entity model class in the sample Spring Boot app.
@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String name;
    private int age;
    @Enumerated(EnumType.STRING)
    private Gender gender;
    private Integer externalId;

    // GETTERS AND SETTERS ...
}
Once Argo CD applies the AtlasSchema object, we can verify its status. It should indicate that the schema has been successfully applied to the target database.
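A simple way to check that status from the CLI (assuming the atlasschemas resource name registered by the Atlas Operator CRD):

$ kubectl get atlasschemas.db.atlasgo.io sample-spring-cloud-vault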
We can also log in to the database using the psql CLI and verify that the person table exists in the postgres database:
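Here's one possible way to do it with kubectl and the psql CLI bundled in the Bitnami image. The postgresql-0 pod name and the postgres-password Secret key are the Bitnami chart defaults:

$ export POSTGRES_PASSWORD=$(kubectl get secret postgresql -o jsonpath='{.data.postgres-password}' | base64 -d)
$ kubectl exec -it postgresql-0 -- env PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -d postgres -c '\d person'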
Run Sample Spring Boot App
Dependencies
For this demo, I created a simple Spring Boot application. It exposes a REST API and connects to the PostgreSQL database. It uses Spring Data JPA to interact with the database. Here are the most important dependencies of our app in the Maven pom.xml:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
The first of them enables bootstrap.yml processing on application startup. The third one includes the Spring Cloud Vault database secrets engine support.
Integrate with Vault using Spring Cloud Vault
The only thing we need to do is provide the right configuration settings. Here's the minimal set of required properties to make it work without any errors. The following configuration is provided in the bootstrap.yml file:
spring:
  application:
    name: sample-db-vault
  datasource:
    url: jdbc:postgresql://postgresql:5432/postgres #(1)
  jpa:
    hibernate:
      ddl-auto: update
  cloud:
    vault:
      config.lifecycle: #(2)
        enabled: true
        min-renewal: 10s
        expiry-threshold: 30s
      kv.enabled: false #(3)
      uri: http://vault.vault:8200 #(4)
      authentication: KUBERNETES #(5)
      postgresql: #(6)
        enabled: true
        role: postgresql-default-role
        backend: database
      kubernetes: #(7)
        role: database-engine-creds-role
Let's analyze the configuration visible above in detail:
(1) Firstly, we need to set the database connection URL without any credentials. Our application uses the standard properties for authentication against the database (spring.datasource.username and spring.datasource.password), and Spring Cloud Vault fills them with the generated credentials. Thanks to that, we don't need to do anything else
(2) As you probably remember, the maximum TTL for the database lease is 10 minutes. We enable a short lease renewal window, just for demo purposes. You will see that Spring Cloud Vault creates new credentials in PostgreSQL every 30 seconds or so, and the application still works without any errors
(3) Vault KV is not needed here, since I'm using only the database secrets engine
(4) The application is going to be deployed in the default namespace, while Vault is running in the vault namespace. So, the address of Vault has to include the namespace name
(5) (7) Our application uses the Kubernetes authentication method to access Vault. We just need to set the role name, which is database-engine-creds-role. All other settings can be left with their default values
(6) We also need to enable the PostgreSQL database backend support. The name of the backend in Vault is database, and the name of the Vault role used for that engine is postgresql-default-role.
Run the App on Kubernetes
Finally, we can run our sample app on Kubernetes by applying the following YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-deployment
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: piomin/sample-app:1.0-gitops
          ports:
            - containerPort: 8080
      serviceAccountName: default
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: ClusterIP
  selector:
    app: sample-app
  ports:
    - port: 8080
Our app exposes a REST API under the /persons path. We can easily test it with curl after enabling port forwarding, as shown below:
$ kubectl port-forward svc/sample-app 8080:8080
$ curl http://localhost:8080/persons
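We can also observe the dynamic credentials rotation described earlier. Listing the database roles should reveal the short-lived users generated by Vault (a sketch reusing the POSTGRES_PASSWORD variable exported before; the generated role names typically start with the v- prefix):

$ kubectl exec -it postgresql-0 -- env PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -c '\du'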
Final Thoughts
This article proves that we can effectively configure and manage tools like the Postgres database or HashiCorp Vault on Kubernetes with Argo CD. Thanks to the Atlas and Vault Config Kubernetes operators, the database schema and the Vault configuration can be stored in a Git repository in the form of YAML manifests. Argo CD applies all the required CRDs automatically, which results in a working integration between Vault, Postgres, and our sample Spring Boot app.