Manage Secrets on Kubernetes with ArgoCD and Vault

In this article, you will learn how to integrate ArgoCD with HashiCorp Vault to manage secrets on Kubernetes. In order to use ArgoCD and Vault together during the GitOps process, we will use the ArgoCD Vault Plugin. It replaces the placeholders inside YAML or JSON manifests with the values taken from Vault. What is important in our case is that it also supports Helm templates.
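For example, instead of a hard-coded value, a manifest may contain a placeholder pointing to a Vault secret, which the plugin resolves when the manifests are generated. Here's a minimal sketch; the exact placeholder syntax and the secret referenced below are described later in this article:
env:
  - name: PASS
    value: <path:kv-v2/data/argocd#password>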
You can use Vault in several different ways on Kubernetes. For example, you may integrate it directly with your Spring Boot app using the Spring Cloud Vault project. To read more about it, please refer to that post on my blog.
Prerequisites
In this exercise, we are going to use Helm a lot. Our sample app Deployment is based on a Helm template. We also use Helm to install Vault and ArgoCD on Kubernetes. Finally, we need to customize the ArgoCD Helm chart parameters to enable and configure the ArgoCD Vault Plugin. So, before proceeding, please ensure you have basic knowledge about Helm. Of course, you should also install it on your laptop. For me, it is possible with Homebrew:
$ brew install helm
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions 🙂
Run and Configure Vault on Kubernetes
In the first step, we are going to install Vault on Kubernetes. We can easily do it using its official Helm chart. In order to simplify our exercise, we run it in development mode with a single server instance. Normally, you would configure it in HA mode. Let’s add the HashiCorp Helm repository:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
In order to enable development mode, the server.dev.enabled parameter should have the value true. We don’t need to override any other default values:
$ helm install vault hashicorp/vault \
--set "server.dev.enabled=true"
To check if Vault is successfully installed on the Kubernetes cluster we can display a list of running pods:
$ kubectl get pod
NAME                                   READY   STATUS    RESTARTS   AGE
vault-0                                1/1     Running   0          25s
vault-agent-injector-9456c6d55-hx2fd   1/1     Running   0          21s
We can configure Vault in several different ways. One of the options is through the UI. To log in there, we may use the root token generated only in development mode. Vault also exposes the HTTP API. The last available option is the CLI. The CLI is available inside the Vault pod, but we can also install it locally. For me, it is possible using the brew install vault command. Then we need to enable port-forwarding and export the Vault local address as the VAULT_ADDR environment variable:
$ kubectl port-forward vault-0 8200
$ export VAULT_ADDR=http://127.0.0.1:8200
$ vault status
Then just log in to the Vault server using the root token.

Enable Kubernetes Authentication on Vault
There are several authentication methods on Vault. However, since we run it on Kubernetes, we should use the method dedicated to that platform. What is important, this method is also supported by the ArgoCD Vault Plugin. Firstly, let’s enable the Kubernetes auth method:
$ vault auth enable kubernetes
Then, we need to configure our authentication method. There are three required parameters: the URL of the Kubernetes API server, the Kubernetes CA certificate, and a token reviewer JWT.
$ vault write auth/kubernetes/config \
token_reviewer_jwt="<your reviewer service account JWT>" \
kubernetes_host=<your Kubernetes API address> \
kubernetes_ca_cert=@ca.crt
In order to easily obtain all those parameters, you can run the following three commands. Then you can set them, e.g., using the Vault UI.
$ echo "https://$( kubectl exec vault-0 -- env | grep KUBERNETES_PORT_443_TCP_ADDR| cut -f2 -d'='):443"
$ kubectl exec vault-0 \
-- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ echo $(kubectl exec vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
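To avoid copying the values by hand, you can also glue those commands together and pass the results straight to the configuration endpoint. Here's a sketch, assuming the port-forwarding and VAULT_ADDR setup shown earlier; the variable names are arbitrary:
$ K8S_HOST="https://$(kubectl exec vault-0 -- env | grep KUBERNETES_PORT_443_TCP_ADDR | cut -f2 -d'='):443"
$ kubectl exec vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt > ca.crt
$ SA_TOKEN=$(kubectl exec vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_TOKEN" \
  kubernetes_host="$K8S_HOST" \
  kubernetes_ca_cert=@ca.crt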
Once we pass all the required parameters, we may proceed to creating a named role. The ArgoCD Vault Plugin will use that role to authenticate against the Vault server. We need to provide the namespace where ArgoCD runs and the name of the Kubernetes service account used by the ArgoCD Repo Server. Our token expires after 24 hours.
$ vault write auth/kubernetes/role/argocd \
bound_service_account_names=argocd-repo-server \
bound_service_account_namespaces=argocd \
policies=argocd \
ttl=24h
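Optionally, you can read the role back to verify the bindings:
$ vault read auth/kubernetes/role/argocd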
That’s all for now. We will also need to create a test secret on Vault and configure a policy for the argocd role. Before that, let’s take a look at our sample Spring Boot app and its Helm template.
Helm Template for Spring Boot App
Our app is very simple. It just exposes a single HTTP endpoint that returns the value of the environment variable inside a container. Here’s the REST controller class written in Kotlin.
@RestController
@RequestMapping("/persons")
class PersonController {

    @Value("\${PASS:none}")
    lateinit var pass: String

    @GetMapping("/pass")
    fun printPass() = pass
}
We will use a generic Helm chart to deploy our app on Kubernetes. Our Deployment template contains a list of environment variables defined for the container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.app.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            {{- range .Values.app.ports }}
            - containerPort: {{ .value }}
              name: {{ .name }}
            {{- end }}
          {{- if .Values.app.envs }}
          env:
            {{- range .Values.app.envs }}
            - name: {{ .name }}
              value: {{ .value }}
            {{- end }}
          {{- end }}
There is also another file in the templates directory. It contains a definition of the Kubernetes Service.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.app.name }}
spec:
  type: ClusterIP
  selector:
    app: {{ .Values.app.name }}
  ports:
    {{- range .Values.app.ports }}
    - port: {{ .value }}
      name: {{ .name }}
    {{- end }}
Finally, let’s take a look at the Chart.yaml file.
apiVersion: v2
name: sample-with-envs
description: A Helm chart for Kubernetes
type: application
version: 1.0
appVersion: "1.0"
Our goal is to use this Helm chart to deploy the sample Spring Boot app on Kubernetes with ArgoCD and Vault. Of course, before we do it we need to install ArgoCD.
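For reference, a values.yaml matching the templates above could look as follows. This is just a sketch: the image coordinates and application settings are the same ones we will later pass as inline values in the ArgoCD Application, and the envs section is optional:
image:
  registry: quay.io
  repository: pminkows/sample-kotlin-spring
  tag: "1.4.30"
app:
  name: sample-spring-boot-kotlin
  replicas: 1
  ports:
    - name: http
      value: 8080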
Install ArgoCD with Vault Plugin on Kubernetes
Normally, it would be a very simple installation. But this time we need to customize the ArgoCD template to install it with the Vault plugin. Or more precisely, we have to customize the configuration of the ArgoCD Repository Server. It is one of the ArgoCD internal services. It maintains the local cache of the Git repository and generates Kubernetes manifests.

There are several options for installing the Vault plugin on ArgoCD. The full list of options is available here. Starting with version 2.4.0 of ArgoCD, it is possible to install it via a sidecar container. We will choose the option based on a sidecar and an initContainer. You may read more about it here. However, our case will be a little different since we use Helm instead of Kustomize for installing ArgoCD. To clarify, we need to do three things to install the Vault plugin on ArgoCD. Let’s analyze those steps:
- define an initContainer on the ArgoCD Repository Server Deployment to download the argocd-vault-plugin binaries and mount them on the volume
- define the ConfigMap containing the ConfigManagementPlugin CRD overriding the default behavior of Helm on ArgoCD
- customize the argocd-repo-server Deployment to mount the volume with argocd-vault-plugin and the ConfigMap created in the previous step
After those steps, we will have to integrate the plugin with the running instance of the Vault server. We will use the previously created Vault argocd role.
Firstly, let’s create the ConfigMap to customize the default behavior of Helm on ArgoCD. After running the helm template command, we will also run the argocd-vault-plugin generate command to replace all inline placeholders with the secrets defined in Vault. The address and auth configuration of Vault are defined in the vault-configuration secret.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp-plugin
data:
  plugin.yaml: |
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: argocd-vault-plugin-helm
    spec:
      allowConcurrency: true
      discover:
        find:
          command:
            - sh
            - "-c"
            - "find . -name 'Chart.yaml' && find . -name 'values.yaml'"
      generate:
        command:
          - bash
          - "-c"
          - |
            helm template $ARGOCD_APP_NAME -n $ARGOCD_APP_NAMESPACE -f <(echo "$ARGOCD_ENV_HELM_VALUES") . |
            argocd-vault-plugin generate -s vault-configuration -
      lockRepo: false
Here’s the vault-configuration Secret:
apiVersion: v1
kind: Secret
metadata:
  name: vault-configuration
  namespace: argocd
data:
  AVP_AUTH_TYPE: azhz
  AVP_K8S_ROLE: YXJnb2Nk
  AVP_TYPE: dmF1bHQ=
  VAULT_ADDR: aHR0cDovL3ZhdWx0LmRlZmF1bHQ6ODIwMA==
type: Opaque
To see the values, let’s display the Secret in Lens. Vault is running in the default namespace, so its address is http://vault.default:8200. The name of our role in Vault is argocd. We also need to set the auth type as k8s.
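If you prefer not to Base64-encode the values by hand, an equivalent way to create this Secret is with kubectl, which encodes the literals for you. This assumes the argocd namespace already exists (we create it a bit later in this article):
$ kubectl create secret generic vault-configuration -n argocd \
  --from-literal=AVP_TYPE=vault \
  --from-literal=AVP_AUTH_TYPE=k8s \
  --from-literal=AVP_K8S_ROLE=argocd \
  --from-literal=VAULT_ADDR=http://vault.default:8200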

Finally, we need to customize the ArgoCD Helm installation. To achieve that, let’s define the Helm values.yaml file. It contains the definition of the initContainer and the sidecar for argocd-repo-server. We also mount the cmp-plugin ConfigMap to the Deployment, and add additional privileges to the argocd-repo-server ServiceAccount to allow reading Secrets.
repoServer:
  rbac:
    - verbs:
        - get
        - list
        - watch
      apiGroups:
        - ''
      resources:
        - secrets
        - configmaps
  initContainers:
    - name: download-tools
      image: registry.access.redhat.com/ubi8
      env:
        - name: AVP_VERSION
          value: 1.11.0
      command: [sh, -c]
      args:
        - >-
          curl -L https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v$(AVP_VERSION)/argocd-vault-plugin_$(AVP_VERSION)_linux_amd64 -o argocd-vault-plugin &&
          chmod +x argocd-vault-plugin &&
          mv argocd-vault-plugin /custom-tools/
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
  extraContainers:
    - name: avp-helm
      command: [/var/run/argocd/argocd-cmp-server]
      image: quay.io/argoproj/argocd:v2.4.8
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
      volumeMounts:
        - mountPath: /var/run/argocd
          name: var-files
        - mountPath: /home/argocd/cmp-server/plugins
          name: plugins
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /home/argocd/cmp-server/config
          name: cmp-plugin
        - name: custom-tools
          subPath: argocd-vault-plugin
          mountPath: /usr/local/bin/argocd-vault-plugin
  volumes:
    - configMap:
        name: cmp-plugin
      name: cmp-plugin
    - name: custom-tools
      emptyDir: {}
In order to install ArgoCD on Kubernetes add the following Helm repository:
$ helm repo add argo https://argoproj.github.io/argo-helm
Let’s install it in the argocd namespace using the customized parameters in the values.yaml file:
$ kubectl create ns argocd
$ helm install argocd argo/argo-cd -n argocd -f values.yaml
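Before moving on, you can verify that all ArgoCD pods are running. With the default chart settings, the initial admin password for the UI is generated and stored in the argocd-initial-admin-secret Secret:
$ kubectl get pod -n argocd
$ kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d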
Sync Vault Secrets with ArgoCD
Once we have deployed Vault and ArgoCD on Kubernetes, we may proceed to the next step. Now, we are going to create a secret on Vault. Firstly, let’s enable the KV engine:
$ vault secrets enable kv-v2
Then, we can create a sample secret with the argocd name and a single password key:
$ vault kv put kv-v2/argocd password="123456"
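You can read the secret back to make sure it was stored correctly:
$ vault kv get kv-v2/argocd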
The ArgoCD Vault Plugin uses the argocd policy to read secrets. So, in the next step, we need to create the following policy to enable reading the previously created secret:
$ vault policy write argocd - <<EOF
path "kv-v2/data/argocd" {
capabilities = ["read"]
}
EOF
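Optionally, display the policy to confirm it has been created:
$ vault policy read argocd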
Then, we may define the ArgoCD Application for deploying our Spring Boot app on Kubernetes. The Helm template for the Kubernetes manifests is available in the GitHub repository under the simple-with-envs directory (1). As a tool for creating manifests we choose plugin (2). However, we won’t set its name since we use a sidecar container with argocd-vault-plugin. The ArgoCD Vault Plugin allows passing inline values in the application manifest. It reads the content defined inside the HELM_VALUES environment variable (3) (depending on the environment variable name set inside the cmp-plugin ConfigMap). And finally, the most important thing: the ArgoCD Vault Plugin looks for placeholders inside the <> brackets. For inline values, a placeholder should have the following structure: <path:path_to_the_vault_secret#name_of_the_key> (4). In our case, we define the environment variable PASS that uses the argocd secret and the password key stored inside the KV engine.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: simple-helm
  namespace: argocd
spec:
  destination:
    name: ''
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: simple-with-envs # (1)
    repoURL: https://github.com/piomin/sample-generic-helm-charts.git
    targetRevision: HEAD
    plugin: # (2)
      env:
        - name: HELM_VALUES # (3)
          value: |
            image:
              registry: quay.io
              repository: pminkows/sample-kotlin-spring
              tag: "1.4.30"
            app:
              name: sample-spring-boot-kotlin
              replicas: 1
              ports:
                - name: http
                  value: 8080
              envs:
                - name: PASS
                  value: <path:kv-v2/data/argocd#password> # (4)
Finally, we can create the ArgoCD Application, as shown below. It should initially have the OutOfSync status.
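One way to create it is to save the manifest above to a file and apply it with kubectl; the file name here is just an example:
$ kubectl apply -f simple-helm-app.yaml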

Let’s synchronize its state with the Git repository. We can do it using e.g. the ArgoCD UI or the ArgoCD CLI, as shown below. If everything works fine, you should see the green tile with your application name.
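Alternatively, assuming you have the argocd CLI installed and are logged in to your ArgoCD server, you can trigger the synchronization from the command line:
$ argocd app sync simple-helm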

Then, let’s just verify the structure of our app Deployment. You should see the value 123456 instead of the placeholder defined inside the ArgoCD Application.
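If you don’t use Lens, a quick way to check it is to print the container environment of the generated Deployment. This assumes the app was deployed to the default namespace, as set in the Application destination:
$ kubectl get deployment sample-spring-boot-kotlin \
  -o jsonpath='{.spec.template.spec.containers[0].env}'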

It is just a formality, but in the end, you can test the GET /persons/pass endpoint exposed by our Spring Boot app. It prints the value of the PASS environment variable. To do that, you should also enable port-forwarding for the app.
$ kubectl port-forward svc/simple-helm 8080:8080
$ curl http://localhost:8080/persons/pass
Final Thoughts
The GitOps approach has become very popular in Kubernetes-based environments. As always, one of the greatest challenges with that approach is security. HashiCorp Vault is one of the best tools for managing and protecting sensitive data. It can be easily installed on Kubernetes and included in your GitOps process. In this article, I showed how to use it together with other very popular solutions for deploying apps: ArgoCD and Helm.