Monitor Kubernetes Cost Across Teams with Kubecost

In this article, you will learn how to monitor the real-time cost of a Kubernetes cluster shared by several teams with Kubecost. We won’t focus on the cloud aspects of this tool, like cost optimization. Our main goal is to calculate how the cost is shared between several different teams using the same Kubernetes cluster. This can be a very important aspect of working with Kubernetes – even in an on-premise environment. If you want to run several apps in your organization within a project, you probably need to guarantee an internal budget for it. Therefore, tools that translate resource usage into cost may be very useful in such a scenario. Kubecost seems to be the most popular tool in this area. Let’s try it 🙂

If you are interested in more advanced exercises related to Kubernetes and Prometheus metrics, you can read my article about autoscaling with HPA here.

Prerequisites

In order to perform the exercise, you need a Kubernetes cluster. It can be a managed cluster on a cloud provider. However, you can also run Kubernetes locally as I do, for example with Minikube. The only important thing is to reserve enough resources, since we will run several apps. My suggestion is to allocate 16GB of memory and 4 CPUs:

$ minikube start --memory='16g' --cpus='4'
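
Before going further, it is worth making sure that the cluster is up and the node is ready:

$ minikube status
$ kubectl get nodes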

Once the cluster is started, we can proceed to the next section.

Install Kubecost on Kubernetes

We will use the Helm chart to install Kubecost on Minikube. Here’s the installation command:

$ helm install kubecost cost-analyzer \
    --repo https://kubecost.github.io/cost-analyzer/ \
    --namespace kubecost --create-namespace \
    --set kubecostProductConfigs.productKey.key="123"

It will install our tool in the kubecost namespace. Kubecost provides a UI dashboard for visualizing the basic aspects of cost monitoring.

The nice thing here is that it also installs the whole stack required for monitoring, including Prometheus and Grafana.

$ kubectl get po -n kubecost
NAME                                          READY   STATUS    RESTARTS   AGE
kubecost-cost-analyzer-66b8cf968c-4j7bb       2/2     Running   0          4m
kubecost-grafana-5fcd9f86c6-44njc             2/2     Running   0          4m
kubecost-kube-state-metrics-c566bb85f-qp5tw   1/1     Running   0          4m
kubecost-prometheus-node-exporter-k2g5g       1/1     Running   0          4m
kubecost-prometheus-server-b8bd4479d-ldvx2    2/2     Running   0          4m

Finally, let’s just enable port forwarding as suggested in the message printed after the Helm chart installation:

$ kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090

After that, we can access the UI at http://localhost:9090:

kubernetes-cost-ui
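
By the way, the same port also serves the Kubecost allocation API. As a quick sketch (assuming the port-forward above is still active; the parameter syntax may differ slightly between Kubecost versions), we can fetch the cost per namespace for the last day with curl:

$ curl 'http://localhost:9090/model/allocation?window=1d&aggregate=namespace'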

Then, click the Settings tab and scroll down to the Labels section. Here we will find a list of labels to use in our apps. As you can see, we can assign each app to a different team, department, or environment. Of course, we can also override the name of each label. However, I don’t think it is necessary for the purpose of our exercise.

kubernetes-cost-labels
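
As a side note, if you prefer to configure those label mappings declaratively rather than through the UI, the Helm chart exposes them under kubecostProductConfigs.labelMappingConfigs. Here’s a minimal sketch, assuming the key names used by the cost-analyzer chart (verify them against the values.yaml of your chart version):

$ helm upgrade kubecost cost-analyzer \
    --repo https://kubecost.github.io/cost-analyzer/ \
    --namespace kubecost --reuse-values \
    --set kubecostProductConfigs.labelMappingConfigs.enabled=true \
    --set kubecostProductConfigs.labelMappingConfigs.team_label=team \
    --set kubecostProductConfigs.labelMappingConfigs.department_label=department \
    --set kubecostProductConfigs.labelMappingConfigs.environment_label=env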

Deploy Test Apps on Kubernetes

Let’s deploy some apps on Kubernetes in different namespaces. First of all, I created five namespaces on my cluster (from demo-1 to demo-5):

$ kubectl create ns demo-1
$ kubectl create ns demo-2
$ kubectl create ns demo-3
$ kubectl create ns demo-4
$ kubectl create ns demo-5
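
Or, the same thing as a single shell loop:

$ for i in 1 2 3 4 5; do kubectl create ns demo-$i; done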

There are two sample apps available. Both of them are written with the Spring Boot framework. The first one connects to a Mongo database, while the second one just uses an in-memory data store. Let’s begin with the Mongo Deployment. We will run it in three namespaces: demo-1, demo-2, and demo-3. There are three labels included in the Deployment object: team, env, and department, all of which are recognized by Kubecost. Here’s an illustration of the labeling strategy per namespace.

kubernetes-cost-labeling-strategy

Assuming I’m deploying Mongo in the demo-1 namespace, here’s the YAML manifest. Please pay attention to the values of the labels (1) (2) (3). We will change those values according to the illustration above, depending on the namespace.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: dGVzdDEyMw==
  database-user: dGVzdA==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
    team: team-a # (1)
    env: dev # (2)
    department: dep-2 # (3)
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
        team: team-a # (1)
        env: dev # (2)
        department: dep-2 # (3)
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password
          resources:
            requests:
              memory: 256Mi
              cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
    - port: 27017
      protocol: TCP
  selector:
    app: mongodb
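
A quick side note: the values in the Secret above are just the Base64-encoded strings test and test123. You can reproduce them yourself:

$ echo -n 'test' | base64
dGVzdA==
$ echo -n 'test123' | base64
dGVzdDEyMw==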

In the next step, we will deploy our Spring Boot app, which connects to the Mongo database. Since it connects to Mongo, it has to run in the same namespaces. We should also use the same labeling strategy as in the previous Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample-app
    team: team-a
    env: dev
    department: dep-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
        team: team-a
        env: dev
        department: dep-2
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes:latest
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
          - name: MONGO_URL
            value: mongodb
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        resources:
          requests:
            memory: 512Mi
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: ClusterIP
  selector:
    app: sample-app
  ports:
  - port: 8080

Now, let’s install both Mongo and our app on Kubernetes using the following commands (repeat the same steps for the demo-2 and demo-3 namespaces after changing the labels and the number of replicas):

$ kubectl apply -f mongodb.yaml -n demo-1
$ kubectl apply -f spring-app.yaml -n demo-1
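
Before moving on, it is worth double-checking that the Kubecost labels were set as intended in each namespace:

$ kubectl get deployments -n demo-1 --show-labels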

We will also deploy the Spring Boot app with an in-memory store in the next two namespaces: demo-4 and demo-5. Of course, please remember the labeling strategy described above. Here’s the YAML manifest for the demo-4 namespace. To differentiate them, we can set a higher number of replicas in the demo-5 namespace, e.g. 5.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-im
  labels:
    app: sample-app-im
    team: team-c # (1)
    env: dev # (2)
    department: dep-3 # (3)
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app-im
  template:
    metadata:
      labels:
        app: sample-app-im
        team: team-c # (1)
        env: dev # (2)
        department: dep-3 # (3)
    spec:
      containers:
      - name: sample-spring-kotlin-microservice
        image: piomin/sample-spring-kotlin-microservice:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: 384Mi
            cpu: 150m
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app-im
spec:
  type: ClusterIP
  selector:
    app: sample-app-im
  ports:
  - port: 8080

Let’s apply the objects from the YAML manifest above (repeat the same action for demo-5 after changing the labels and replicas):

$ kubectl apply -f spring-app-im.yaml -n demo-4

At this point, all our deployments, with their labels, are spread across the five demo namespaces.
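
We can list them all, together with their Kubecost labels, using the --label-columns (-L) flag:

$ kubectl get deployments -A -L team,env,department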

Once we have deployed all the required apps, we can switch to the Kubecost dashboard.

Using Kubecost Dashboard for Kubernetes Cost Monitoring

In the meantime, I changed the currency to the Polish “Złoty” 🙂 We can create diagrams using various criteria. Let’s focus on the criteria we set in the custom Kubecost labels in all the deployments. In order to create diagrams, go to the Monitor menu item.

We can choose between several available aggregation labels. It is also possible to choose a custom label that is not predefined by Kubecost. We can use a single aggregation or combine several labels together (multi-aggregation).

kubernetes-cost-ui-aggregation
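
The same aggregations are also available through the allocation API mentioned earlier. For example, a multi-aggregation by the team and department labels could look like this (a sketch, assuming the label: prefix syntax of recent Kubecost versions):

$ curl 'http://localhost:9090/model/allocation?window=7d&aggregate=label:team,label:department'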

Here’s a diagram that illustrates aggregation per namespace. We can also filter the data to show only what matters. In this case, I defined the demo* filter to show only our namespaces with the sample apps.

kubernetes-cost-diagram-namespace

Here’s a similar diagram, but aggregated by the team label. As you can see, we have all three teams we defined in the labels of our sample apps.

After that, we can change the diagram type to, e.g., Proportional cost:

Let’s prepare the same diagram type, but using two search criteria. We combine the team label with the department label in the proportional costs view:

kubernetes-cost-diagram-multi

And here’s the last diagram in the article. It displays costs per environment (dev, test, and prod). Instead of a cumulative cost, we can show, e.g., an hourly rate.

Final Thoughts

I had a lot of fun visualizing the cost of my Kubernetes cluster with Kubecost. Of course, Kubecost has some additional features, but I focused mainly on cost visualization across teams and their apps. I had no major problems with running and understanding the core features of the tool.
