Multi-node Kubernetes Cluster with Minikube
In this article, you will learn how to run and manage a multi-node Kubernetes cluster locally with Minikube, using the Docker driver. After that, we will enable some useful add-ons, install Kubernetes-native tools for monitoring and observability, and run a sample app that requires storage. You can compare this article with a similar post about the Azure Kubernetes Service.
Prerequisites
Before you begin, you need to install Docker on your local machine. Then you need to download and install Minikube. On macOS, you can do it with Homebrew as shown below:
$ brew install minikube
Once we have successfully installed Minikube, we can use its CLI. Let's verify the version used in this article:
$ minikube version
minikube version: v1.33.1
commit: 5883c09216182566a63dff4c326a6fc9ed2982ff
Source Code
If you would like to try it yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. This time, we won't work much with the source code itself. However, the repository contains the sample Spring Boot app that uses storage exposed on the Kubernetes cluster. Once you clone the repository, go to the volumes/files-app directory. Then you should follow my instructions.
Create a Multi-node Kubernetes Cluster with Minikube
In order to create a multi-node Kubernetes cluster with Minikube, we need to use the --nodes (or -n) parameter in the minikube start command. Additionally, we can increase the default amount of memory and the number of CPUs reserved for the cluster with the --memory and --cpus parameters. Here's the command to execute:
$ minikube start --memory='12g' --cpus='4' -n 3
By the way, if you increase the resources assigned to the Minikube instance, you should also make sure that Docker itself has enough resources reserved.
Once we run the minikube start command, the cluster creation begins. If everything goes fine, you should see a similar output.
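Before switching to kubectl, a quick check with the Minikube CLI confirms that all the nodes are up; the command prints the status of every node in the profile:
$ minikube status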
Now, we can use Minikube with the kubectl tool:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:52879
CoreDNS is running at https://127.0.0.1:52879/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
We can display a list of running nodes:
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
minikube       Ready    control-plane   4h10m   v1.30.0
minikube-m02   Ready    <none>          4h9m    v1.30.0
minikube-m03   Ready    <none>          4h9m    v1.30.0
Sample Spring Boot App
Our Spring Boot app is simple. It exposes some REST endpoints for file-based operations on the target directory attached as a mounted volume. In order to expose REST endpoints, we need to include the Spring Boot Web starter. We will build the image using the Jib Maven plugin.
<properties>
    <spring-boot.version>3.3.1</spring-boot.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>${spring-boot.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <version>3.4.3</version>
        </plugin>
    </plugins>
</build>
Let's take a look at the main @RestController in our app. It exposes an endpoint for listing all the files inside the target directory (GET /files/all), another one for creating a new file (POST /files/{name}), and one more for appending a new line of text to an existing file (POST /files/{name}/line).
package pl.piomin.services.files.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

import static java.nio.file.Files.list;
import static java.nio.file.Files.writeString;

@RestController
@RequestMapping("/files")
public class FilesController {

    private static final Logger LOG = LoggerFactory.getLogger(FilesController.class);

    @Value("${MOUNT_PATH:/mount/data}")
    String root;

    @GetMapping("/all")
    public List<String> files() throws IOException {
        return list(Path.of(root)).map(Path::toString).toList();
    }

    @PostMapping("/{name}")
    public String createFile(@PathVariable("name") String name) throws IOException {
        return Files.createFile(Path.of(root + "/" + name)).toString();
    }

    @PostMapping("/{name}/line")
    public void addLine(@PathVariable("name") String name,
                        @RequestBody String line) {
        try {
            writeString(Path.of(root + "/" + name), line, StandardOpenOption.APPEND);
        } catch (IOException e) {
            LOG.error("Error while writing to file", e);
        }
    }
}
Usually, I deploy apps on Kubernetes with Skaffold. But this time, there are some issues with the integration between the multi-node Minikube cluster and Skaffold. You can find a detailed description of those issues here. Therefore, we build the image directly with the Jib Maven plugin and then run the app with the kubectl CLI.
Install Addons and Tools
Minikube comes with a set of predefined add-ons for Kubernetes. We can enable each of them with a single minikube addons enable <ADDON_NAME> command. Although several add-ons are available, we still need to install some useful Kubernetes-native tools like Prometheus, for example using a Helm chart. In order to list all available add-ons, we should execute the following command:
$ minikube addons list
Install Addon for Storage
The default storage provisioner in Minikube doesn't support multi-node clusters. It also doesn't implement the CSI interface and cannot handle volume snapshots. Fortunately, Minikube offers the csi-hostpath-driver addon, which deploys the CSI Hostpath Driver. Since this addon is disabled by default, we need to enable it.
$ minikube addons enable csi-hostpath-driver
Then, we can set the csi-hostpath-sc class created by the driver as the default storage class for dynamic volume claims.
$ kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
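To confirm the change, we can list the storage classes; the (default) marker should now appear next to csi-hostpath-sc:
$ kubectl get storageclass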
Install Monitoring Stack with Helm
The monitoring stack is not available as an add-on. However, we can easily install it using a Helm chart. We will use the community kube-prometheus-stack chart for that. Firstly, let's add the required repository.
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Then, we can install the Prometheus monitoring stack in the monitoring namespace by executing the following command:
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
-n monitoring --create-namespace
Once you install Prometheus on your Minikube cluster, you can take advantage of the many default metrics it exposes. For example, the Lens IDE automatically integrates with Prometheus and displays graphs with a cluster overview.
We can also see a visualization of resource usage for all running pods, deployments, or stateful sets.
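The chart also installs Grafana by default, so we can browse the collected metrics in its dashboards. A minimal sketch of accessing it locally, assuming the kube-prometheus-stack release name produces a Grafana service and secret named kube-prometheus-stack-grafana (verify with kubectl get svc,secret -n monitoring):
$ kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80 -n monitoring
$ kubectl get secret kube-prometheus-stack-grafana -n monitoring \
    -o jsonpath="{.data.admin-password}" | base64 -d
Grafana should then be available at localhost:3000 with the admin user and the decoded password.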
Install Postgres with Helm
We will also install the Postgres database to test the multi-node cluster. Once again, there is a Helm chart that simplifies Postgres installation on Kubernetes. It is published in the Bitnami repository. Firstly, let's add the required repository:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
Then, we can install Postgres in the db namespace. We increase the default number of instances to 3.
$ helm install postgresql bitnami/postgresql \
--set readReplicas.replicaCount=3 \
-n db --create-namespace
The chart creates the StatefulSet object with 3 replicas.
$ kubectl get statefulset -n db
NAME         READY   AGE
postgresql   3/3     55m
We can display a list of running pods. As you can see, Kubernetes scheduled two pods on the minikube-m02 node and a single pod on the minikube node.
$ kubectl get po -n db -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE
postgresql-0   1/1     Running   0          56m   10.244.1.9    minikube-m02
postgresql-1   1/1     Running   0          23m   10.244.1.10   minikube-m02
postgresql-2   1/1     Running   0          23m   10.244.0.4    minikube
Under the hood, there are 3 persistent volumes created. They use the default csi-hostpath-sc storage class and the RWO access mode.
$ kubectl get pvc -n db
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data-postgresql-0   Bound    pvc-e9b55ce8-978a-44ae-8fab-d5d6f911f1f9   8Gi        RWO            csi-hostpath-sc   <unset>                 65m
data-postgresql-1   Bound    pvc-d93af9ad-a034-4fbb-8377-f39005cddc99   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
data-postgresql-2   Bound    pvc-b683f1dc-4cd9-466c-9c99-eb0d356229c3   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
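To quickly confirm that the database is reachable, we can retrieve the password generated by the chart and open a psql session from a temporary client pod. A minimal sketch, assuming the Bitnami chart's default secret name (postgresql) and key (postgres-password); verify them with kubectl get secret -n db:
$ export POSTGRES_PASSWORD=$(kubectl get secret postgresql -n db \
    -o jsonpath="{.data.postgres-password}" | base64 -d)
$ kubectl run postgresql-client --rm -it --restart=Never -n db \
    --image=bitnami/postgresql \
    --env="PGPASSWORD=$POSTGRES_PASSWORD" \
    -- psql -h postgresql -U postgres -d postgres -c 'SELECT version();'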
Build and Deploy Sample Spring Boot App on Minikube
In the first step, we build the app image. We use the Jib Maven plugin for that. I'm pushing the image to my own Docker registry account under the piomin name, so you should change it to your own registry account.
$ cd volumes/files-app
$ mvn clean compile jib:build -Dimage=piomin/files-app:latest
The image is successfully pushed to the remote registry and is available under the piomin/files-app:latest tag.
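Alternatively, if you prefer not to push the image to a remote registry, Jib can build it straight into the local Docker daemon, and Minikube can then load it onto all the cluster nodes. A sketch, with an image name of your choice:
$ mvn clean compile jib:dockerBuild -Dimage=files-app:latest
$ minikube image load files-app:latest
In that case, remember to reference the local tag in the Deployment and set imagePullPolicy to IfNotPresent, so Kubernetes doesn't try to pull the image from a registry.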
Let’s create a new namespace on Minikube. We will run our app in the demo namespace.
$ kubectl create ns demo
Then, let's create the PersistentVolumeClaim. Since we will run multiple app pods distributed across all the Kubernetes nodes, and the same volume is shared between all the instances, we need the ReadWriteMany access mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
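Assuming we saved the manifest above as pvc.yaml (any filename works), we can apply it with kubectl:
$ kubectl apply -f pvc.yaml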
Then, let's verify the status of the created claim:
$ kubectl get pvc -n demo
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data   Bound    pvc-08fe242a-6599-4282-b03c-ee38e092431e   1Gi        RWX            csi-hostpath-sc
After that, we can deploy our app. In order to spread the pods across all the cluster nodes, we need to define a podAntiAffinity rule (1). It ensures that only a single app pod runs on each node. The Deployment also mounts the data volume into all the app pods (2) (3).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: files-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: files-app
  template:
    metadata:
      labels:
        app: files-app
    spec:
      # (1)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - files-app
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: files-app
          image: piomin/files-app:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: 200Mi
              cpu: 100m
          ports:
            - containerPort: 8080
          env:
            - name: MOUNT_PATH
              value: /mount/data
          # (2)
          volumeMounts:
            - name: data
              mountPath: /mount/data
      # (3)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data
Once we apply the manifest, let's verify the list of running pods:
$ kubectl get po -n demo
NAME                         READY   STATUS    RESTARTS   AGE
files-app-84897d9b57-5qqdr   0/1     Pending   0          36m
files-app-84897d9b57-7gwgp   1/1     Running   0          36m
files-app-84897d9b57-bjs84   0/1     Pending   0          36m
Although we created the RWX volume, only a single pod is running. As you can see, the CSI Hostpath Driver doesn't fully support the read-write-many mode on Minikube.
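We can confirm the reason by inspecting the events of one of the Pending pods (the pod name comes from the listing above); the Events section at the bottom of the output usually reveals scheduling or volume-related errors:
$ kubectl describe pod files-app-84897d9b57-5qqdr -n demo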
In order to solve that problem, we can enable the Storage Provisioner Gluster addon in Minikube.
$ minikube addons enable storage-provisioner-gluster
After enabling it, several new pods are running in the storage-gluster namespace.
$ kubectl -n storage-gluster get pods
NAME                                       READY   STATUS    RESTARTS   AGE
glusterfile-provisioner-79cf7f87d5-87p57   1/1     Running   0          5m25s
glusterfs-d8pfp                            1/1     Running   0          5m25s
glusterfs-mp2qx                            1/1     Running   0          5m25s
glusterfs-rlnxz                            1/1     Running   0          5m25s
heketi-778d755cd-jcpqb                     1/1     Running   0          5m25s
Also, there is a new default StorageClass with the glusterfile name.
$ kubectl get sc
NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc         hostpath.csi.k8s.io        Delete          Immediate           false                  20h
glusterfile (default)   gluster.org/glusterfile    Delete          Immediate           false                  19s
standard                k8s.io/minikube-hostpath   Delete          Immediate           false                  21h
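Before redeploying the app, we delete the old claim and recreate it so that it uses the glusterfile class. A minimal sketch of the recreated claim, setting storageClassName explicitly (relying on the new default would work as well):
$ kubectl delete pvc data -n demo
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  storageClassName: glusterfile
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi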
Once we redeploy our app with the recreated PVC, we can expose our sample Spring Boot app as a Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: files-app
spec:
  selector:
    app: files-app
  ports:
    - port: 8080
      protocol: TCP
      name: http
  type: ClusterIP
Then, let's enable port forwarding for that service to access it at localhost:8080:
$ kubectl port-forward svc/files-app 8080 -n demo
Finally, we can run some tests to list and create some files on the target volume:
$ curl http://localhost:8080/files/all
[]
$ curl http://localhost:8080/files/test1.txt -X POST
/mount/data/test1.txt
$ curl http://localhost:8080/files/test2.txt -X POST
/mount/data/test2.txt
$ curl http://localhost:8080/files/all
["/mount/data/test1.txt","/mount/data/test2.txt"]
$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello1"
$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello2"
And verify the content of a particular file inside the volume:
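A simple way is to exec into one of the app pods and print the file from the mounted directory (kubectl exec against the Deployment picks one of its pods). Since the endpoint appends the raw string without a newline, we should see hello1hello2:
$ kubectl exec -n demo deploy/files-app -- cat /mount/data/test1.txt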
Final Thoughts
In this article, I wanted to share my experience working with the multi-node Kubernetes cluster simulation on Minikube. It was a very quick introduction. I hope it helps 🙂