Getting Started with Azure Kubernetes Service
In this article, you will learn how to create and manage a Kubernetes cluster on Azure and run your apps on it. We will focus on the Azure features that simplify Kubernetes adoption. We will discuss topics such as enabling Prometheus-based monitoring or exposing an app outside of the cluster using the Ingress object and Azure mechanisms. To follow this article, you don't need deep knowledge of Kubernetes. However, you can find many articles about Kubernetes and cloud-native development on my blog. For example, if you are developing Java apps and running them on Kubernetes, you may read the following article about best practices.
On the other hand, if you are interested in Azure and looking for other approaches to running Java apps there, you can also refer to some previous posts on my blog. I have already described how to use services such as Azure Spring Apps or Azure Functions for Java. For example, in that article, you can read how to integrate Spring Boot with Azure services using the Spring Cloud Azure project. For more information about Azure Functions and Spring Cloud, refer to that article.
Source Code
This time we won't work much with the source code. However, if you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository and then follow my further instructions.
Create Cluster with Azure Kubernetes Service
After signing in to the Azure Portal, we can create a resource group for our cluster. The name of my resource group is aks. Then we need to find Azure Kubernetes Service (AKS) in the marketplace and create an instance of AKS in the aks resource group.
We will be redirected to the first page of the creation wizard. I will just set the name of my cluster and leave all the other fields with the recommended values. The name of my cluster is piomin. The default cluster preset configuration is "Dev/Test", which is enough for our exercise. However, if you choose e.g. the "Production Standard" preset, it will set 3 availability zones and change the pricing tier of your cluster. Let's click the "Next" button to proceed to the next page.
We also won't change anything in the "Node pools" section. On the "Networking" page, we choose "Azure CNI" instead of "Kubenet" as the network configuration, and "Azure" instead of "Calico" as the network policy. In comparison to Kubenet, Azure CNI simplifies integration between Kubernetes and the Azure Application Gateway.
We will also make some changes in the "Monitoring" section. The main goal here is to enable the managed Prometheus service for our cluster. In order to do that, we need to create a new workspace in Azure Monitor. The name of my workspace is prometheus.
That's all we needed. Finally, we can create our first AKS cluster.
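If you prefer the CLI to the portal wizard, you can create a similar cluster with a single az command. Here's a minimal sketch; the flags mirror the choices made in the wizard, but treat the exact option set as an assumption and verify it against your installed az CLI version:
$ az aks create -g aks -n piomin \
    --network-plugin azure \
    --network-policy azure \
    --enable-azure-monitor-metrics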
After a few minutes, our Kubernetes cluster is ready. We can display the list of resources created inside the aks group. As you can see, there are some resources related to Prometheus and Azure Monitor, and a single Kubernetes service "piomin". It is our Kubernetes cluster. We can click it to see the details.
Of course, we can manage the cluster using the Azure Portal. However, we can also easily switch to the kubectl CLI. Here's the Kubernetes API server address for our cluster: piomin-xq30re6n.hcp.eastus.azmk8s.io.
Manage AKS with CLI
We can easily import the AKS cluster credentials into our local kubeconfig file with the following az command (piomin is the name of the cluster, while aks is the name of the resource group):
$ az aks get-credentials -n piomin -g aks
Once you execute the command visible above, it will add a new kubeconfig context or override the existing one.
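We can quickly verify that kubectl now points at the AKS cluster. By default, az aks get-credentials names the new context after the cluster:
$ kubectl config current-context
piomin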
After that, we can switch to the kubectl CLI. For example, we can display a list of Deployments across all the namespaces:
$ kubectl get deploy -A
NAMESPACE     NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   ama-logs-rs          1/1     1            1           56m
kube-system   ama-metrics          1/1     1            1           52m
kube-system   ama-metrics-ksm      1/1     1            1           52m
kube-system   coredns              2/2     2            2           58m
kube-system   coredns-autoscaler   1/1     1            1           58m
kube-system   konnectivity-agent   2/2     2            2           58m
kube-system   metrics-server       2/2     2            2           58m
Deploy Sample Apps on the AKS Cluster
Once we can interact with the Kubernetes cluster on Azure through the kubectl CLI, we can run our first app there. In order to do that, first go to the callme-service directory. It contains a simple Spring Boot app that exposes REST endpoints. The Kubernetes manifests are located inside the k8s directory. Let's take a look at the deployment YAML manifest. It contains the Kubernetes Deployment and Service objects.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: callme-service
In order to simplify deployment on Kubernetes, we can use Skaffold. It integrates with the kubectl CLI. We just need to execute the following commands to build the app from the source code and run it on AKS:
$ cd callme-service
$ skaffold run
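Skaffold reads its configuration from a skaffold.yaml file located in the app directory. A minimal sketch for callme-service could look like the one below; the actual file in the repository may differ:
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    # the image name matches the one referenced in the Deployment manifest
    - image: piomin/callme-service
deploy:
  kubectl:
    manifests:
      # apply all Kubernetes manifests from the k8s directory
      - k8s/*.yaml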
After that, we will deploy a second app on the cluster. Go to the caller-service directory. Here's the YAML manifest with the Kubernetes Service and Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
        - name: caller-service
          image: piomin/caller-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service
The caller-service app invokes an endpoint exposed by the callme-service app. Here's the implementation of the Spring @RestController responsible for that:
@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory
            .getLogger(CallerController.class);

    @Autowired
    Optional<BuildProperties> buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}",
                buildProperties.or(Optional::empty), version);
        String response = restTemplate.getForObject(
                "http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
}
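Note that the controller autowires a RestTemplate, which Spring Boot does not register automatically. Here's a minimal sketch of how such a bean could be declared in the app's main class; the class name is hypothetical:
@SpringBootApplication
public class CallerApplication {

    public static void main(String[] args) {
        SpringApplication.run(CallerApplication.class, args);
    }

    // register the RestTemplate injected into CallerController
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}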
Once again, let's build and deploy the app on Kubernetes with the Skaffold CLI:
$ cd caller-service
$ skaffold run
Let's switch to the Azure Portal. On the Azure Kubernetes Service page, go to the "Workloads" section. As you can see, there are two Deployments: callme-service and caller-service.
We can switch to the pods view.
Monitoring with Managed Prometheus
In order to access Prometheus metrics for our AKS cluster, we need to go to the prometheus Azure Monitor workspace. In the first step, let's take a look at the list of clusters assigned to that workspace.
Then, we can switch to the "Prometheus explorer" section. It allows us to enter a PromQL query and see a diagram illustrating the selected metric. You will find a full list of metrics collected for the AKS cluster in the following article. For example, we can visualize the RAM usage of both our apps running in the default namespace. In order to do that, we should use the node_namespace_pod_container:container_memory_working_set_bytes metric as shown below.
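Here's a sample query we can paste into the Prometheus explorer. The metric name comes from the text above; the namespace label filter is an assumption matching the namespace our apps run in:
node_namespace_pod_container:container_memory_working_set_bytes{namespace="default"}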
Exposing App Outside Azure Kubernetes
Install Azure Application Gateway on Kubernetes
In order to expose a service outside of AKS, we need to create the Ingress object. However, we must have an ingress controller installed on the cluster to satisfy the Ingress. Since we are running the cluster on Azure, our natural choice is the AKS Application Gateway Ingress Controller, which configures the Azure Application Gateway. We can install it through the Azure Portal. Go to your AKS cluster page and then switch to the "Networking" section. After that, just select the "Enable ingress controller" checkbox. A new ingress-appgateway will be created and assigned to the AKS cluster.
Once it is ready, you can display its details. The ingress-appgateway object exists in the same virtual network as Azure Kubernetes Service. There is a dedicated resource group, in my case MC_aks_piomin_eastus. The gateway has a public IP address assigned. For me, it is 20.253.111.153, as shown below.
After installing the Azure Application Gateway add-on on AKS, there is a new Deployment ingress-appgw-deployment responsible for the integration between the cluster and the Azure Application Gateway service. It is our ingress controller.
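We can confirm that the controller is running with kubectl. The add-on installs the Deployment in the kube-system namespace:
$ kubectl get deploy ingress-appgw-deployment -n kube-system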
Create Kubernetes Ingress
There is also a default IngressClass object installed on the cluster. We can display a list of available ingress classes by executing the command visible below. Our IngressClass object is available under the azure-application-gateway name.
$ kubectl get ingressclass
NAME                        CONTROLLER                  PARAMETERS   AGE
azure-application-gateway   azure/application-gateway   <none>       18m
Let's take a look at the Ingress manifest. It contains several standard fields inside the spec.rules.* section. It exposes the callme-service Kubernetes Service under the 8080 port. Our Ingress object needs to refer to the azure-application-gateway IngressClass. The Azure Application Gateway Ingress Controller (AGIC) will watch such an object. Once we apply the manifest, AGIC will automatically configure the Application Gateway instance. The Application Gateway contains health checks to verify the status of the backend. Since the Spring Boot app exposes a liveness endpoint under the /actuator/health/liveness path and the 8080 port, we need to override the default settings. In order to do that, we need to use the appgw.ingress.kubernetes.io/health-probe-path and appgw.ingress.kubernetes.io/health-probe-port annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: callme-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /callme
            pathType: Prefix
            backend:
              service:
                name: callme-service
                port:
                  number: 8080
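A side note about the liveness endpoint: Spring Boot enables the liveness and readiness health groups automatically when it detects a Kubernetes environment. To expose /actuator/health/liveness when running locally as well, a minimal application.yml sketch could look like this (assuming Spring Boot Actuator is on the classpath):
management:
  endpoint:
    health:
      probes:
        enabled: true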
The Ingress for the caller-service is very similar. We just need to change the path and the name of the backend service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080
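We can apply both manifests with kubectl and check that AGIC has assigned the gateway address to them. The file names below are hypothetical; use the paths from the repository:
$ kubectl apply -f k8s/callme-ingress.yaml
$ kubectl apply -f k8s/caller-ingress.yaml
$ kubectl get ingress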
Let’s take a look at the list of ingresses in the Azure Portal. They are available under the same address and port. There is just a difference in the target context path.
We can test both services using the gateway IP address and the right context path. Each app exposes the GET /ping endpoint.
$ curl http://20.253.111.153/callme/ping
$ curl http://20.253.111.153/caller/ping
The Azure Application Gateway contains a list of backends. In the Kubernetes context, those backends are the IP addresses of the running pods. As you can see, both health checks respond with the HTTP 200 OK code.
What’s next
We have already created a Kubernetes cluster, run the apps there, and exposed them to external clients. Now, the question is: how can Azure help with other activities? Let's say we want to install some additional software on the cluster. In order to do that, we need to go to the "Extensions + applications" section on the AKS cluster page. Then, we have to click the "Install an extension" button.
The link redirects us to the app marketplace. There are several different apps we can install in a simplified, graphical form. It could be a database, a message broker, or e.g. one of the Kubernetes-native tools like Argo CD.
We just need to create a new instance of Argo CD and fill in some basic information. The installer is based on the Argo CD Helm chart provided by Bitnami.
After a while, the instance of Argo CD is running on our cluster. We can display a list of installed extensions.
I installed Argo CD in the gitops namespace. Let's verify the list of pods running in that namespace after successful installation:
$ kubectl get pod -n gitops
NAME                                             READY   STATUS    RESTARTS   AGE
gitops-argo-cd-app-controller-6d6848f46c-8n44j   1/1     Running   0          4m46s
gitops-argo-cd-repo-server-5f7cccd9d5-bc6ts      1/1     Running   0          4m46s
gitops-argo-cd-server-5c656c9998-fsgb5           1/1     Running   0          4m46s
gitops-redis-master-0                            1/1     Running   0          4m46s
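To take a quick look at the Argo CD UI without exposing it externally, we can port-forward to the server Deployment. The Deployment name comes from the listing above, while port 8080 is the Argo CD server default; verify it against the Bitnami chart:
$ kubectl port-forward -n gitops deploy/gitops-argo-cd-server 8080:8080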
And the last thing. As you remember, we exposed our apps outside the AKS cluster under an IP address. What about exposing them under a DNS name? Firstly, we need to have a DNS zone created on Azure. In this zone, we have to add a new record set containing the IP address of our application gateway. The name of the record set indicates the hostname of the gateway. In my case, it is apps.cb57d.azure.redhatworkshops.io.
After that, we need to change the definition of the Ingress object. It should contain the host field inside the rules section, with the public DNS address of our gateway.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: apps.cb57d.azure.redhatworkshops.io
      http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080
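After applying the updated manifest, we can call the app using the DNS name instead of the raw IP address:
$ curl http://apps.cb57d.azure.redhatworkshops.io/caller/ping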
Final Thoughts
In this article, I focused on the Azure features that simplify getting started with a Kubernetes cluster. We covered topics such as cluster creation, monitoring, and exposing apps to external clients. Of course, these are not all the interesting features provided by Azure Kubernetes Service.