Development on Kubernetes with Telepresence and Skaffold
In this article, you will learn how to use Telepresence and Skaffold to improve your development workflow on Kubernetes. To simplify building our Java applications, we will also use the Jib Maven plugin. Together, these tools can significantly speed up your development process. This is not my first article about Skaffold. If you are not familiar with that tool, it is worth reading my earlier article about it. Today I’ll focus on Telepresence, which is in fact one of my favorite tools. I hope that after reading this article you will feel the same 🙂
Introduction
What’s Telepresence? It’s a very simple and powerful CLI tool for fast, local development against Kubernetes. Why is it simple? Because you can do almost everything with a single command. Telepresence is a CNCF sandbox project originally created by Ambassador Labs. It lets you run and test microservices locally against a remote Kubernetes cluster. It intercepts remote traffic and sends it to your locally running instance. I won’t focus on the technical aspects here. If you want to read more about them, you can refer to the following link.
Firstly, let’s analyze our case. There are three microservices: first-service, caller-service and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. I’m using the Spring Boot framework to create the applications. Our architecture is visible in the picture below. The first-service calls the endpoint exposed by the caller-service. Then the caller-service calls the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the remote Kubernetes cluster.
Assuming we are working on the caller-service, we also run it locally. So finally, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to our local instance. On the other hand, the local instance of the caller-service should call the instance of the callme-service running on the remote cluster. Looks hard? Let’s check it out!
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂
Prerequisites
Before we start, we need to install several tools. Of course, we also need to have a running Kubernetes cluster (or e.g. OpenShift). We will use the following CLI tools:
- kubectl – to interact with the Kubernetes cluster. It is also used by Skaffold
- skaffold – here are the installation instructions. It works perfectly fine on Linux and macOS, as well as on Windows
- telepresence – here are the installation instructions. I’m not sure about Windows, since it is in developer preview there. However, I’m using it on macOS without any problems
- Maven + JDK 11 – of course, we need to build the applications locally before deploying them to Kubernetes
Build and deploy applications on Kubernetes with Skaffold and Jib
Our applications are as simple as possible. Let’s take a look at the callme-service REST endpoint implementation. It just returns the name of the microservice and its version (v1 everywhere in this article):
@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(CallmeController.class);

    @Autowired
    BuildProperties buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}",
                buildProperties.getName(), version);
        return "I'm callme-service " + version;
    }
}
The endpoint visible above is called by the caller-service. Just like before, it prints the name of the microservice and its version, but it also appends the result received from the callme-service. It calls the callme-service endpoint using the Spring RestTemplate and the name of the Kubernetes Service.
@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}",
                buildProperties.getName(), version);
        String response = restTemplate
                .getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
}
Finally, let’s take a look at the implementation of the first-service. It calls the caller-service endpoint visible above.
@RestController
@RequestMapping("/first")
public class FirstController {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(FirstController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}",
                buildProperties.getName(), version);
        String response = restTemplate
                .getForObject("http://caller-service:8080/caller/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm first-service " + version + ". Calling... " + response;
    }
}
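Putting the three controllers together, the final response is just a chain of concatenated strings. Here’s a plain-Java sketch (no HTTP involved; the helper methods are hypothetical stand-ins for the three /ping endpoints, all assumed to report version v1) of what the first-service ultimately returns:

```java
public class ChainDemo {

    // Stand-in for GET /callme/ping on the callme-service
    static String callmePing(String version) {
        return "I'm callme-service " + version;
    }

    // Stand-in for GET /caller/ping, which appends the callme-service response
    static String callerPing(String version) {
        return "I'm caller-service " + version + ". Calling... " + callmePing("v1");
    }

    // Stand-in for GET /first/ping, which appends the caller-service response
    static String firstPing(String version) {
        return "I'm first-service " + version + ". Calling... " + callerPing("v1");
    }

    public static void main(String[] args) {
        // → I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1
        System.out.println(firstPing("v1"));
    }
}
```

This is exactly the response we should see at the end of the article, once the whole chain of calls works across the cluster.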
Here’s a definition of the Kubernetes Service for e.g. the callme-service:
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service
You can find the Kubernetes YAML manifests inside the k8s directory of every single microservice. Let’s take a look at the example Deployment manifest for the callme-service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: VERSION
          value: "v1"
We can deploy each microservice independently, or all of them at once. Here’s the global Skaffold configuration for the whole project. You can find it in the root directory. As you can see, it uses Jib as the build tool and looks for manifests inside the k8s directory of every single module.
apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
  - image: piomin/first-service
    jib:
      project: first-service
  - image: piomin/caller-service
    jib:
      project: caller-service
  - image: piomin/callme-service
    jib:
      project: callme-service
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
    - '*/k8s/*.yaml'
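If you don’t want to run kubectl port-forward by hand later on, Skaffold can also forward ports automatically when started with the --port-forward flag. The forwarding targets may be declared in a portForward section of skaffold.yaml. A sketch (the service name and ports are assumptions based on the manifests above):

```yaml
portForward:
- resourceType: service
  resourceName: first-service
  port: 8080
  localPort: 8080
```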
Each Maven module has to include the Jib Maven plugin.
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.1</version>
</plugin>
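If you ever need a different base image than the default one, Jib lets you pin it explicitly via the from section of the plugin configuration. A sketch (the image tag here is just an example, not something the repository requires):

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.1</version>
  <configuration>
    <from>
      <image>adoptopenjdk:11-jre</image>
    </from>
  </configuration>
</plugin>
```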
Finally, we can deploy all our microservices on Kubernetes with Skaffold. Jib works in dockerless mode, so you don’t have to run Docker on your machine. By default, it uses adoptopenjdk:11-jre as the base image, following the Java version defined in the Maven pom.xml. If you want to observe the application logs after deploying to Kubernetes, just activate the --tail option.
$ skaffold run --tail
Let’s just display a list of running pods to verify if the deployment was successful:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
caller-service-688bd76c98-2m4gp 1/1 Running 0 3m1s
callme-service-75c7cf5bf-rfx69 1/1 Running 0 3m
first-service-7698465bcb-rvf77 1/1 Running 0 3m
Using Telepresence with Kubernetes
Let the party begin! After running all the microservices on Kubernetes, we will connect Telepresence to our cluster. The following command runs the Telepresence daemon on your machine and connects it to the Kubernetes cluster (taken from the current kube context).
$ telepresence connect
If you see a similar result, it means everything went well.
Telepresence has connected to your Kubernetes cluster, but it is not yet intercepting any traffic from the pods. You can verify that with the following command:
$ telepresence list
caller-service: ready to intercept (traffic-agent not yet installed)
callme-service: ready to intercept (traffic-agent not yet installed)
first-service : ready to intercept (traffic-agent not yet installed)
Ok, so now let’s intercept the traffic from the caller-service.
$ telepresence intercept caller-service --port 8080:8080
Here’s my result after running the command visible above.
Now, the only thing we need to do is run the caller-service on the local machine. By default, it listens on port 8080:
$ mvn clean spring-boot:run
We can do it even smarter with a single Telepresence command, instead of running them separately:
$ telepresence intercept caller-service --port 8080:8080 -- mvn clean spring-boot:run
Before we send a test request, let’s analyze what happened. After running the telepresence intercept command, Telepresence injects a sidecar container into the application pod. The name of this container is traffic-agent. It is responsible for intercepting the traffic that comes to the caller-service.
$ kubectl get pod caller-service-7577b9f6fd-ww7nv \
-o jsonpath='{.spec.containers[*].name}'
caller-service traffic-agent
Ok, now let’s just call the first-service running on the remote Kubernetes cluster. I deployed it on OpenShift, so I can easily call it externally using the Route object. If you run on plain Kubernetes, you can create an Ingress or just run the kubectl port-forward command. Alternatively, you may also enable port forwarding in the skaffold run command (the --port-forward option). Anyway, let’s call the first-service /ping endpoint. Here’s my result.
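For reference, on plain Kubernetes the Ingress mentioned above could look roughly like the manifest below. This is only a sketch, not part of the repository: the host name is made up, and an ingress controller must already be installed in the cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-service
spec:
  rules:
  - host: first-service.example.com
    http:
      paths:
      - path: /first
        pathType: Prefix
        backend:
          service:
            name: first-service
            port:
              number: 8080
```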
Here are the application logs from the Kubernetes cluster, printed by the skaffold run command. As you can see, only the logs from the callme-service and the first-service appear there:
Now, let’s take a look at the logs from the local instance of the caller-service. Telepresence intercepts the traffic and sends it to the local instance of the application. Then this instance calls the callme-service on the remote cluster 🙂
Cleanup Kubernetes environment after using Telepresence
To clean up the environment, just run the following command. It removes the sidecar container from your application pod. However, if you started your Spring Boot application within the telepresence intercept command, you just need to kill the local process with CTRL+C (although the traffic-agent container stays inside the pod).
$ telepresence uninstall --agent caller-service
After that, you can call the first-service once again. Now, the requests no longer leave the cluster.
In order to shut down the Telepresence daemon and disconnect from the Kubernetes cluster, just run the following command:
$ telepresence quit
Final Thoughts
You can also easily debug your microservices locally, just by running the same telepresence intercept command and starting your application in debug mode. What’s important in this scenario: Telepresence does not force you to use any particular tools or IDE. You can do everything exactly the same way as if you were running or debugging the application locally. I hope you will like this tool as much as I do 🙂