Microservices on Knative with Spring Boot and GraalVM
In this article, you will learn how to run Spring Boot microservices that communicate with each other on Knative. I will also show you how to prepare a native image of a Spring Boot application with GraalVM, and how to run it on Kubernetes using Skaffold and the Jib Maven Plugin.
This article is the second in my series about Knative. After publishing the first of them, Spring Boot on Knative, many of you asked about the long application startup time after scaling to zero. That's why I resolved this Spring Boot issue by compiling the application to a native image with GraalVM. Startup time is an important concern in a serverless approach.
On Knative you can run any type of application, not only a function. In this article, when I write "microservices", I really mean service-to-service communication.
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions 🙂
As the example of microservices in this article, I used two applications callme-service
and caller-service
. Both of them exposes a single endpoint, which prints a name of the application pod. The caller-service
application also calls the endpoint exposed by the callme-service
application.
On Kubernetes, both applications will be deployed as Knative services in multiple revisions. We will also distribute traffic across those revisions using Knative routes. The picture below illustrates the architecture of our sample system.
1. Prepare Spring Boot microservices
We have two simple Spring Boot applications that expose a single REST endpoint, health checks, and run an in-memory H2 database. We use Hibernate and Lombok. Therefore, we need to include the following list of dependencies in Maven pom.xml.
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <version>1.18.16</version>
</dependency>
Each time we call the ping endpoint, it creates an event and stores it in the H2 database. The REST endpoint returns the name of the pod and its namespace inside Kubernetes, together with the id of the created event. This will be useful during manual tests on the cluster.
@RestController
@RequestMapping("/callme")
public class CallmeController {

    @Value("${spring.application.name}")
    private String appName;
    @Value("${POD_NAME}")
    private String podName;
    @Value("${POD_NAMESPACE}")
    private String podNamespace;
    @Autowired
    private CallmeRepository repository;

    @GetMapping("/ping")
    public String ping() {
        Callme c = repository.save(new Callme(new Date(), podName));
        return appName + "(id=" + c.getId() + "): " + podName + " in " + podNamespace;
    }
}
Here's our model class – Callme. The model class inside the caller-service application is pretty similar (a sketch of it follows the code below).
@Entity
@Getter
@Setter
@NoArgsConstructor
@RequiredArgsConstructor
public class Callme {

    @Id
    @GeneratedValue
    private Integer id;
    @Temporal(TemporalType.TIMESTAMP)
    @NonNull
    private Date addDate;
    @NonNull
    private String podName;
}
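For completeness, here is a minimal sketch of the corresponding Caller entity inside caller-service. It is not shown in the article, so this is an assumption based on the statement above that it mirrors Callme and on the Caller(new Date(), podName) constructor call used later.
// assumption: the Caller entity mirrors Callme, as stated above
@Entity
@Getter
@Setter
@NoArgsConstructor
@RequiredArgsConstructor
public class Caller {

    @Id
    @GeneratedValue
    private Integer id;
    @Temporal(TemporalType.TIMESTAMP)
    @NonNull
    private Date addDate;
    @NonNull
    private String podName;
}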
Also, let's take a look at the first version of the ping method in CallerController. We will modify it later when discussing communication and tracing. For now, it is important to understand that this method also calls the ping endpoint exposed by callme-service and returns the whole response.
@GetMapping("/ping")
public String ping() {
    Caller c = repository.save(new Caller(new Date(), podName));
    String callme = callme();
    return appName + "(id=" + c.getId() + "): " + podName + " in " + podNamespace
            + " is calling " + callme;
}
2. Prepare Spring Boot native image with GraalVM
Spring Native provides support for compiling Spring applications to native executables using the GraalVM native compiler. For more details about this project, you may refer to its documentation. Here’s the main class of our application.
@SpringBootApplication
public class CallmeApplication {

    public static void main(String[] args) {
        SpringApplication.run(CallmeApplication.class, args);
    }
}
Hibernate does a lot of dynamic things at runtime, so we need to make it enhance the entities in our application at build time. To do that, we add the following Maven plugin to our build.
<plugin>
  <groupId>org.hibernate.orm.tooling</groupId>
  <artifactId>hibernate-enhance-maven-plugin</artifactId>
  <version>${hibernate.version}</version>
  <executions>
    <execution>
      <configuration>
        <failOnError>true</failOnError>
        <enableLazyInitialization>true</enableLazyInitialization>
        <enableDirtyTracking>true</enableDirtyTracking>
        <enableExtendedEnhancement>false</enableExtendedEnhancement>
      </configuration>
      <goals>
        <goal>enhance</goal>
      </goals>
    </execution>
  </executions>
</plugin>
In this article, I'm using the latest version of Spring Native – 0.9.0. Since Spring Native is actively developed, there are significant changes between subsequent versions. Compared to some other articles based on earlier versions, we no longer have to disable proxyBeanMethods, exclude SpringDataWebAutoConfiguration, add spring-context-indexer to the dependencies, or create hibernate.properties. Cool! I can also use Buildpacks for building a native image.
So, now we just need to add the following dependency.
<dependency>
  <groupId>org.springframework.experimental</groupId>
  <artifactId>spring-native</artifactId>
  <version>0.9.0</version>
</dependency>
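Note that at that time Spring Native artifacts were not published to Maven Central. If your build cannot resolve the dependency, you may need to declare the Spring release repository – a sketch, assuming a default Maven setup with no corporate mirror:
<!-- assumption: needed only if Spring Native is not available from your default repositories -->
<repositories>
  <repository>
    <id>spring-release</id>
    <url>https://repo.spring.io/release</url>
  </repository>
</repositories>
<pluginRepositories>
  <pluginRepository>
    <id>spring-release</id>
    <url>https://repo.spring.io/release</url>
  </pluginRepository>
</pluginRepositories>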
The Spring AOT plugin performs ahead-of-time transformations required to improve native image compatibility and footprint.
<plugin>
  <groupId>org.springframework.experimental</groupId>
  <artifactId>spring-aot-maven-plugin</artifactId>
  <version>${spring.native.version}</version>
  <executions>
    <execution>
      <id>test-generate</id>
      <goals>
        <goal>test-generate</goal>
      </goals>
    </execution>
    <execution>
      <id>generate</id>
      <goals>
        <goal>generate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
3. Run native image on Knative with Buildpacks
Using Buildpacks to create a native image is our primary option. Although it requires a Docker daemon, it works properly on every OS. However, we need to use the latest stable version of Spring Boot, which in this case is 2.4.3. You can also configure Buildpacks inside Maven pom.xml with the spring-boot-maven-plugin (see the sketch after the Skaffold configuration below). Since we need to build and deploy the application on Kubernetes in one step, I prefer the configuration in Skaffold. We use paketobuildpacks/builder:tiny as the builder image. It is also required to enable the native build option with the BP_BOOT_NATIVE_IMAGE environment variable.
apiVersion: skaffold/v2beta11
kind: Config
metadata:
  name: callme-service
build:
  artifacts:
    - image: piomin/callme-service
      buildpacks:
        builder: paketobuildpacks/builder:tiny
        env:
          - BP_BOOT_NATIVE_IMAGE=true
deploy:
  kubectl:
    manifests:
      - k8s/ksvc.yaml
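If you prefer the pom.xml route mentioned above instead of Skaffold, a minimal spring-boot-maven-plugin configuration might look like this. It is a sketch, not taken from the repository, but it uses the same builder image and native flag:
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <image>
      <!-- same builder and native flag as in the Skaffold configuration above -->
      <builder>paketobuildpacks/builder:tiny</builder>
      <env>
        <BP_BOOT_NATIVE_IMAGE>true</BP_BOOT_NATIVE_IMAGE>
      </env>
    </image>
  </configuration>
</plugin>
With that in place, mvn spring-boot:build-image produces the same kind of native container image, but without the deploy step that Skaffold gives us.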
The Skaffold configuration refers to our Knative Service manifest. It is quite non-typical, since we need to inject the pod and namespace names into the container. We also allow a maximum of 10 concurrent requests per single pod. If that limit is exceeded, Knative scales up the number of running instances.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: callme-service
spec:
  template:
    spec:
      containerConcurrency: 10
      containers:
        - name: callme
          image: piomin/callme-service
          ports:
            - containerPort: 8080
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
By default, Knative doesn't allow the use of the Kubernetes fieldRef feature. In order to enable it, we need to update the config-features ConfigMap in the knative-serving namespace. The required property name is kubernetes.podspec-fieldref.
kind: ConfigMap
apiVersion: v1
metadata:
  name: config-features
  namespace: knative-serving
  labels:
    serving.knative.dev/release: v0.16.0
data:
  kubernetes.podspec-fieldref: enabled
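Instead of applying the full manifest, you can also flip just that one flag with a patch, assuming your kubectl context points at the cluster:
$ kubectl patch configmap config-features -n knative-serving \
    --type merge -p '{"data":{"kubernetes.podspec-fieldref":"enabled"}}'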
Finally, we may build and deploy our Spring Boot microservices on Knative with the following command.
$ skaffold run
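Once skaffold run finishes, you can verify that the Knative service has reached the Ready state (ksvc is the built-in short name for Knative services):
$ kubectl get ksvc callme-service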
4. Run native image on Knative with Jib
As in my previous article about Knative, we will build and run our applications on Kubernetes with Skaffold and Jib. Fortunately, the Jib Maven Plugin has already introduced support for GraalVM native images. The Jib GraalVM Native Image Extension expects the native-image-maven-plugin to do the heavy lifting of generating a native image (with the native-image:native-image goal). Then the extension simply copies the binary into a container image and sets it as the executable.
Of course, unlike Java bytecode, a native image is not portable but platform-specific. The Native Image Maven Plugin doesn't support cross-compilation, so the native image should be built on the same OS and architecture as the runtime. Since I build the GraalVM image of my applications on Ubuntu 20.10, I should use the same base Docker image for running the containerized microservices. In that case, I chose the ubuntu:20.10 image, as shown below.
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.8.0</version>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-native-image-extension-maven</artifactId>
      <version>0.1.0</version>
    </dependency>
  </dependencies>
  <configuration>
    <from>
      <image>ubuntu:20.10</image>
    </from>
    <pluginExtensions>
      <pluginExtension>
        <implementation>com.google.cloud.tools.jib.maven.extension.nativeimage.JibNativeImageExtension</implementation>
      </pluginExtension>
    </pluginExtensions>
  </configuration>
</plugin>
If you use the Jib Maven Plugin, you first need to build a native image. In order to build a native image of the application, we also need to include the native-image-maven-plugin. Of course, you need to build the application using a GraalVM JDK.
<plugin>
  <groupId>org.graalvm.nativeimage</groupId>
  <artifactId>native-image-maven-plugin</artifactId>
  <version>21.0.0.2</version>
  <executions>
    <execution>
      <goals>
        <goal>native-image</goal>
      </goals>
      <phase>package</phase>
    </execution>
  </executions>
</plugin>
So, the last step in this section is to run the Maven build. In my configuration, the native-image-maven-plugin is activated under the native-image profile.
$ mvn clean package -Pnative-image
After the build, the native image of callme-service is visible inside the target directory.
The configuration of Skaffold is typical. We just need to enable Jib as a build tool.
apiVersion: skaffold/v2beta11
kind: Config
metadata:
  name: callme-service
build:
  artifacts:
    - image: piomin/callme-service
      jib: {}
deploy:
  kubectl:
    manifests:
      - k8s/ksvc.yaml
Finally, we may build and deploy our Spring Boot microservices on Knative with the following command.
$ skaffold run
5. Communication between microservices on Knative
I deployed two revisions of each application on Knative. Just for comparison, the first version of the deployed applications is compiled with OpenJDK, while only the latest version is based on the GraalVM native image. Thanks to that, we may compare the startup times of both revisions.
Let's take a look at the list of revisions after deploying both versions of our applications. The traffic is split 60% to the latest version and 40% to the previous version of each application, which can be declared as shown in the sketch below.
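A minimal sketch of such a split in the traffic section of the Knative Service for callme-service. The revision name is hypothetical, since it depends on your deployment history:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: callme-service
spec:
  traffic:
    # latest (GraalVM) revision
    - latestRevision: true
      percent: 60
    # previous (OpenJDK) revision; the name is hypothetical
    - revisionName: callme-service-00001
      percent: 40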
Under the hood, Knative creates Kubernetes Services and multiple Deployments. There is always a single Deployment per Knative Revision. There are also multiple Services, but exactly one of them covers all revisions, and it is of type ExternalName. Assuming you still want to split traffic across multiple revisions, you should use exactly that Service in your communication. Its name is callme-service. However, we should use the FQDN, which includes the namespace name and the svc.cluster.local suffix.
We can use Spring's RestTemplate to call the endpoint exposed by callme-service. In order to guarantee tracing for the whole request path, we need to propagate the Zipkin headers between subsequent calls. For the communication, we will use the service with the fully qualified internal domain name (callme-service.serverless.svc.cluster.local), as mentioned before.
@RestController
@RequestMapping("/caller")
public class CallerController {

    private RestTemplate restTemplate;

    CallerController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @Value("${spring.application.name}")
    private String appName;
    @Value("${POD_NAME}")
    private String podName;
    @Value("${POD_NAMESPACE}")
    private String podNamespace;
    @Autowired
    private CallerRepository repository;

    @GetMapping("/ping")
    public String ping(@RequestHeader HttpHeaders headers) {
        Caller c = repository.save(new Caller(new Date(), podName));
        String callme = callme(headers);
        return appName + "(id=" + c.getId() + "): " + podName + " in " + podNamespace
                + " is calling " + callme;
    }

    private String callme(HttpHeaders headers) {
        MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
        Set<String> headerNames = headers.keySet();
        headerNames.forEach(it -> map.put(it, headers.get(it)));
        HttpEntity<Void> httpEntity = new HttpEntity<>(map);
        ResponseEntity<String> entity = restTemplate
                .exchange("http://callme-service.serverless.svc.cluster.local/callme/ping",
                        HttpMethod.GET, httpEntity, String.class);
        return entity.getBody();
    }
}
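The constructor injection above assumes that a RestTemplate bean exists in the application context. That bean definition is not shown in this article, so here is a minimal sketch of how it could be registered (the class name CallerConfig is hypothetical):
@Configuration
public class CallerConfig {

    // a plain RestTemplate is enough here, since we propagate the tracing headers manually
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}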
In order to test the communication between our microservices, we just need to invoke caller-service via its Knative Route.
Let's perform some test calls to the caller-service GET /caller/ping endpoint. We should use the URL http://caller-service-serverless.apps.cluster-d556.d556.sandbox262.opentlc.com/caller/ping.
In the first two requests, caller-service calls the latest version of callme-service (compiled with GraalVM). In the third request, it communicates with the older version of callme-service (compiled with OpenJDK). Let's compare the startup times of these two versions of the same application.
With GraalVM the application starts in 0.3s instead of 5.9s. We should also keep in mind that our applications start an in-memory, embedded H2 database.
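You can read the startup time directly from the container logs of a given revision. The pod name below is hypothetical and depends on your deployment:
$ kubectl logs callme-service-00002-deployment-xxx -c callme | grep "Started"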
6. Configure tracing with Jaeger
In order to enable tracing for Knative, we need to update the tracing configuration in the knative-serving namespace: the config-tracing ConfigMap, or the spec.config.tracing section of the KnativeServing custom resource when using the Knative operator, as shown below. Of course, we first need to install Jaeger in our cluster.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    tracing:
      sample-rate: "1"
      backend: zipkin
      zipkin-endpoint: http://jaeger-collector.knative-serving.svc.cluster.local:9411/api/v2/spans
      debug: "false"
You can use the Jaeger Helm chart to install it. With this option, you need to execute the following commands.
$ helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
$ helm install jaeger jaegertracing/jaeger
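To reach the Jaeger UI locally, you can port-forward the query service. This assumes the default chart values, where the jaeger-query service listens on port 80 and forwards to the UI on 16686; adjust the names and ports if your chart version differs:
$ kubectl port-forward svc/jaeger-query 16686:80
Then open http://localhost:16686 in your browser.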
Knative automatically creates the Zipkin span headers. Our only task is to propagate the HTTP headers between the caller-service and callme-service applications. In my configuration, Knative sends 100% of traces to Jaeger. Let's take a look at some traces for the GET /caller/ping endpoint within our Knative microservices mesh.
We can also take a look at the detailed view of every single request.
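As a side note, what actually stitches these traces together are the Zipkin B3 headers. Instead of copying every incoming header, as the callme() method above does, you could forward only the B3 set. This is a sketch using the standard B3 header names (plus x-request-id, which Knative's Istio-based networking layer also uses), not the code from the repository:
// standard Zipkin B3 propagation headers
private static final List<String> B3_HEADERS = List.of(
        "x-request-id", "x-b3-traceid", "x-b3-spanid",
        "x-b3-parentspanid", "x-b3-sampled", "x-b3-flags", "b3");

private String callme(HttpHeaders headers) {
    // forward only the tracing headers instead of the whole header map
    MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
    B3_HEADERS.forEach(name -> {
        List<String> values = headers.get(name);
        if (values != null) {
            map.put(name, values);
        }
    });
    HttpEntity<Void> httpEntity = new HttpEntity<>(map);
    return restTemplate.exchange(
            "http://callme-service.serverless.svc.cluster.local/callme/ping",
            HttpMethod.GET, httpEntity, String.class).getBody();
}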
Conclusion
There are several important things you need to consider when running microservices on Knative. I focused on the aspects related to communication and tracing. I also showed that Spring Boot doesn't have to take several seconds to start: with GraalVM it can start in milliseconds, so you can definitely consider it for a serverless approach. You may expect more articles about Knative soon!