Serverless Java Functions on OpenShift
In this article, you will learn how to create and deploy serverless, Knative-based functions on OpenShift. We will use a single kn CLI command to build and run our applications on the cluster. How can we do that? With OpenShift Serverless Functions we can use the kn func plugin, which allows us to work directly with the source code. It uses the Cloud Native Buildpacks API to create container images. It supports several runtimes like Node.js, Python, or Go. However, we will try the Java runtimes based on the Quarkus and Spring Boot frameworks.
Prerequisites
You need two things to run this exercise by yourself. Firstly, you need Docker or Podman running on your local machine, because Cloud Native Buildpacks use it to run the build. If you are not familiar with Cloud Native Buildpacks, you can read my article about it. I tried to configure Podman according to this part of the documentation, but I did not succeed (on macOS). With Docker, it just works, so I gave up on further attempts with Podman.
Secondly, you need a target OpenShift cluster with the serverless module installed. You can run it locally using CodeReady Containers (crc). But in my opinion, a better idea is to try the developer sandbox available online here. It contains everything you need to start development, including OpenShift Serverless, which is available by default in the sandbox version of OpenShift.
Finally, you need to install the oc client and the kn CLI locally. Since we use the kn func plugin, we need to install the Knative CLI version provided by Red Hat. The detailed installation instructions are available here.
Source Code
If you would like to try it by yourself, you can always take a look at my source code. To do that, clone my GitHub repository and go to the serverless/functions directory. After that, you should just follow my instructions. Let's begin.
Create an OpenShift Serverless function with Quarkus
We can generate sample application source code using a single kn command. We can choose between multiple runtimes and two templates. Currently, there are six runtimes available. The default is node (for Node.js applications), but you can also set quarkus, springboot, typescript, go, or python. There are also two templates available: http for simple REST-based applications and events for applications leveraging the Knative Eventing approach to communication. Let's create our first application using the quarkus runtime and the events template.
$ kn func create -l quarkus -t events caller-function
Now, go to the caller-function directory and edit the generated pom.xml file. Firstly, we will change the Java version to 11 and update Quarkus to the latest version.
<properties>
    <compiler-plugin.version>3.8.1</compiler-plugin.version>
    <maven.compiler.parameters>true</maven.compiler.parameters>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <quarkus-plugin.version>2.4.2.Final</quarkus-plugin.version>
    <quarkus.platform.artifact-id>quarkus-universe-bom</quarkus.platform.artifact-id>
    <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
    <quarkus.platform.version>2.4.2.Final</quarkus.platform.version>
    <surefire-plugin.version>3.0.0-M5</surefire-plugin.version>
</properties>
By default, the kn func plugin includes some dependencies in the test scope and a single dependency on the Quarkus Funqy Knative Events extension. There is also the Quarkus SmallRye Health extension, which automatically generates liveness and readiness health checks.
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-funqy-knative-events</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
By default, kn func generates a simple function that takes a CloudEvent as input and sends the same event as output. I will not change much here, just replace System.out with a Logger implementation to print logs.
public class Function {

    @Inject
    Logger logger;

    @Funq
    public CloudEvent<Output> function(CloudEvent<Input> input) {
        logger.infof("New event: %s", input);
        Output output = new Output(input.data().getMessage());
        return CloudEventBuilder.create().build(output);
    }
}
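The Input and Output types referenced above are simple payload classes generated by the template. They are not shown here in full, but based on how the function uses them, a minimal sketch could look like this (an assumption, not the exact generated code):

```java
// Hypothetical sketch of the payload classes used by the function above.
// The generated template contains similar POJOs with getters and setters.
class Input {
    private String message;
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}

class Output {
    private String message;
    public Output() { }
    public Output(String message) { this.message = message; }
    public String getMessage() { return message; }
}
```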
Assuming you have already logged in to your OpenShift cluster using the oc client, you can proceed to the function deployment. In fact, you just need to go to your application directory and run a single kn func command as shown below.
$ kn func deploy -i quay.io/pminkows/caller-function -v
Once you run the command shown above, the local build starts on Docker. If it finishes successfully, we proceed to the deployment phase.
In the application root directory, there is also an automatically generated configuration file, func.yaml.
name: caller-function
namespace: ""
runtime: quarkus
image: quay.io/pminkows/caller-function
imageDigest: sha256:5d3ef16e1282bc5f6367dff96ab7bb15487199ac3939e262f116657a83706245
builder: quay.io/boson/faas-jvm-builder:v0.8.4
builders: {}
buildpacks: []
healthEndpoints: {}
volumes: []
envs: []
annotations: {}
options: {}
labels: []
Create an OpenShift Serverless function with Spring Boot
Now, we will do exactly the same thing as before, but this time for a Spring Boot application. In order to create a Spring Boot function, we just need to set springboot as the runtime name. The name of our application is callme-function.
$ kn func create -l springboot -t events callme-function
Then we go to the callme-function directory. Firstly, let's edit the Maven pom.xml. The same as for Quarkus, I'll set the latest version of Spring Boot.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.6.1</version>
    <relativePath />
</parent>
The generated application is built on top of the Spring Cloud Function project. We don't need to add anything there.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-function-web</artifactId>
</dependency>
The Spring Boot function code is a little more complicated than the code generated by the Quarkus framework. It uses the Spring functional programming style, where a Function bean represents an HTTP POST endpoint with an input and an output response.
@SpringBootApplication
public class SpringCloudEventsApplication {

    private static final Logger LOGGER = Logger.getLogger(
            SpringCloudEventsApplication.class.getName());

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudEventsApplication.class, args);
    }

    @Bean
    public Function<Message<Input>, Output> uppercase(CloudEventHeaderEnricher enricher) {
        return m -> {
            HttpHeaders httpHeaders = HeaderUtils.fromMessage(m.getHeaders());
            LOGGER.log(Level.INFO, "Input CE Id:{0}", httpHeaders.getFirst(ID));
            LOGGER.log(Level.INFO, "Input CE Spec Version:{0}", httpHeaders.getFirst(SPECVERSION));
            LOGGER.log(Level.INFO, "Input CE Source:{0}", httpHeaders.getFirst(SOURCE));
            LOGGER.log(Level.INFO, "Input CE Subject:{0}", httpHeaders.getFirst(SUBJECT));
            Input input = m.getPayload();
            LOGGER.log(Level.INFO, "Input {0} ", input);
            Output output = new Output();
            output.input = input.input;
            output.operation = httpHeaders.getFirst(SUBJECT);
            output.output = input.input != null ? input.input.toUpperCase() : "NO DATA";
            return output;
        };
    }

    @Bean
    public CloudEventHeaderEnricher attributesProvider() {
        return attributes -> attributes
                .setSpecVersion("1.0")
                .setId(UUID.randomUUID().toString())
                .setSource("http://example.com/uppercase")
                .setType("com.redhat.faas.springboot.events");
    }

    @Bean
    public Function<String, String> health() {
        return probe -> {
            if ("readiness".equals(probe)) {
                return "ready";
            } else if ("liveness".equals(probe)) {
                return "live";
            } else {
                return "OK";
            }
        };
    }
}
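The Input and Output classes used by the uppercase function are plain data holders with public fields. Based on how the function accesses them, they could be sketched roughly like this (an assumption about the template code, not a verbatim copy):

```java
// Hypothetical sketch of the data classes accessed by the uppercase function.
// The generated template uses public fields, as seen in output.input = input.input.
class Input {
    public String input;
}

class Output {
    public String input;
    public String operation;
    public String output;
}
```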
Because there are two function beans (@Bean Function) defined in the generated code, you need to add the following property to the application.properties file.
spring.cloud.function.definition = uppercase;health
Deploying serverless functions on OpenShift
We have two sample applications deployed on the OpenShift cluster. The first is written with Quarkus, and the second with Spring Boot. These applications will communicate with each other through events, so in the first step we need to create a Knative Eventing broker.
$ kn broker create default
Let’s check if the broker has been successfully created.
$ kn broker list
NAME URL AGE CONDITIONS READY REASON
default http://broker-ingress.knative-eventing.svc.cluster.local/piomin-serverless/default 12m 5 OK / 5 True
Then, let’s display a list of running Knative services:
$ kn service list
NAME URL LATEST AGE CONDITIONS READY REASON
caller-function https://caller-function-piomin-serverless.apps.cluster-8e1d.8e1d.sandbox114.opentlc.com caller-function-00002 18m 3 OK / 3 True
callme-function https://callme-function-piomin-serverless.apps.cluster-8e1d.8e1d.sandbox114.opentlc.com callme-function-00006 11h 3 OK / 3 True
Send and receive CloudEvents with Quarkus
The architecture of our solution is shown in the picture below. The caller-function receives events sent directly by us using the kn func plugin. It then processes the input event, creates a new CloudEvent, and sends it to the Knative Broker. The broker just receives events. To provide more advanced event routing, we need to define a Knative Trigger. The trigger is able to filter events and send them directly to the target sink. Those events are then received by the callme-function.
Ok, so now we need to rewrite the caller-function to add a step that creates and sends a CloudEvent to the Knative Broker. To do that, we need to declare and invoke the Quarkus REST client. Firstly, let's add the required dependencies.
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-client-jackson</artifactId>
</dependency>
In the next step, we will create a client interface with a declaration of the calling method. The CloudEvent specification requires four HTTP headers to be set on the POST request. The context path of the Knative Broker is /piomin-serverless/default, which we have already checked using the kn broker list command.
@Path("/piomin-serverless/default")
@RegisterRestClient
public interface BrokerClient {

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    String sendEvent(Output event,
                     @HeaderParam("Ce-Id") String id,
                     @HeaderParam("Ce-Source") String source,
                     @HeaderParam("Ce-Type") String type,
                     @HeaderParam("Ce-Specversion") String version);
}
We also need to set the broker address in application.properties. We use the standard mp-rest/url property handled by the MicroProfile REST client.
functions.BrokerClient/mp-rest/url = http://broker-ingress.knative-eventing.svc.cluster.local
Here's the final implementation of our function in the caller-function module. The type of the event is caller.output. In fact, we can set any name as an event type.
public class Function {

    @Inject
    Logger logger;

    @Inject
    @RestClient
    BrokerClient client;

    @Funq
    public CloudEvent<Output> function(CloudEvent<Input> input) {
        logger.infof("New event: %s", input);
        Output output = new Output(input.data().getMessage());
        CloudEvent<Output> outputCloudEvent = CloudEventBuilder.create().build(output);
        client.sendEvent(output,
                input.id(),
                "http://caller-function",
                "caller.output",
                input.specVersion());
        return outputCloudEvent;
    }
}
Finally, we will create a Knative Trigger. It receives events incoming to the broker and filters them by type. Only events with the type caller.output are forwarded to the callme-function.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: callme-trigger
spec:
  broker: default
  filter:
    attributes:
      type: caller.output
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: callme-function
    uri: /uppercase
Now, we can send a test CloudEvent to the caller-function directly from the local machine with the following command (make sure you run it inside the caller-function directory):
$ kn func emit -d "Hello"
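Under the hood, kn func emit performs an HTTP POST carrying the CloudEvent attributes as Ce-* headers (the binary content mode), with the event data as the request body. As an illustration only, here is a rough Java sketch of an equivalent request; the URL and attribute values are placeholders, not the exact ones the CLI uses:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.UUID;

public class EmitSketch {

    // Builds a CloudEvent-style POST request in binary content mode:
    // event attributes travel as Ce-* HTTP headers, the data as the body.
    static HttpRequest buildEvent(String url, String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .header("Ce-Id", UUID.randomUUID().toString())
                .header("Ce-Source", "local-test")   // placeholder source
                .header("Ce-Type", "test.event")     // placeholder type
                .header("Ce-Specversion", "1.0")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        // Placeholder URL; a real call would target the function's route.
        HttpRequest request = buildEvent("http://localhost:8080", "{\"message\":\"Hello\"}");
        System.out.println(request.method() + " " + request.uri());
    }
}
```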
Summary
Let's summarize our exercise. We built and deployed two applications (Quarkus and Spring Boot) directly from the source code to the target cluster. Thanks to OpenShift Serverless Functions, we didn't have to provide any additional configuration to deploy them as Knative services.
If there is no incoming traffic, Knative services are automatically scaled down to zero after 60 seconds (by default).
So, to generate some traffic, we can use the kn func emit command. It sends a CloudEvent message directly to the target application, in our case the caller-function (Quarkus). After receiving the input event, the pod with the caller-function starts. After startup, it sends a CloudEvent message to the Knative Broker. Finally, the event goes to the callme-function (Spring Boot), which also starts up, as shown below.
As you can see, OpenShift provides several simplifications when working with Knative. What's important is that you can now easily test all of this yourself using the developer sandbox version of OpenShift available online.