Contract Testing on Kubernetes with Microcks
This article will teach you how to design and perform contract testing on Kubernetes with Microcks. Microcks is a Kubernetes-native tool for API mocking and testing. It supports several specifications, including OpenAPI, AsyncAPI, GraphQL schemas, and gRPC/Protobuf schemas. In contrast to the other tools described on my blog before (Pact, Spring Cloud Contract), it performs provider-driven contract testing. Moreover, Microcks runs tests against real endpoints and verifies them against the defined schema. In our case, we will deploy the Quarkus microservices on Kubernetes and then perform some contract tests using Microcks. Let’s begin.
If you want to compare Microcks with other testing tools, you may be interested in my article about contract testing with Quarkus and Pact available here.
Source Code
If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions 🙂
Install Microcks on Kubernetes
We can install Microcks on Kubernetes with Helm chart or with the operator. Since I’m using OpenShift as a Kubernetes platform in this exercise, the simplest way is through the operator. Assuming we have already installed the OLM (Operator Lifecycle Manager) on Kubernetes we need to apply the following YAML manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: microcks
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: operatorgroup
  namespace: microcks
spec:
  targetNamespaces:
    - microcks
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-microcks
  namespace: microcks
spec:
  channel: stable
  name: microcks
  source: operatorhubio-catalog
  sourceNamespace: olm
If you want to easily manage operators on Kubernetes you need to install Operator Lifecycle Manager first. Here are the installation instructions: https://olm.operatorframework.io/docs/getting-started/. If you use OpenShift you don’t have to install anything.
With OpenShift we can install the Microcks operator using the UI dashboard. Once we do it, we need to create the object responsible for Microcks installation. In the OpenShift console, we need to click the “MicrocksInstall” link as shown below.
Microcks requires some additional components like Keycloak and MongoDB to be installed on the cluster. Here’s the YAML manifest responsible for the installation on Kubernetes.
apiVersion: microcks.github.io/v1alpha1
kind: MicrocksInstall
metadata:
  name: my-microcksinstall
  namespace: microcks
spec:
  name: my-microcksinstall
  version: 1.7.0
  microcks:
    replicas: 1
  postman:
    replicas: 1
  keycloak:
    install: true
    persistent: true
    volumeSize: 1Gi
  mongodb:
    install: true
    persistent: true
    volumeSize: 2Gi
By applying the YAML manifest with the MicrocksInstall object, we start the installation process. If it finishes successfully, we should see the following list of running pods:
$ kubectl get pod -n microcks
NAME READY STATUS RESTARTS AGE
microcks-ansible-operator-56ddbdcccf-6lrdl 1/1 Running 0 4h14m
my-microcksinstall-76b67f4f77-jdcj7 1/1 Running 0 4h13m
my-microcksinstall-keycloak-79c5f68f45-5nm69 1/1 Running 0 4h13m
my-microcksinstall-keycloak-postgresql-97f69c476-km6p2 1/1 Running 0 4h13m
my-microcksinstall-mongodb-846f7c7976-tl5jh 1/1 Running 0 4h13m
my-microcksinstall-postman-runtime-574d4bf7dc-xfct7 1/1 Running 0 4h13m
In order to prepare the contract testing process on Kubernetes with Microcks, we need to access its dashboard. It is exposed by the my-microcksinstall service on port 8080. I’m accessing it through an OpenShift Route object. If you use vanilla Kubernetes, you can enable port forwarding for that service (the kubectl port-forward command) or expose it through an Ingress object.
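For example, on vanilla Kubernetes you could either forward the port locally with `kubectl port-forward svc/my-microcksinstall 8080:8080 -n microcks`, or expose the dashboard through an Ingress. Here is a minimal sketch of such an Ingress; the host name is a hypothetical value you would replace with your own domain, and it assumes an ingress controller is already installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microcks
  namespace: microcks
spec:
  rules:
    - host: microcks.example.com   # hypothetical host, replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-microcksinstall   # the service created by the operator
                port:
                  number: 8080
```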
Here’s the Microcks UI dashboard. In the first step, we need to import our API documentation and samples by clicking the Importers button. Before we do it, we need to prepare the document in one of the supported specifications. In our case, it is the OpenAPI specification.
Create the Provider Side App
We create a simple app with Quarkus that exposes some REST endpoints. In order to access it, go to the employee-service directory in our sample Git repository. Here’s the controller class responsible for the endpoint implementations.
@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory
            .getLogger(EmployeeController.class);

    @Inject
    EmployeeRepository repository;

    @POST
    public Employee add(@Valid Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.add(employee);
    }

    @Path("/{id}")
    @GET
    public Employee findById(@PathParam("id") Long id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id);
    }

    @GET
    public Set<Employee> findAll() {
        LOGGER.info("Employee find");
        return repository.findAll();
    }

    @Path("/department/{departmentId}")
    @GET
    public Set<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Path("/organization/{organizationId}")
    @GET
    public Set<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
        LOGGER.info("Employee find: organizationId={}", organizationId);
        return repository.findByOrganization(organizationId);
    }
}
If we add the module responsible for generating and exposing OpenAPI documentation, we can easily access it under the /q/openapi path. In order to achieve that, include the following Maven dependency:
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>
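As a side note, it can be convenient to have the generated descriptor saved to disk during the build, so that you have a file ready to import into Microcks. The SmallRye OpenAPI extension supports this through a config property; a sketch, assuming the property name below is still current in your Quarkus version:

```properties
# store the generated OpenAPI document under target/openapi on each build
quarkus.smallrye-openapi.store-schema-directory=target/openapi
```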
Here’s the OpenAPI descriptor automatically generated by Quarkus for our employee-service app.
openapi: 3.0.3
info:
  title: employee-service API
  version: "1.2"
paths:
  /employees:
    get:
      tags:
        - Employee Controller
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                uniqueItems: true
                type: array
                items:
                  $ref: '#/components/schemas/Employee'
    post:
      tags:
        - Employee Controller
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Employee'
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
  /employees/department/{departmentId}:
    get:
      tags:
        - Employee Controller
      parameters:
        - name: departmentId
          in: path
          required: true
          schema:
            format: int64
            type: integer
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                uniqueItems: true
                type: array
                items:
                  $ref: '#/components/schemas/Employee'
  /employees/organization/{organizationId}:
    get:
      tags:
        - Employee Controller
      parameters:
        - name: organizationId
          in: path
          required: true
          schema:
            format: int64
            type: integer
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                uniqueItems: true
                type: array
                items:
                  $ref: '#/components/schemas/Employee'
  /employees/{id}:
    get:
      tags:
        - Employee Controller
      parameters:
        - name: id
          in: path
          required: true
          schema:
            format: int64
            type: integer
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Employee'
components:
  schemas:
    Employee:
      required:
        - organizationId
        - departmentId
        - name
        - position
      type: object
      properties:
        id:
          format: int64
          type: integer
        organizationId:
          format: int64
          type: integer
        departmentId:
          format: int64
          type: integer
        name:
          pattern: \S
          type: string
        age:
          format: int32
          maximum: 100
          minimum: 1
          type: integer
        position:
          pattern: \S
          type: string
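Reading the Employee schema back, the constraints are: organizationId, departmentId, name, and position are required; name and position must contain at least one non-whitespace character (pattern \S, which is what Quarkus generates for @NotBlank); and age, when present, must fall between 1 and 100. To make those rules concrete, here is a plain-Java sketch of them; this is my own illustrative class, not the actual class from the repository (which relies on Bean Validation annotations instead):

```java
// Plain-Java sketch of the constraints the generated schema encodes.
public class Employee {
    Long id;
    Long organizationId;   // required
    Long departmentId;     // required
    String name;           // required, must contain a non-whitespace char (pattern \S)
    String position;       // required, same pattern
    Integer age;           // optional, must be in [1, 100] when present

    boolean isValid() {
        return organizationId != null && departmentId != null
                && name != null && name.matches(".*\\S.*")
                && position != null && position.matches(".*\\S.*")
                && (age == null || (age >= 1 && age <= 100));
    }

    public static void main(String[] args) {
        Employee e = new Employee();
        e.organizationId = 1L;
        e.departmentId = 1L;
        e.name = "Test User 1";
        e.position = "developer";
        e.age = 20;
        System.out.println(e.isValid()); // prints "true"
    }
}
```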
We need to add an examples section to the responses and parameters sections. Let’s begin with the GET /employees endpoint, which returns all existing employees. The name of the example is not important; you can set any name you want. As the returned value, we set a JSON array with three Employee objects.
/employees:
  get:
    tags:
      - Employee Controller
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              uniqueItems: true
              type: array
              items:
                $ref: '#/components/schemas/Employee'
            examples:
              all_persons:
                value: |-
                  [
                    {"id": 1, "name": "Test User 1", "age": 20, "organizationId": 1, "departmentId": 1, "position": "developer"},
                    {"id": 2, "name": "Test User 2", "age": 30, "organizationId": 1, "departmentId": 2, "position": "architect"},
                    {"id": 3, "name": "Test User 3", "age": 40, "organizationId": 2, "departmentId": 3, "position": "developer"}
                  ]
For comparison, let’s take a look at the OpenAPI docs for the POST /employees endpoint. It returns a single JSON object as a response. We also have to add examples in the requestBody section.
/employees:
  post:
    tags:
      - Employee Controller
    requestBody:
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Employee'
          examples:
            add_employee:
              summary: Hire a new employee
              description: Should return 200
              value: '{"name": "Test User 4", "age": 50, "organizationId": 2, "departmentId": 3, "position": "tester"}'
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Employee'
            examples:
              add_employee:
                value: |-
                  {"id": 4, "name": "Test User 4", "age": 50, "organizationId": 2, "departmentId": 3, "position": "tester"}
Finally, we can proceed to the GET /employees/department/{departmentId} endpoint, which returns all the employees assigned to a particular department. This endpoint is called by another app, department-service. We need to add an example value for the path variable referring to the id of a department. The same as before, we also have to include an example JSON in the responses section.
/employees/department/{departmentId}:
  get:
    tags:
      - Employee Controller
    parameters:
      - name: departmentId
        in: path
        required: true
        schema:
          format: int64
          type: integer
        examples:
          find_by_dep_1:
            summary: Main id of department
            value: 1
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              uniqueItems: true
              type: array
              items:
                $ref: '#/components/schemas/Employee'
            examples:
              find_by_dep_1:
                value: |-
                  [
                    { "id": 1, "name": "Test User 1", "age": 20, "organizationId": 1, "departmentId": 1, "position": "developer" }
                  ]
Create the Consumer Side App
As I mentioned before, we have another app, department-service. It consumes the endpoint exposed by the employee-service. Here’s the implementation of the Quarkus declarative REST client responsible for calling the GET /employees/department/{departmentId} endpoint:
@ApplicationScoped
@Path("/employees")
@RegisterRestClient(configKey = "employee")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);
}
That client is then used by the department-service REST controller to return all the employees in a particular organization, grouped by department.
@Path("/departments")
@Produces(MediaType.APPLICATION_JSON)
public class DepartmentController {

    private static final Logger LOGGER = LoggerFactory
            .getLogger(DepartmentController.class);

    @Inject
    DepartmentRepository repository;
    @Inject
    @RestClient
    EmployeeClient employeeClient;

    // ... implementation of other endpoints

    @Path("/organization/{organizationId}/with-employees")
    @GET
    public Set<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
        LOGGER.info("Department find: organizationId={}", organizationId);
        Set<Department> departments = repository.findByOrganization(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }
}
Finally, we need to configure the base URI for the client in the Quarkus application.properties file. We can also set that address with the corresponding environment variable QUARKUS_REST_CLIENT_EMPLOYEE_URL.
quarkus.rest-client.employee.url = http://employee:8080
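The environment variable name above is not arbitrary: Quarkus follows the MicroProfile Config convention for mapping a property name to an environment variable, replacing every non-alphanumeric character with an underscore and uppercasing the result. A quick sketch of that rule (EnvVarName is my own illustrative helper, not a Quarkus API):

```java
// Illustrates the MicroProfile Config property-name to env-var mapping rule:
// each non-alphanumeric character becomes '_', then the result is uppercased.
public class EnvVarName {

    static String toEnvVar(String propertyName) {
        return propertyName.replaceAll("[^a-zA-Z0-9]", "_").toUpperCase();
    }

    public static void main(String[] args) {
        // prints "QUARKUS_REST_CLIENT_EMPLOYEE_URL"
        System.out.println(toEnvVar("quarkus.rest-client.employee.url"));
    }
}
```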
Test API with Microcks
Once we have implemented our apps, we can go back to the Microcks dashboard. In the Importers section, import your openapi.yml file.
Once you import the OpenAPI definition, you can go to the APIs | Services section. You should already see the record with your API description as shown below. The current version of our API is 1.2. The employee-service exposes five REST endpoints.
We can display the details of the service by clicking on the record or on the Details button. It displays a list of available endpoints with a number of examples declared in the OpenAPI manifest.
Let’s display the details of the GET /employees/department/{departmentId} endpoint. This endpoint is called by the department-service. Microcks creates the mock on Kubernetes for the purposes of contract testing. The mock is available inside Kubernetes, as well as outside it through an OpenShift Route. Now, our goal is to verify the compliance of our mock with the real service.
In order to verify the Microcks mock against the real service, we need to deploy our employee-service app on Kubernetes. We can easily do it using the Quarkus Kubernetes extension. Thanks to that, it is possible to build the app, build the image, and deploy it on Kubernetes or OpenShift with a single Maven command. Here’s the dedicated Maven profile for that. For OpenShift, we need to include the single dependency quarkus-openshift and set the property quarkus.kubernetes.deploy to true.
<profile>
  <id>openshift</id>
  <activation>
    <property>
      <name>openshift</name>
    </property>
  </activation>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-openshift</artifactId>
    </dependency>
  </dependencies>
  <properties>
    <quarkus.kubernetes.deploy>true</quarkus.kubernetes.deploy>
    <quarkus.profile>openshift</quarkus.profile>
    <quarkus.container-image.builder>openshift</quarkus.container-image.builder>
  </properties>
</profile>
Then, we just need to activate the openshift profile during the Maven build. Go to the employee-service directory and execute the following command:
$ mvn clean package -Popenshift
The internal address of our app is defined by the name of the Kubernetes service:
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
employee-service ClusterIP 172.30.141.85 <none> 80/TCP 27h
Finally, we can test our samples against the real endpoints. We need to set the address of our service in the Test Endpoint field and choose OPEN_API_SCHEMA as the Runner. Then, we execute the test by clicking the Launch test button.
Viewing and Analysing Test Results
Once we run the test we can display the results with the Microcks dashboard. By default, it calls all the endpoints defined in the OpenAPI manifest. We can override this behavior and choose a list of endpoints that should be tested.
We can verify the details of each test. For example, we can display a response from our employee-service running on the cluster for the GET /employees/department/{departmentId} endpoint.
We can automate the testing process with Microcks using its CLI. You can read more about it in the docs. In order to execute the test in a similar way as we did before via the Microcks dashboard, we need to use the test command. It requires several input arguments. We need to pass the name and version of our API, the URI of the test endpoint, and the type of runner. By default, Microcks creates a client in Keycloak with the client id microcks-serviceaccount. You need to log in to your Keycloak instance and copy the value of the client secret. As the microcksURL argument, we need to pass the external address of Microcks with the /api path. If you run the CLI inside the Kubernetes cluster, you can use the internal address http://my-microcksinstall.microcks.svc.cluster.local:8080/api instead.
$ microcks-cli test 'employee-service API:1.2' \
http://employee-service.demo-microcks.svc.cluster.local \
OPEN_API_SCHEMA \
--microcksURL=https://my-microcksinstall-microcks.apps.pvxvtsz4.eastus.aroapp.io/api \
--keycloakClientId microcks-serviceaccount \
--keycloakClientSecret ab54d329-e435-41ae-a900-ec6b3fe15c54
Once we run a test, Microcks calculates the conformance score after it finishes. It is a kind of grade that estimates how well your API contract is actually covered by the samples you’ve attached to it. It is computed based on the number of samples on each operation and the complexity of the operation’s dispatching rules. As you can see, there is still room for improvement in my case 🙂
Verify Contract on the Consumer Side
Let’s create a simple test that verifies the contract with employee-service in the department-service. It will call the department-service GET /departments/organization/{organizationId}/with-employees endpoint, which interacts with the employee-service. We won’t mock the REST client.
@QuarkusTest
public class DepartmentExternalContractTests {

    @Test
    void findByOrganizationWithEmployees() {
        when().get("/departments/organization/{organizationId}/with-employees", 1L).then()
                .statusCode(200)
                .body("size()", notNullValue());
    }
}
Instead of mocking the client directly in the test, we use the mock endpoint exposed by Microcks. The internal address of this mock for the employee-service is http://my-microcksinstall.microcks.svc.cluster.local:8080/rest/employee-service+API/1.2/employees. We can leverage a Quarkus environment variable to set the address for the client. As I mentioned before, we can use the QUARKUS_REST_CLIENT_EMPLOYEE_URL environment variable. Here’s the Tekton Task that runs our Maven build with that variable set to the mock address.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-contract-tests
spec:
  params:
    - name: MAVEN_IMAGE
      type: string
      description: Maven base image
      default: >-
        image-registry.openshift-image-registry.svc:5000/openshift/java:latest
    - name: CONTEXT_DIR
      type: string
      description: >-
        The context directory within the repository for sources on which we want
        to execute maven goals.
      default: .
    - name: EMPLOYEE_URL
      default: http://my-microcksinstall.microcks.svc.cluster.local:8080/rest/employee-service+API/1.2
  steps:
    - name: mvn-command
      image: $(params.MAVEN_IMAGE)
      env:
        - name: QUARKUS_REST_CLIENT_EMPLOYEE_URL
          value: $(params.EMPLOYEE_URL)
      resources: {}
      script: |
        #!/usr/bin/env bash
        /usr/bin/mvn clean package -Pmicrocks
      workingDir: $(workspaces.source.path)/$(params.CONTEXT_DIR)
  workspaces:
    - name: source
As you can see in the code above, we activate the microcks profile during the Maven build. That’s because we want to run the test defined in the DepartmentExternalContractTests class only if the build is performed on Kubernetes.
<profile>
  <id>microcks</id>
  <activation>
    <property>
      <name>microcks</name>
    </property>
  </activation>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${surefire-plugin.version}</version>
        <configuration>
          <includes>
            <include>**/DepartmentExternalContractTests.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
Finally, we can define and start a pipeline that clones our Git repository and runs the test against the endpoint mocked by Microcks. Here’s the definition of our pipeline.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: microcks-pipeline
spec:
  tasks:
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/piomin/sample-quarkus-microservices.git'
        - name: revision
          value: master
        - name: submodules
          value: 'true'
        - name: depth
          value: '1'
        - name: sslVerify
          value: 'false'
        - name: crtFileName
          value: ca-bundle.crt
        - name: deleteExisting
          value: 'true'
        - name: verbose
          value: 'true'
        - name: gitInitImage
          value: >-
            registry.redhat.io/openshift-pipelines/pipelines-git-init-rhel8@sha256:a538c423e7a11aae6ae582a411fdb090936458075f99af4ce5add038bb6983e8
        - name: userHome
          value: /tekton/home
      taskRef:
        kind: ClusterTask
        name: git-clone
      workspaces:
        - name: output
          workspace: source-dir
    - name: maven-contract-tests
      params:
        - name: MAVEN_IMAGE
          value: >-
            image-registry.openshift-image-registry.svc:5000/openshift/java:latest
        - name: CONTEXT_DIR
          value: department-service
        - name: EMPLOYEE_URL
          value: >-
            http://my-microcksinstall.microcks.svc.cluster.local:8080/rest/employee-service+API/1.2
      runAfter:
        - git-clone
      taskRef:
        kind: Task
        name: maven-contract-tests
      workspaces:
        - name: source
          workspace: source-dir
  workspaces:
    - name: source-dir
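To actually kick the pipeline off, we create a PipelineRun referencing it. Here is a minimal sketch, assuming a pre-created PVC for the shared workspace (source-pvc is a hypothetical name):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: microcks-pipeline-run-
spec:
  pipelineRef:
    name: microcks-pipeline
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: source-pvc   # hypothetical PVC holding the cloned sources
```

Note that we apply it with `kubectl create -f` rather than `kubectl apply -f`, since generateName produces a fresh run name each time.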
Final Thoughts
In this article, I described just one of several possible scenarios for contract testing on Kubernetes with Microcks. You can also use it to test interactions, for example with Kafka or with GraphQL endpoints. Microcks can be a central point for testing contracts of apps running on Kubernetes. It is able to verify contracts by calling real endpoints exposed by the apps. It provides a UI for visualizing and running tests, as well as a CLI for automation.