Running Jenkins Server with Configuration-As-Code

A few days ago I came across a newly created Jenkins plugin called Configuration as Code (JcasC). This plugin allows you to define Jenkins configuration in a format that is very popular these days – YAML. It is interesting that such a plugin has not been created before, but better late than never. Of course, we could have used some other Jenkins plugins for that, like the Job DSL Plugin, but it is based on the Groovy language.
If you have any experience with Jenkins, you probably know how many plugins and other configuration settings it requires in order to work as the main CI server in your organization. With the JcasC plugin, you can store such a configuration in human-readable, declarative YAML files. In this article I'm going to show you how to create and run Jenkins with configuration as code, letting you build a Java application using tools like declarative pipelines, Git and Maven. I'll also show how to manage sensitive data using a Vault server.

1. Using Vault server

We will begin by running a Vault server on the local machine. The easiest way to do that is with the Docker image. By default, the official Vault image starts in development mode. The following command runs an in-memory server, which listens on 0.0.0.0:8200.

docker run -d --name=vault --cap-add=IPC_LOCK -p 8200:8200 vault:0.9.0

There is one thing that should be clarified here. I do not run the newest version of Vault, because it forces us to call endpoints from version 2 of the KV (Key-Value Secrets Engine) HTTP API, which is used for manipulating secrets. That version, in turn, is not supported by the JcasC plugin, which can communicate only with version 1 of the KV HTTP API (for example, a secret lives under /v1/secret/jenkins in KV v1, but under /v1/secret/data/jenkins in KV v2). This does not apply to older versions of Vault, for example 0.9.0, which expose KV in version 1. After running the container we should obtain the token used for authentication against Vault from the console logs. To do that, just run the command docker logs vault and find the following fragment in the logs.

[Image: vault-token – Vault startup logs containing the root token]
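
If you prefer not to scroll through the logs manually, a simple grep does the job. This is just an illustration – the token below is the example value used throughout this article:

$ docker logs vault 2>&1 | grep 'Root Token'
Root Token: 5bcab13b-6cf5-2f58-8b37-34dca31bebde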
Now, using this authentication token, we may add the credentials required for accessing the Jenkins web dashboard and our account on the Git repository host. The Jenkins account will be identified by the rootPassword key, and the GitHub account by the githubPassword key.

$ curl -H "X-Vault-Token: 5bcab13b-6cf5-2f58-8b37-34dca31bebde" --request POST -d '{"rootPassword":"your_root_password", "githubPassword":"your_github_password"}' http://192.168.99.100:8200/v1/secret/jenkins

To check that the parameters have been saved in Vault, just call the GET method with the same context path.

$ curl -H "X-Vault-Token: 5bcab13b-6cf5-2f58-8b37-34dca31bebde" http://192.168.99.100:8200/v1/secret/jenkins

2. Building Jenkins image

Just as with the Vault server, we will run Jenkins in a Docker container. However, we need to add some configuration settings to the official Jenkins image before running it. The JcasC plugin requires an environment variable that points to the location of the current YAML configuration files. This variable can point to the following:

  • Path to a folder containing a set of config files
  • A full path to a single file
  • A URL pointing to a file served on the web, or for example your internal configuration server

The next step is to set some configuration settings required for establishing a connection to the Vault server. We have to pass the authentication token, the path of the created key and the URL of the running server. All these configuration settings are set as environment variables and may be overridden on container startup. The same rule applies to the location of the JcasC configuration file. The following Dockerfile definition extends the Jenkins base image and adds all the parameters required for running it with the JcasC plugin and with secrets taken from Vault.

FROM jenkins/jenkins:lts
ENV CASC_JENKINS_CONFIG="/var/jenkins_home/jenkins.yml"
ENV CASC_VAULT_TOKEN=5bcab13b-6cf5-2f58-8b37-34dca31bebde
ENV CASC_VAULT_PATH=/secret/jenkins
ENV CASC_VAULT_URL=http://192.168.99.100:8200
COPY jenkins.yml ${CASC_JENKINS_CONFIG}
USER jenkins
RUN /usr/local/bin/install-plugins.sh configuration-as-code configuration-as-code-support git workflow-cps-global-lib

Now, let's build the Docker image using the Dockerfile shown above. Alternatively, you can just pull the image stored in my Docker Hub repository.

$ docker build -t piomin/jenkins-casc:1.0 .

Finally, you can run the container based on the built image with the following command. Of course, before that we need to prepare the YAML configuration file for the JcasC plugin.

$ docker run -d --name jenkins-casc -p 8080:8080 -p 50000:50000 piomin/jenkins-casc:1.0

3. Preparing configuration

The JcasC plugin provides many configuration settings that allow you to configure the various components of your Jenkins master installation. However, I will limit myself to defining the basic configuration used for building my sample Java application. We need the following Jenkins components to be configured after startup (the complete YAML file follows the list):

  1. A set of Jenkins plugins that allow us to create a declarative pipeline that checks out source code from a Git repository, builds it using Maven, and records JUnit test results
  2. A basic security realm containing credentials for a single Jenkins user. The user password is read from the rootPassword property stored on the Vault server
  3. The JDK location directory
  4. Maven installation settings – Maven is not installed by default in Jenkins, so we have to set the required version and tool name
  5. Credentials for accessing the Git repository containing the application source code

plugins: # (1)
  required:
    git: 3.9.1
    pipeline-model-definition: 1.3.2
    pipeline-stage-step: 2.3
    pipeline-maven: 3.5.12
    workflow-aggregator: 2.5
    junit: 1.25
  sites:
  - id: "default"
    url: "https://updates.jenkins.io/update-center.json"
jenkins:
  agentProtocols:
  - "JNLP4-connect"
  - "Ping"
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: false
  disableRememberMe: false
  mode: NORMAL
  numExecutors: 2
  primaryView:
    all:
      name: "all"
  quietPeriod: 5
  scmCheckoutRetryCount: 0
  securityRealm: # (2)
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
      - id: "piomin"
        password: ${rootPassword}
  slaveAgentPort: 50000
  views:
  - all:
      name: "all"
tool:
  git:
    installations:
    - home: "git"
      name: "Default"
  jdk: # (3)
    installations:
    - home: "/docker-java-home"
      name: "jdk"
  maven: # (4)
    installations:
    - name: "maven"
      properties:
      - installSource:
          installers:
          - maven:
              id: "3.5.4"
credentials: # (5)
  system:
    domainCredentials:
      - domain:
          name: "github.com"
          description: "GitHub"
        credentials:
          - usernamePassword:
              scope: SYSTEM
              id: github-piomin
              username: piomin
              password: ${githubPassword}

4. Exporting configuration

After running Jenkins with the JcasC plugin installed, you can easily export the current configuration to a file. First, navigate to the Manage Jenkins -> Configuration as Code section.

[Image: jcasc-1 – Configuration as Code section under Manage Jenkins]

Then, after clicking the Export Configuration button, a YAML file with the Jenkins configuration will be downloaded to your machine. But as the comment visible below warns, you cannot fully rely on that file, because this feature is still not stable. For my configuration it did not export the Maven tool settings or the list of Jenkins plugins. However, the JcasC plugin is still under active development, so I hope this feature will work correctly soon.

[Image: jcasc-2 – Configuration as Code export view with stability warning]

5. Running sample pipeline

Finally, you can create and run a pipeline for your sample application. Here's the definition of my pipeline.

pipeline {
    agent any
    tools {
        maven 'maven'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Test') {
            steps {
                dir('example-service') {
                    sh 'mvn clean test'
                }
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean install'
                }
            }
        }
    }
    post {
        always {
            junit '**/target/reports/**/*.xml'
        }
    }
}

Summary

The idea behind the Jenkins Configuration as Code Plugin is a step in the right direction. I'll be following the development of this plugin with great interest. There are still some features that need to be added to make it more useful, and some bugs that need to be fixed. But once that happens, I will definitely consider using this plugin for maintaining the Jenkins master server inside my organization.

Spring Boot Autoscaler

One of the more important reasons for deciding to use tools like Kubernetes, Pivotal Cloud Foundry or HashiCorp's Nomad is the availability of application auto-scaling. Of course, those tools provide many other useful mechanisms, but auto-scaling is something we can also implement ourselves. At first glance it seems difficult, but assuming we use Spring Boot as the framework for building our applications and Jenkins as the CI server, it really does not require a lot of work. Today, I'm going to show you how to implement such a solution using the following frameworks/tools:

  • Spring Boot
  • Spring Boot Actuator
  • Spring Cloud Netflix Eureka
  • Jenkins CI

How does it work?

Every Spring Boot application that includes the Spring Boot Actuator library can expose metrics under the /actuator/metrics endpoint. There are many valuable metrics that give you detailed information about the application status. Some of them may be especially important when talking about autoscaling: JVM and CPU metrics, the number of running threads and the number of incoming HTTP requests. A dedicated Jenkins pipeline is responsible for monitoring application metrics by polling the /actuator/metrics endpoint periodically. If any monitored metric is below or above the target range, the pipeline starts a new instance or shuts down a running instance of the application using another Actuator endpoint, /actuator/shutdown. Before that, it needs to fetch the current list of running instances of the application in order to get the address of the instance selected for shutdown, or the address of the server with the smallest number of running instances for a new instance.
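
For illustration, this is how a single metric can be read from Actuator – the host, port and metric name below are examples, and the JSON response is trimmed:

$ curl http://localhost:8080/actuator/metrics/jvm.threads.live
{"name":"jvm.threads.live","measurements":[{"statistic":"VALUE","value":49.0}], ...}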

[Image: spring-autoscaler-1 – architecture of the autoscaling system]

After discussing the architecture of our system, we may proceed to development. Our application needs to meet some requirements: it has to expose metrics and an endpoint for graceful shutdown, it needs to register in Eureka after startup and deregister on shutdown, and it should also dynamically allocate its running port randomly from the pool of free ports. Thanks to Spring Boot we may easily implement all these mechanisms in five minutes 🙂

Dynamic port allocation

Since it is possible to run many instances of the application on a single machine, we have to guarantee that there won't be conflicts between port numbers. Fortunately, Spring Boot provides such a mechanism. We just need to set the port number to 0 inside the application.yml file using the server.port property. Because our application registers itself in Eureka, it also needs to send a unique instanceId, which by default is generated as a concatenation of the fields spring.cloud.client.hostname, spring.application.name and server.port.
Here's the current configuration of our sample application. I have changed the template of the instanceId field by replacing the port number with a randomly generated number.

spring:
  application:
    name: example-service
server:
  port: ${PORT:0}
eureka:
  instance:
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${random.int[1,999999]}

Enabling Actuator metrics

To enable Spring Boot Actuator we need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

We also have to enable exposure of the Actuator endpoints over HTTP by setting the property management.endpoints.web.exposure.include to '*'. Now, the list of all available metric names is available under the context path /actuator/metrics, while detailed information for each metric is available under /actuator/metrics/{metricName}.
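
For completeness, here's the corresponding fragment of application.yml (standard Spring Boot 2 property layout):

management:
  endpoints:
    web:
      exposure:
        include: '*'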

Graceful shutdown

Besides metrics, Spring Boot Actuator also provides an endpoint for shutting down the application. However, in contrast to the other endpoints, it is not available by default. We have to set the property management.endpoint.shutdown.enabled to true. After that we will be able to stop our application by sending a POST request to the /actuator/shutdown endpoint.
This method of stopping the application guarantees that the service will unregister itself from the Eureka server before shutdown.
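
Here's the relevant application.yml fragment enabling the endpoint:

management:
  endpoint:
    shutdown:
      enabled: true

An instance can then be stopped with a call like curl -X POST http://localhost:8080/actuator/shutdown (host and port are examples).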

Enabling Eureka discovery

Eureka is the most popular discovery server used for building microservices-based architectures with Spring Cloud. So, if you already have microservices and want to provide auto-scaling mechanisms for them, Eureka is a natural choice. It contains the IP address and port number of every registered application instance. To enable Eureka on the client side, you just need to include the following dependency in your pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

As I have mentioned before, we also have to guarantee the uniqueness of the instanceId sent to the Eureka server by the client-side application. This has been described in the step "Dynamic port allocation".
The next step is to create an application with an embedded Eureka server. To achieve that, we first need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

The main class should be annotated with @EnableEurekaServer.

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApp {

    public static void main(String[] args) {
        new SpringApplicationBuilder(DiscoveryApp.class).run(args);
    }

}

Client-side applications by default try to connect to the Eureka server on localhost, port 8761. We only need a single, standalone Eureka node, so we will disable registration and fetching the list of services from other server instances.

spring:
  application:
    name: discovery-service
server:
  port: ${PORT:8761}
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

The tests of the sample autoscaling system will be performed using Docker containers, so we need to prepare and build an image with the Eureka server. Here's the Dockerfile with the image definition. It can be built using the following command: docker build -t piomin/discovery-server:2.0 .

FROM openjdk:8-jre-alpine
ENV APP_FILE discovery-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8761
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Building Jenkins pipeline for autoscaling

The first step is to prepare the Jenkins pipeline responsible for autoscaling. We will create a Jenkins Declarative Pipeline, which runs every minute. Periodic execution may be configured with the triggers directive, which defines the automated ways in which the pipeline should be re-triggered. Our pipeline will communicate with the Eureka server and with the metrics endpoints exposed by every microservice using Spring Boot Actuator.
The test service name is EXAMPLE-SERVICE, which is equal to the upper-cased value of the spring.application.name property defined inside the application.yml file. The monitored metric is the number of HTTP listener threads running on the Tomcat container. These threads are responsible for processing incoming HTTP requests.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
    }
    stages { ... }
}

Integrating Jenkins pipeline with Eureka

The first stage of our pipeline is responsible for fetching the list of services registered in the service discovery server. Eureka exposes an HTTP API with several endpoints. One of them is GET /eureka/apps/{serviceName}, which returns a list of all instances of the application with the given name. We save the number of running instances and the URL of the metrics endpoint of every single instance. These values will be accessed in the next stages of the pipeline.
Here's the fragment of the pipeline responsible for fetching the list of running instances of the application. The stage is named Calculate. We use the HTTP Request Plugin for HTTP connections.

stage('Calculate') {
	steps {
		script {
			def response = httpRequest "http://192.168.99.100:8761/eureka/apps/${env.SERVICE_NAME}"
			def app = printXml(response.content)
			def index = 0
			env["INSTANCE_COUNT"] = app.instance.size()
			app.instance.each {
				if (it.status == 'UP') {
					def address = "http://${it.ipAddr}:${it.port}"
					env["INSTANCE_${index++}"] = address 
				}
			}
		}
	}
}

@NonCPS
def printXml(String text) {
    return new XmlSlurper(false, false).parseText(text)
}

Here's a sample response from the Eureka API for our microservice. The response content type is XML.

[Image: spring-autoscaler-2 – sample XML response from the Eureka API]
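
In case the screenshot is unavailable, this is roughly the shape of that XML. The field values below are just examples, but the element names match those used in the Calculate stage:

<application>
  <name>EXAMPLE-SERVICE</name>
  <instance>
    <instanceId>minishift:example-service:651023</instanceId>
    <ipAddr>192.168.99.100</ipAddr>
    <port enabled="true">37231</port>
    <status>UP</status>
  </instance>
</application>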

Integrating Jenkins pipeline with Spring Boot Actuator metrics

Spring Boot Actuator exposes an endpoint with metrics, which allows you to find a metric by name and optionally by tag. In the fragment of the pipeline visible below, I'm trying to find an instance with the metric below or above the defined threshold. If there is such an instance, we stop the loop and proceed to the next stage, which performs the scaling down or up. The IP addresses of the running applications are taken from the pipeline environment variables prefixed with INSTANCE_, which were saved in the previous stage.

stage('Metrics') {
	steps {
		script {
			def count = env.INSTANCE_COUNT as Integer
			for(def i=0; i<count; i++) {
				def instance = env["INSTANCE_${i}"]
				if (instance == null)
					break
				def url = instance + env.METRICS_ENDPOINT
				def response = httpRequest url
				def objRes = printJson(response.content)
				env.SCALE_TYPE = returnScaleType(objRes)
				if (env.SCALE_TYPE != "NONE")
					break
			}
		}
	}
}

@NonCPS
def printJson(String text) {
    // JsonSlurper is not imported by default – use the fully qualified name
    return new groovy.json.JsonSlurper().parseText(text)
}

def returnScaleType(objRes) {
    def value = objRes.measurements[0].value
    if (value.toInteger() > 100)
        return "UP"
    else if (value.toInteger() < 20)
        return "DOWN"
    else
        return "NONE"
}

Shutdown application instance

In the last stage of our pipeline we will shut down a running instance or start a new one, depending on the result saved in the previous stage. Shutdown may easily be performed by calling the Spring Boot Actuator endpoint. In the following fragment of the pipeline we pick the instance returned first by Eureka and send a POST request to its IP address.
If we need to scale up our application, we call another pipeline responsible for building the fat JAR and launching it on our machine.

stage('Scaling') {
	steps {
		script {
			if (env.SCALE_TYPE == 'DOWN') {
				def ip = env["INSTANCE_0"] + env.SHUTDOWN_ENDPOINT
				httpRequest url:ip, contentType:'APPLICATION_JSON', httpMode:'POST'
			} else if (env.SCALE_TYPE == 'UP') {
				build job:'spring-boot-run-pipeline'
			}
			currentBuild.description = env.SCALE_TYPE
		}
	}
}

Here's the full definition of the spring-boot-run-pipeline pipeline, responsible for starting a new instance of the application. It clones the repository with the application source code, builds the binaries using Maven commands, and finally runs the application using the java -jar command, passing the address of the Eureka server as a parameter.

pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Run') {
            steps {
                dir('example-service') {
                    sh 'nohup java -jar -DEUREKA_URL=http://192.168.99.100:8761/eureka target/example-service-1.0-SNAPSHOT.jar 1>/dev/null 2>logs/runlog &'
                }
            }
        }
    }
}

Remote extension

The algorithm discussed in the previous sections will work fine only for microservices launched on a single machine. If we would like to extend it to work with many machines, we have to modify our architecture as shown below. Each machine has a Jenkins agent running and communicating with the Jenkins master. If we want to start a new instance of a microservice on a selected machine, we have to run the pipeline using the agent running on that machine. This agent is responsible only for building the application from source code and launching it on the target machine. Shutting down an instance is still performed just by calling the HTTP endpoint.

[Image: spring-autoscaler-3 – architecture extended to multiple machines with Jenkins agents]

You can find more information about running Jenkins agents and connecting them to the Jenkins master via the JNLP protocol in my article Jenkins nodes on Docker containers. Assuming we have successfully launched some agents on the target machines, we need to parametrize our pipelines in order to be able to select the agent (and therefore the target machine) dynamically.
When scaling up our application we have to pass the agent label to the downstream pipeline.

build job:'spring-boot-run-pipeline', parameters:[string(name: 'agent', value:"slave-1")]

The called pipeline will be run by the agent labelled with the given parameter.

pipeline {
    agent {
        label "${params.agent}"
    }
    stages { ... }
}

If we have more than one agent connected to the master node, we can map their addresses to labels. Thanks to that, you are able to map the IP address of a microservice instance fetched from Eureka to the target machine with a Jenkins agent, as shown in the sketch after the pipeline below.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
        // Dots in environment variable names may be rejected by some Jenkins
        // versions – if so, encode the address, e.g. AGENT_192_168_99_102.
        AGENT_192.168.99.102 = "slave-1"
        AGENT_192.168.99.103 = "slave-2"
    }
    stages { ... }
}
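
A minimal sketch of how the scale-up step could then pick the right agent label, assuming the INSTANCE_0 variable saved by the Calculate stage holds a URL like http://192.168.99.102:37231 (the parsing below is illustrative):

script {
    // Extract the bare IP address from the instance URL saved by the Calculate stage
    def ip = env["INSTANCE_0"].replace("http://", "").split(":")[0]
    // Look up the Jenkins agent label mapped to that machine
    def label = env["AGENT_${ip}"]
    build job: 'spring-boot-run-pipeline', parameters: [string(name: 'agent', value: label)]
}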

Summary

In this article I have demonstrated how to use Spring Boot Actuator metrics to scale your Spring Boot application up and down. Using the basic mechanisms provided by Spring Boot together with Spring Cloud Netflix Eureka and Jenkins, you can implement auto-scaling for your applications without any additional third-party tools. The case described in this article assumes using Jenkins agents on remote machines to launch new application instances there, but you may as well use a tool like Ansible for that. If you decide to run Ansible playbooks from Jenkins, you will not have to launch Jenkins agents on the remote machines. The source code with the sample applications is available on GitHub: https://github.com/piomin/sample-spring-boot-autoscaler.git.

Integration tests on OpenShift using Arquillian Cube and Istio

Building integration tests for applications deployed on Kubernetes/OpenShift platforms seems to be quite a big challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not that complicated. The Kubernetes extension, being a part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests, applying all the Kubernetes resources required to set up your environment and, once everything is ready, running the defined integration tests.
One piece of very good news about Arquillian Cube is that it supports the Istio framework. You can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers, and fault injection. It allows you to test unexpected situations in network communication between microservices, like server errors or timeouts.
If you would like to run some tests using Istio resources on Minishift, you should first install Istio on your platform. To do that you need to change some privileges for your OpenShift user. Let's do that.

1. Enabling Istio on Minishift

Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user, we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.

$ minishift addons enable admin-user

After that you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also add the cluster-admin role to other users, for example admin. Let's do that.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

Now, let's create a new project dedicated to Istio and then add some required privileges.

$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject

Finally, we may proceed to the installation of the Istio components. I downloaded the newest version of Istio available at the time – 1.0.1. The installation file is available in the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.

$ oc apply -f install/kubernetes/istio-demo.yaml

2. Enabling Istio for Arquillian Cube

I have already described how to use Arquillian Cube to run tests on OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article, we need to include the dependency responsible for enabling Istio features.

<dependency>
	<groupId>org.arquillian.cube</groupId>
	<artifactId>arquillian-cube-istio-kubernetes</artifactId>
	<version>1.17.1</version>
	<scope>test</scope>
</dependency>

Now, we can use the @IstioResource annotation to apply Istio resources to the OpenShift cluster, or the IstioAssistant bean to access additional methods for adding and removing resources programmatically or polling the availability of URLs.
Let's take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard test created for running on an OpenShift instance, I have added the Istio resource file customer-to-account-route.yaml. Then I invoke the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably noticed, the last line of that test method contains an assertion on an empty list of accounts: Assert.assertTrue(c.getAccounts().isEmpty()). Why? We will simulate a timeout in the communication between customer-service and account-service using Istio rules.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@Templates(templates = {
        @Template(url = "classpath:account-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml")
})
@RunWith(ArquillianConditionalRunner.class)
@IstioResource("classpath:customer-to-account-route.yaml")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class IstioRuleTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class);
    private static String id;

    @ArquillianResource
    private IstioAssistant istioAssistant;

    @RouteURL(value = "customer-route", path = "/customer")
    private URL customerUrl;
    @RouteURL(value = "account-route", path = "/account")
    private URL accountUrl;

    @Test
    public void test1CustomerRoute() {
        LOGGER.info("URL: {}", customerUrl);
        istioAssistant.await(customerUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url(customerUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            this.id = c.getId();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test2AccountRoute() {
        LOGGER.info("Route URL: {}", accountUrl);
        istioAssistant.await(accountUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}");
        Request request = new Request.Builder().url(accountUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test3GetCustomerWithAccounts() {
        String url = customerUrl + "/" + id;
        LOGGER.info("Calling URL: {}", customerUrl);
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url(url).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            String json = response.body().string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            Assert.assertTrue(c.getAccounts().isEmpty());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

3. Creating Istio rules

One of the interesting features provided by Istio is the ability to inject faults into route rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule's corresponding request destination. The faults can be either delays or aborts, and we can define a percentage level of errors using the percent field for both types of fault. In the following Istio resource I have defined a 2 second delay for every single request sent to account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

Before running the test we should also modify the OpenShift deployment templates of our sample applications. We need to inject the Istio sidecar resources into the pod definitions using the istioctl kube-inject command, as shown below.

$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml

Finally, we may rewrite the generated files into OpenShift templates. Here's the fragment of the OpenShift template containing the DeploymentConfig definition for account-service.

kind: Template
apiVersion: v1
metadata:
  name: account-template
objects:
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: account-service
      labels:
        app: account-service
        name: account-service
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
          name: account-service
          labels:
            app: account-service
            name: account-service
            version: v1
        spec:
          containers:
          - env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            image: piomin/account-vertx-service
            name: account-vertx-service
            ports:
            - containerPort: 8095
            resources: {}
          - args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - account-service
            - --drainDuration
            - 45s
            - --parentShutdownDuration
            - 1m0s
            - --discoveryAddress
            - istio-pilot.istio-system:15007
            - --discoveryRefreshDelay
            - 1s
            - --zipkinAddress
            - zipkin.istio-system:9411
            - --connectTimeout
            - 10s
            - --statsdUdpAddress
            - istio-statsd-prom-bridge.istio-system:9125
            - --proxyAdminPort
            - "15000"
            - --controlPlaneAuthPolicy
            - NONE
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ISTIO_META_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ISTIO_META_INTERCEPTION_MODE
              value: REDIRECT
            image: gcr.io/istio-release/proxyv2:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-proxy
            resources:
              requests:
                cpu: 10m
            securityContext:
              readOnlyRootFilesystem: true
              runAsUser: 1337
            volumeMounts:
            - mountPath: /etc/istio/proxy
              name: istio-envoy
            - mountPath: /etc/certs/
              name: istio-certs
              readOnly: true
          initContainers:
          - args:
            - -p
            - "15001"
            - -u
            - "1337"
            - -m
            - REDIRECT
            - -i
            - '*'
            - -x
            - ""
            - -b
            - 8095,
            - -d
            - ""
            image: gcr.io/istio-release/proxy_init:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
          volumes:
          - emptyDir:
              medium: Memory
            name: istio-envoy
          - name: istio-certs
            secret:
              optional: true
              secretName: istio.default

4. Building applications

The sample applications are implemented using the Eclipse Vert.x framework. They use a Mongo database for storing data. The connection settings are injected into the pods using Kubernetes Secrets.

public class MongoVerticle extends AbstractVerticle {

	private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class);

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			LOGGER.info("Connecting {} using {}/{}", db, user, password);
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final CustomerRepository service = new CustomerRepositoryImpl(client);
			ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service");	
		});
	}
}

MongoDB should be started on OpenShift before any application that connects to it. To achieve that, we should set the Mongo deployment resource in the Arquillian configuration file as the env.config.resource.name field.
The configuration of Arquillian Cube is visible below. We use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the user admin. You can obtain it using the command oc whoami -t after logging in to the OpenShift cluster.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">myproject</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property>
	<property name="env.config.resource.name">mongo-deployment.yaml</property>
</extension>

The communication between customer-service and account-service is realized by the Vert.x WebClient. We will set the read timeout for the client to 1 second. Because Istio injects a 2 second delay into the route, the communication is going to end with a timeout.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> {
			if (res2.succeeded()) {
				LOGGER.info("Response: {}", res2.result().bodyAsString());
				List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
				resultHandler.handle(Future.succeededFuture(accounts));
			} else {
				resultHandler.handle(Future.succeededFuture(new ArrayList()));
			}
		});
		return this;
	}
}

The full code of the sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.

5. Running tests

You can run the tests during the Maven build or just from your IDE. The test1CustomerRoute test is executed first. It adds a new customer and saves the generated id for the two subsequent tests.

[Image: arquillian-istio-3 – test1CustomerRoute execution]

The next test is test2AccountRoute. It adds an account for the customer created during the previous test.

[Image: arquillian-istio-2 – test2AccountRoute execution]

Finally, the test responsible for verifying the communication between microservices is run. It verifies that the list of accounts is empty, which is the result of the timeout in the communication with account-service.

[Image: arquillian-istio-1 – test3GetCustomerWithAccounts execution]

Testing Microservices: Tools and Frameworks

There are some key challenges that we face around testing microservices architecture. Selecting the right tools is one of the elements that helps us deal with the issues related to those challenges. First, let's identify the most important elements involved in the process of microservices testing. Here are some of them:

  • Teams coordination – with many independent teams managing their own microservices, it becomes very challenging to coordinate the overall process of software development and testing
  • Complexity – there are many microservices that communicate with each other. We need to ensure that every one of them works properly and is resistant to slow responses or failures from other microservices
  • Performance – since there are many independent services, it is important to test the whole architecture under traffic close to production levels

Let's discuss some interesting frameworks that may help you test a microservices-based architecture.

Components tests with Hoverfly

Hoverfly's simulation mode may be especially useful for building component tests. During component tests we verify the whole microservice without communicating over the network with other microservices or external datastores. The following picture shows how such a test is performed for our sample microservice.

[Image: testing-microservices-1 – component test with Hoverfly simulating dependent services]

Hoverfly provides a simple DSL for creating simulations and a JUnit integration for using it within JUnit tests. It may be orchestrated via a JUnit @Rule. We simulate two services and then override the Ribbon properties to resolve the addresses of those services by client name. We should also disable communication with Eureka discovery by disabling registration after application boot and fetching the list of services for the Ribbon client. Hoverfly simulates responses for the PUT and GET methods exposed by the passenger-management and driver-management microservices. The controller is the main component implementing the business logic in our application. It stores data using an in-memory repository component and communicates with other microservices through @FeignClient interfaces. By testing the three methods implemented by the controller, we are testing the whole business logic implemented inside the trip-management service.

@SpringBootTest(properties = {
        "eureka.client.enabled=false",
        "ribbon.eureka.enable=false",
        "passenger-management.ribbon.listOfServers=passenger-management",
        "driver-management.ribbon.listOfServers=driver-management"
})
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripComponentTests {

    ObjectMapper mapper = new ObjectMapper();

    @Autowired
    MockMvc mockMvc;

    @ClassRule
    public static HoverflyRule rule = HoverflyRule.inSimulationMode(SimulationSource.dsl(
            HoverflyDsl.service("passenger-management:80")
                    .get(HoverflyMatchers.startsWith("/passengers/login/"))
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Walker'}")))
                    .put(HoverflyMatchers.startsWith("/passengers")).anyBody()
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Walker'}"))),
            HoverflyDsl.service("driver-management:80")
                    .get(HoverflyMatchers.startsWith("/drivers/"))
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'David Smith','currentLocationX': 15,'currentLocationY':25}")))
                    .put(HoverflyMatchers.startsWith("/drivers")).anyBody()
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'David Smith','currentLocationX': 15,'currentLocationY':25}")))
    )).printSimulationData();

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

The tests visible above verify only positive scenarios. What about testing some unexpected behaviour like network delays or server errors? With Hoverfly we can easily simulate such behaviour and define some negative scenarios. In the following fragment of code I have defined three scenarios. In the first of them the target service is delayed by 2 seconds. In order to simulate a timeout on the client side, I had to lower the default readTimeout for the Ribbon load balancer and disable the Hystrix circuit breaker for the Feign client. The second test simulates an HTTP 500 response status from the passenger-management service. The last scenario assumes an empty response from the method responsible for searching for the nearest driver.

@SpringBootTest(properties = {
        "eureka.client.enabled=false",
        "ribbon.eureka.enable=false",
        "passenger-management.ribbon.listOfServers=passenger-management",
        "driver-management.ribbon.listOfServers=driver-management",
        "feign.hystrix.enabled=false",
        "ribbon.ReadTimeout=500"
})
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
public class TripNegativeComponentTests {

    private ObjectMapper mapper = new ObjectMapper();
    @Autowired
    private MockMvc mockMvc;

    @ClassRule
    public static HoverflyRule rule = HoverflyRule.inSimulationMode(SimulationSource.dsl(
            HoverflyDsl.service("passenger-management:80")
                    .get("/passengers/login/test1")
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Smith'}")).withDelay(2000, TimeUnit.MILLISECONDS))
                    .get("/passengers/login/test2")
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Smith'}")))
                    .get("/passengers/login/test3")
                    .willReturn(ResponseCreators.serverError()),
            HoverflyDsl.service("driver-management:80")
                    .get(HoverflyMatchers.startsWith("/drivers/"))
                    .willReturn(ResponseCreators.success().body("{}"))
            ));

    @Test
    public void testCreateTripWithTimeout() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test1"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

    @Test
    public void testCreateTripWithError() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test3"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

    @Test
    public void testCreateTripWithNoDrivers() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test2"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

}

All timeouts and errors in communication with external microservices are handled by the bean annotated with @ControllerAdvice. In such cases the trip-management microservice should not return a server error response, but 200 OK with a JSON response containing a status field equal to REJECTED.

@ControllerAdvice
public class TripControllerErrorHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler({RetryableException.class, FeignException.class})
    protected ResponseEntity handleFeignTimeout(RuntimeException ex, WebRequest request) {
        Trip trip = new Trip();
        trip.setStatus(TripStatus.REJECTED);
        return handleExceptionInternal(ex, trip, null, HttpStatus.OK, request);
    }

}

Contract tests with Pact

The next type of test strategy usually implemented for microservices-based architecture is consumer-driven contract testing. In fact, there are tools dedicated especially to this type of testing. One of them is Pact. Contract testing is a way to ensure that services can communicate with each other without implementing integration tests. A contract is signed between the two sides of the communication: the consumer and the provider. Pact assumes that the contract code is generated and published on the consumer side, and then verified by the provider.

Pact provides a tool that can store and share contracts between consumers and providers. It is called Pact Broker. It exposes a simple RESTful API for publishing and retrieving pacts, and an embedded web dashboard for navigating the API. We can easily run Pact Broker on the local machine using its Docker image.

[Image: micro-testing-2 – consumer-driven contract testing with Pact Broker]

We will begin by running Pact Broker. Pact Broker requires a running instance of PostgreSQL, so first we have to launch one using the Docker image, and then link our broker container with that container.

docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=oauth -e POSTGRES_PASSWORD=oauth123 -e POSTGRES_DB=oauth postgres
docker run -d --name pact-broker --link postgres:postgres -e PACT_BROKER_DATABASE_USERNAME=oauth -e PACT_BROKER_DATABASE_PASSWORD=oauth123 -e PACT_BROKER_DATABASE_HOST=postgres -e PACT_BROKER_DATABASE_NAME=oauth -p 9080:80 dius/pact-broker

The next step is to implement the contract tests on the consumer side. We will use the JVM implementation of the Pact library for that. It provides the PactProviderRuleMk2 object responsible for creating stubs of the provider service, which we should annotate with the JUnit @Rule. Ribbon forwards all requests to passenger-management to the stub address – in this case localhost:8180. Pact JVM supports annotations and provides a DSL for building test scenarios. The test method responsible for generating contract data should be annotated with @Pact. It is important to set the state and provider fields, because the generated contract will be verified on the provider side using these names. The generated pacts are verified inside the same test class by the methods annotated with @PactVerification. The fragment field points to the name of the method responsible for generating the pact inside the same test class. The contract is tested using the PassengerManagementClient @FeignClient.

@RunWith(SpringRunner.class)
@SpringBootTest(properties = {
        "driver-management.ribbon.listOfServers=localhost:8190",
        "passenger-management.ribbon.listOfServers=localhost:8180",
        "ribbon.eureka.enabled=false",
        "eureka.client.enabled=false",
        "ribbon.ReadTimeout=5000"
})
public class PassengerManagementContractTests {

    @Rule
    public PactProviderRuleMk2 stubProvider = new PactProviderRuleMk2("passengerManagementProvider", "localhost", 8180, this);
    @Autowired
    private PassengerManagementClient passengerManagementClient;

    @Pact(state = "get-passenger", provider = "passengerManagementProvider", consumer = "passengerManagementClient")
    public RequestResponsePact callGetPassenger(PactDslWithProvider builder) {
        DslPart body = new PactDslJsonBody().integerType("id").stringType("name").numberType("balance").close();
        return builder.given("get-passenger").uponReceiving("test-get-passenger")
                .path("/passengers/login/test").method("GET").willRespondWith().status(200).body(body).toPact();
    }

    @Pact(state = "update-passenger", provider = "passengerManagementProvider", consumer = "passengerManagementClient")
    public RequestResponsePact callUpdatePassenger(PactDslWithProvider builder) {
        return builder.given("update-passenger").uponReceiving("test-update-passenger")
                .path("/passengers").method("PUT").bodyWithSingleQuotes("{'id':1,'amount':1000}", "application/json").willRespondWith().status(200)
                .bodyWithSingleQuotes("{'id':1,'name':'Adam Smith','balance':5000}", "application/json").toPact();
    }

    @Test
    @PactVerification(fragment = "callGetPassenger")
    public void verifyGetPassengerPact() {
        Passenger passenger = passengerManagementClient.getPassenger("test");
        Assert.assertNotNull(passenger);
        Assert.assertNotNull(passenger.getId());
    }

    @Test
    @PactVerification(fragment = "callUpdatePassenger")
    public void verifyUpdatePassengerPact() {
        Passenger passenger = passengerManagementClient.updatePassenger(new PassengerInput(1L, 1000));
        Assert.assertNotNull(passenger);
        Assert.assertNotNull(passenger.getId());
    }

}
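
The tests above rely on two simple DTOs that are not shown in the listing. Here's a minimal sketch of what Passenger and PassengerInput could look like, inferred from the JSON bodies used in the contract; the actual classes in the sample repository may differ slightly.

// Passenger.java - sketch inferred from the contract bodies; the real class may differ
public class Passenger {

    private Long id;
    private String name;
    private String login;
    private int balance;

    public Passenger() {
    }

    public Passenger(Long id, String name, String login, int balance) {
        this.id = id;
        this.name = name;
        this.login = login;
        this.balance = balance;
    }

    public Long getId() {
        return id;
    }

    // remaining getters and setters omitted for brevity
}

// PassengerInput.java - sketch of the request body for PUT /passengers
public class PassengerInput {

    private Long id;
    private int amount;

    public PassengerInput(Long id, int amount) {
        this.id = id;
        this.amount = amount;
    }

    // getters and setters omitted for brevity
}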

Just running the tests is not enough. We also have to publish the pacts generated during the tests to the Pact Broker. In order to achieve that, we have to include the following Maven plugin in our pom.xml and then execute the command mvn clean install pact:publish.

<plugin>
	<groupId>au.com.dius</groupId>
	<artifactId>pact-jvm-provider-maven_2.12</artifactId>
	<version>3.5.21</version>
	<configuration>
		<pactBrokerUrl>http://192.168.99.100:9080</pactBrokerUrl>
	</configuration>
</plugin>

Pact provides support for Spring on the provider side. Thanks to that, we may use MockMvc controllers or inject properties from application.yml into the test class. Here's the dependency declaration that has to be included in our pom.xml.

<dependency>
	<groupId>au.com.dius</groupId>
	<artifactId>pact-jvm-provider-spring_2.12</artifactId>
	<version>3.5.21</version>
	<scope>test</scope>
</dependency>

Now the contract is verified on the provider side. We need to pass the provider name inside the @Provider annotation and the name of the state for every verification test inside @State. These values were set during the tests on the consumer side inside the @Pact annotation (fields state and provider).

@RunWith(SpringRestPactRunner.class)
@Provider("passengerManagementProvider")
@PactBroker
public class PassengerControllerContractTests {

    @InjectMocks
    private PassengerController controller = new PassengerController();
    @Mock
    private PassengerRepository repository;
    @TestTarget
    public final MockMvcTarget target = new MockMvcTarget();

    @Before
    public void before() {
        MockitoAnnotations.initMocks(this);
        target.setControllers(controller);
    }

    @State("get-passenger")
    public void testGetPassenger() {
        target.setRunTimes(3); // run the interaction three times; the mock returns a different passenger each time
        Mockito.when(repository.findByLogin(Mockito.anyString()))
                .thenReturn(new Passenger(1L, "Adam Smith", "test", 4000))
                .thenReturn(new Passenger(3L, "Tom Hamilton", "hamilton", 400000))
                .thenReturn(new Passenger(5L, "John Scott", "scott", 222));
    }

    @State("update-passenger")
    public void testUpdatePassenger() {
        target.setRunTimes(1);
        Passenger passenger = new Passenger(1L, "Adam Smith", "test", 4000);
        Mockito.when(repository.findById(1L)).thenReturn(passenger);
        Mockito.when(repository.update(Mockito.any(Passenger.class)))
                .thenReturn(new Passenger(1L, "Adam Smith", "test", 5000));
    }
}

Pact Broker host and port are injected from the application.yml file.

pactbroker:
  host: "192.168.99.100"
  port: "9080"
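
For reference, the controller being verified has to expose endpoints matching the paths recorded in the contract: GET /passengers/login/{login} and PUT /passengers. Here's a minimal sketch of what it could look like; the actual implementation in the sample repository may differ.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

// A minimal sketch matching the contract paths; the real implementation may differ
@RestController
@RequestMapping("/passengers")
public class PassengerController {

    @Autowired
    private PassengerRepository repository;

    // Matches the get-passenger interaction: GET /passengers/login/test
    @GetMapping("/login/{login}")
    public Passenger findByLogin(@PathVariable("login") String login) {
        return repository.findByLogin(login);
    }

    // Matches the update-passenger interaction: PUT /passengers with {"id":...,"amount":...}
    @PutMapping
    public Passenger update(@RequestBody PassengerInput input) {
        Passenger passenger = repository.findById(input.getId());
        passenger.setBalance(passenger.getBalance() + input.getAmount());
        return repository.update(passenger);
    }
}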

Performance tests with Gatling

An important step of testing microservices before deploying them to production is performance testing. One of the interesting tools in this area is Gatling, a highly capable load-testing tool written in Scala. This means we also have to use its Scala DSL in order to build test scenarios. Let's begin by adding the required library to the pom.xml file.

<dependency>
	<groupId>io.gatling.highcharts</groupId>
	<artifactId>gatling-charts-highcharts</artifactId>
	<version>2.3.1</version>
</dependency>

Now, we may proceed to the test. In the scenario visible below we are testing two endpoints exposed by trip-management: POST /trips and PUT /trips/payment/${tripId}. In fact, this scenario verifies the whole functionality of our sample system, where we set up a trip and then pay for it after it finishes.
Every test class using Gatling needs to extend the Simulation class. We define a scenario using the scenario method and then set its name. We may define multiple executions inside a single scenario. After every execution of the POST /trips method, the test saves the generated id returned by the service. Then it inserts that id into the URL used for calling the PUT /trips/payment/${tripId} method. Every single request is expected to return a response with 200 OK status.
Gatling provides two interesting features which are worth mentioning, and you can see how they are used in the following performance test. The first of them is the feeder, which is used for polling records and injecting their content into the test. The feeder rPassengers randomly selects one of five defined logins. The final test result may be verified using the Assertions API, which checks that global statistics, like response time or the number of failed requests, match the expectations for the whole simulation. In the scenario visible below, the criterion is a maximum response time lower than 100 milliseconds.

class CreateAndPayTripPerformanceTest extends Simulation {

  // Feeder: for each iteration randomly pick one of five predefined passenger logins
  val rPassengers = Iterator.continually(Map("passenger" -> List("walker","smith","hamilton","scott","holmes").lift(Random.nextInt(5)).get))

  val scn = scenario("CreateAndPayTrip").feed(rPassengers).repeat(100, "n") {
    exec(http("CreateTrip-API")
      .post("http://localhost:8090/trips")
      .header("Content-Type", "application/json")
      .body(StringBody("""{"destination":"test${n}","locationX":${n},"locationY":${n},"username":"${passenger}"}"""))
      .check(status.is(200), jsonPath("$.id").saveAs("tripId"))
    ).exec(http("PayTrip-API")
      .put("http://localhost:8090/trips/payment/${tripId}")
      .header("Content-Type", "application/json")
      .check(status.is(200))
    )
  }

  // Inject 20 concurrent users at once, cap the run at 5 minutes,
  // and fail the simulation if the maximum response time exceeds 100 ms
  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(5, TimeUnit.MINUTES))
    .assertions(global.responseTime.max.lt(100))

}

In order to run a Gatling performance test you need to include the following Maven plugin in your pom.xml. You may run a single scenario or multiple scenarios. After including the plugin you only need to execute the command mvn clean gatling:test.

<plugin>
	<groupId>io.gatling</groupId>
	<artifactId>gatling-maven-plugin</artifactId>
	<version>2.2.4</version>
	<configuration>
		<simulationClass>pl.piomin.performance.tests.CreateAndPayTripPerformanceTest</simulationClass>
	</configuration>
</plugin>

Here are some diagrams illustrating the results of the performance tests for our microservice. Because the maximum response time was greater than the threshold set inside the assertion (100 ms), the test failed.

[Gatling test report diagrams]

Summary

The right selection of tools is not the most important part of microservices testing. However, the right tools can help you face the key challenges related to it. Hoverfly allows you to create full component tests that verify whether your microservice is able to handle delays or errors from downstream services. Pact helps you organize teamwork by sharing and verifying contracts between independently developed microservices. Finally, Gatling can help you implement load tests for selected scenarios, in order to verify the end-to-end performance of your system.
The source code used as a demo for this article is available on GitHub: https://github.com/piomin/sample-testing-microservices.git. If you found this article interesting, you may also like some other articles related to this subject: