Using logstash-logging-spring-boot-starter for logging with Spring Boot and Logstash

I have already described some implementation details related to my library logstash-logging-spring-boot-starter for HTTP request/response logging in one of my previous articles, Logging with Spring Boot and Elastic Stack. That article was published a few weeks ago, and since then some important features have been added to the library. Today I’m going to summarise all those changes and describe all the features provided by the library.

Continue reading “Using logstash-logging-spring-boot-starter for logging with Spring Boot and Logstash”

Deploying Spring Boot Application on OpenShift with Dekorate

More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes, OpenShift provides the S2I (Source-2-Image) mechanism, which may help reduce the time required to prepare application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach to building deployment configuration from source code. Dekorate (https://dekorate.io), a recently created open-source project, tries to solve that problem. This project seems to be very interesting, which appears to be confirmed by Red Hat, which has already announced a decision to include Dekorate in Red Hat OpenShift Application Runtimes as a “Tech Preview”. Continue reading “Deploying Spring Boot Application on OpenShift with Dekorate”

Quick Guide to Microservices with Quarkus on Openshift

You have had an opportunity to read many articles on my blog about building microservices with frameworks such as Spring Boot or Micronaut. There is another very interesting framework dedicated to microservices architecture that is becoming increasingly popular – Quarkus. It is being introduced as a next-generation Kubernetes/OpenShift-native Java framework. It is built on top of well-known Java standards like CDI, JAX-RS and Eclipse MicroProfile, which distinguishes it from Spring Boot. Continue reading “Quick Guide to Microservices with Quarkus on Openshift”

Kafka In Microservices With Micronaut

Today we are going to build some microservices communicating with each other asynchronously through Apache Kafka topics. We will use the Micronaut Framework, which provides a dedicated library for integration with Kafka. Let’s take a brief look at the architecture of our sample system. We have four microservices: order-service, trip-service, driver-service and passenger-service. The implementation of these applications is very simple. All of them have in-memory storage and connect to the same Kafka instance.
Continue reading “Kafka In Microservices With Micronaut”

JPA Data Access with Micronaut Data

While writing some articles comparing the Spring and Micronaut frameworks recently, I took note of many comments about the lack of built-in ORM and data repository support in Micronaut. Spring has provided this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close to completing work on the first version of their project with ORM support. The project, called Micronaut Predator (short for Precomputed Data Repositories), is still under active development, and currently we may access just the snapshot version. However, the authors introduce it as more efficient, with reduced memory consumption, than competing solutions like Spring Data or Grails GORM. In short, this is achieved thanks to Ahead-of-Time (AoT) compilation, which pre-computes queries for repository interfaces that are then executed by a thin, lightweight runtime layer, avoiding the use of reflection or runtime proxies. Continue reading “JPA Data Access with Micronaut Data”

Continuous Delivery with OpenShift and Jenkins: A/B Testing

One of the reasons you could decide to use OpenShift instead of some other container platform (for example Kubernetes) is its out-of-the-box support for continuous delivery pipelines. Without proper tools, the process of releasing software in your organization may be really time-consuming and painful. The speed of that process becomes especially important if you deliver software to production frequently. Currently, the most popular use case for it is a microservices-based architecture, where you have many small, independent applications.

Continue reading “Continuous Delivery with OpenShift and Jenkins: A/B Testing”

Logging with Spring Boot and Elastic Stack

In this article I’ll introduce my logging library designed especially for Spring Boot RESTful web applications. The main assumptions regarding this library are:

  • Logging all incoming HTTP requests and outgoing HTTP responses with full body
  • Integration with Elastic Stack through Logstash using logstash-logback-encoder library
  • Possibility of enabling logging on the client side for the most commonly used components in a Spring Boot application: RestTemplate and OpenFeign
  • Generating and propagating correlationId across all communication within a single API endpoint call
  • Calculating and storing execution time for each request
  • The library should be auto-configurable – you don’t have to do anything more than include it as a dependency in your application to make it work

Continue reading “Logging with Spring Boot and Elastic Stack”

Micronaut Tutorial: Security

This is the third part of my tutorial on the Micronaut Framework. This time we will discuss the most interesting Micronaut security features. I have already described the core mechanisms for IoC and dependency injection in the first part of my tutorial, and I have also created a guide to building a simple REST server-side application in the second part. For more details you may refer to:

Continue reading “Micronaut Tutorial: Security”

Micronaut Tutorial: Server Application

In this part of my tutorial on the Micronaut framework we are going to create a simple HTTP server-side application running on Netty. We have already discussed the most interesting core features of Micronaut, like beans, scopes and unit testing, in the first part of this tutorial. For more details you may refer to my article Micronaut Tutorial: Beans and Scopes.

Assuming we have basic knowledge of the core mechanisms of Micronaut, we may proceed to the key part of the framework and discuss how to build a simple microservice application exposing a REST API over HTTP. Continue reading “Micronaut Tutorial: Server Application”

Micronaut Tutorial: Beans and Scopes

Micronaut is a relatively new JVM-based framework. It is especially designed for building modular, easily testable microservice applications. Micronaut is heavily inspired by the Spring and Grails frameworks, which is not a surprise considering it has been developed by the creators of the Grails framework. It is based on Java’s annotation processing, IoC (Inversion of Control) and DI (Dependency Injection).

Micronaut implements the JSR-330 (java.inject) specification for dependency injection. It supports constructor injection, field injection, JavaBean and method parameter injection. In this part of the tutorial I’m going to give some tips on how to:

  • define and register beans in the application context
  • use built-in scopes
  • inject configuration to your application
  • automatically test your beans during application build with JUnit 5

Continue reading “Micronaut Tutorial: Beans and Scopes”

Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. The first of them, Spring Boot, is currently the most popular and opinionated framework in the JVM world. On the other side of the barricade stands Micronaut, a framework that is quickly gaining popularity, designed especially for building serverless functions and low-memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size in MB of generated fat JAR file
  • the application startup time
  • application performance, meaning the average response time from the REST endpoint during sample load testing

Continue reading “Performance Comparison Between Spring Boot and Micronaut”

The Future of Spring Cloud Microservices After Netflix Era

If somebody asked you about Spring Cloud, the first thing that would come to your mind would probably be Netflix OSS support. Support for tools like Eureka, Zuul or Ribbon is provided not only by Spring, but also by some other popular frameworks used for building microservices architectures like Apache Camel, Vert.x or Micronaut. Currently, Spring Cloud Netflix is the most popular project that is a part of Spring Cloud. It has around 3.2k stars on GitHub, while the second best has around 1.4k. Therefore, it is quite surprising that Pivotal has announced that most of the Spring Cloud Netflix modules are entering maintenance mode. You can read more about it in the post published on the Spring blog by Spencer Gibb: https://spring.io/blog/2018/12/12/spring-cloud-greenwich-rc1-available-now. Continue reading “The Future of Spring Cloud Microservices After Netflix Era”

Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine especially designed for working with large data sets. Following this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is a part of the powerful solution called Elastic Stack, which has already been described in some of my previous articles.
Storing application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text search over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always a question about the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins). Continue reading “Elasticsearch with Spring Boot”

Redis in Microservices Architecture

Redis can be widely used in a microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, a cache or a message broker. Since it is also a key/value store, we can use it as a configuration server or a discovery server in a microservices architecture. Although it is usually defined as an in-memory data structure store, we can also run it in persistent mode.
Today, I’m going to show you some examples of using Redis with microservices built on top of the Spring Boot and Spring Cloud frameworks. These applications will communicate with each other asynchronously using Redis Pub/Sub, use Redis as a cache or a primary database, and finally use Redis as a configuration server. Continue reading “Redis in Microservices Architecture”

A Magic Around Spring Boot Externalized Configuration

There are some things I really like in Spring Boot, and one of them is externalized (external) configuration. Spring Boot allows you to configure your application in many ways. You have 17 levels of loading configuration properties into the application. All of them are described in Chapter 24 of the Spring Boot documentation available here.

This article was inspired by some recent talks with developers about problems with the configuration of their applications. They hadn’t heard about some interesting features that may be used to make it more flexible and clearer. Continue reading “A Magic Around Spring Boot Externalized Configuration”

Introduction to Spring Data Redis

Redis is an in-memory data structure store with optional durability, used as a database, cache and message broker. Currently, it is the most popular tool in the key/value stores category: https://db-engines.com/en/ranking/key-value+store. The easiest way to integrate your application with Redis is through Spring Data Redis. You can use Spring RedisTemplate directly for that, or you might as well use Spring Data Redis repositories. There are some limitations when you integrate with Redis via Spring Data Redis repositories. They require at least Redis Server version 2.8.0 and do not work with transactions. Therefore, you need to disable transaction support for RedisTemplate, which is leveraged by Redis repositories. Continue reading “Introduction to Spring Data Redis”

Kotlin Microservices with Micronaut, Spring Cloud and JPA

The Micronaut Framework provides support for Kotlin built upon the Kapt compiler plugin. It also implements the most popular cloud-native patterns like distributed configuration, service discovery and client-side load balancing. These features allow you to include an application built on top of Micronaut in an existing microservices-based system. The most popular example of such an approach may be integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both of these popular tools, which are part of the Spring Cloud project. That’s good news, because in version 1.0 the only supported distributed solution was Consul, and there was no possibility of using Eureka discovery together with a Consul property source (running them together ended with an exception). Continue reading “Kotlin Microservices with Micronaut, Spring Cloud and JPA”

Microservices Integration Tests with Hoverfly and Testcontainers

Building good integration tests of a system consisting of several microservices may be quite a challenge. Today I’m going to show you how to use tools like Hoverfly and Testcontainers to implement such tests. I have already written about Hoverfly in my previous articles, as well as about Testcontainers. If you are interested in an intro to these frameworks, you may take a look at the following articles:

Continue reading “Microservices Integration Tests with Hoverfly and Testcontainers”

Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles where I used Docker containers for running some third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if our application integrates with databases, message brokers or some other popular tools. If you are planning to build such integration tests, you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way of running instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx or HashiCorp’s Vault. Today I’m going to show you a more advanced sample of JUnit tests that use Testcontainers to check the integration between a Spring Boot/Spring Cloud application, a Postgres database and Vault. For the purposes of this example we will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case. Continue reading “Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework”

Quick Guide to Microservices with Micronaut Framework

The Micronaut framework has been introduced as an alternative to Spring Boot for building microservice applications. At first glance it is very similar to Spring. It also implements patterns like dependency injection and inversion of control based on annotations; however, it uses JSR-330 (java.inject) for doing it. It has been designed especially for building serverless functions, Android applications, and low-memory-footprint microservices. This means that it should offer faster startup, lower memory usage and easier unit testing than competing frameworks. However, today I don’t want to focus on those characteristics of Micronaut. I’m going to show you how to build a simple microservices-based system using this framework. You can easily compare it with Spring Boot and Spring Cloud by reading my previous article about the same subject, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. Does Micronaut have a chance to gain the same popularity as Spring Boot? Let’s find out. Continue reading “Quick Guide to Microservices with Micronaut Framework”

Kotlin Microservice with Spring Boot

You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is more and more often used with Spring Boot for building backend services. Starting with version 5, the Spring Framework has introduced first-class support for Kotlin. In this article I’m going to show you an example of a microservice built with Kotlin and Spring Boot 2. I’ll describe some interesting features of Spring Boot, which can be treated as a set of good practices when building backend, REST-based microservices. Continue reading “Kotlin Microservice with Spring Boot”

Running Java Microservices on OpenShift using Source-2-Image

One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes, you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, used for building reproducible Docker images from application source code. With S2I you don’t have to provide any Kubernetes YAML templates or build the Docker image yourself – OpenShift will do it for you. Let’s see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample application’s source code. Continue reading “Running Java Microservices on OpenShift using Source-2-Image”

RabbitMQ Cluster with Consul and Vault

Almost two years ago I wrote an article about RabbitMQ clustering: RabbitMQ in cluster. It was one of the first posts on my blog, and it’s really hard to believe it has been two years since I started this blog. Anyway, one of the questions about the topic described in that article inspired me to return to the subject one more time. That question pointed to a problem with the approach to setting up the cluster. This approach assumes that we manually attach new nodes to the cluster by executing the command rabbitmqctl join_cluster with the cluster name as a parameter. If I remember correctly, it was the only available method of creating a cluster at that time. Today we have more choices, which illustrates the evolution of RabbitMQ over the last two years. Continue reading “RabbitMQ Cluster with Consul and Vault”

Secure Spring Cloud Microservices with Vault and Nomad

One of the significant topics related to microservices security is managing and protecting sensitive data like tokens, passwords or certificates used by your application. As a developer, you probably often implement software that connects to external databases, message brokers or just other applications. How do you store the credentials used by your application? To be honest, most of the software code I have seen in my life just stored sensitive data as plain text in configuration files. Thanks to that, I was always able to retrieve the credentials to every database I needed just by looking at the application source code. Of course, we can always encrypt sensitive data, but if we are working with many microservices having separate databases, it may not be a very comfortable solution. Continue reading “Secure Spring Cloud Microservices with Vault and Nomad”

Microservices with Spring Cloud Alibaba

A few days ago Spring Cloud announced support for several Alibaba components used for building microservices-based architectures. The project is still in the incubation stage, but there is a plan to graduate it from incubation and officially join the Spring Cloud Release Train in 2019. The currently released version 0.0.2.RELEASE is compatible with Spring Boot 2, while the older version 0.0.1.RELEASE is compatible with Spring Boot 1.x. This project seems to be very interesting, and it is currently the most popular repository among the Spring Cloud Incubator repositories (around 1.5k stars on GitHub).
Currently, the most commonly used Spring Cloud project for building a microservices architecture is Spring Cloud Netflix. As you probably know, this project provides Netflix OSS integrations for Spring Boot apps, including service discovery (Eureka), circuit breaker (Hystrix), intelligent routing (Zuul) and client-side load balancing (Ribbon). The first question that came to my mind when I was reading about Spring Cloud Alibaba was: ’Can Spring Cloud Alibaba be an alternative to Spring Cloud Netflix?’. The answer is yes, but not entirely. Spring Cloud Alibaba still integrates with Ribbon, which is used for load balancing based on service discovery. The Netflix Eureka server is replaced in that case by Nacos.
Nacos (Dynamic Naming and Configuration Service) is an easy-to-use platform designed for dynamic service discovery, configuration and service management. It helps you build cloud-native applications and a microservices platform easily. Following that definition, you can use Nacos for:

  • Service Discovery – you can register your microservice and discover other microservices via a DNS or HTTP interface. It also provides real-time health checks for registered services
  • Distributed Configuration – the dynamic configuration service provided by Nacos allows you to manage the configuration of all services in a centralized and dynamic manner across all environments. In fact, you can replace Spring Cloud Config Server with it
  • Dynamic DNS – it supports weighted routing, making it easier to implement mid-tier load balancing, flexible routing policies, flow control, and simple DNS resolution services

Spring Cloud supports another popular Alibaba component – Sentinel. Sentinel is responsible for flow control, concurrency, circuit breaking and load protection.

Our sample system, consisting of three microservices and an API gateway, is very similar to the architecture described in my article Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. The only difference is in the tools used for configuration management and service discovery. The organization-service microservice calls some endpoints exposed by department-service, while department-service calls endpoints exposed by employee-service. Inter-service communication is realized using the OpenFeign client. The complexity of the whole system is hidden behind an API gateway implemented using Netflix Zuul.

alibaba-9

1. Running Nacos server

You can run Nacos on both Windows and Linux systems. First, you should download the latest stable release from https://github.com/alibaba/nacos/releases. After unzipping it, run Nacos in standalone mode by executing the following command (shown for Windows; on Linux use the startup.sh script).

nacos/bin/startup.cmd -m standalone

By default, Nacos starts on port 8848. It provides an HTTP API under the context path /nacos/v1, and an admin web console at http://localhost:8848/nacos. If you take a look at the logs, you will find out that it is just an application written using the Spring Framework.

2. Dependencies

As I have mentioned before, Spring Cloud Alibaba is still in the incubation stage, so it is not included in the Spring Cloud Release Train. That’s why we need to include a special BOM for Alibaba inside the dependency management section of pom.xml. We will also use the newest stable version of Spring Cloud, which is currently Finchley.SR2.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR2</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-alibaba-dependencies</artifactId>
			<version>0.2.0.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Alibaba provides three starters for the currently supported components. These are spring-cloud-starter-alibaba-nacos-discovery for service discovery with Nacos, spring-cloud-starter-alibaba-nacos-config for distributed configuration with Nacos, and spring-cloud-starter-alibaba-sentinel for Sentinel dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>

3. Enabling distributed configuration with Nacos

To enable configuration management with Nacos we only need to include the starter spring-cloud-starter-alibaba-nacos-config. It does not provide an auto-configured address of the Nacos server, so we need to set it explicitly for the application inside the bootstrap.yml file.

spring:
  application:
    name: employee-service
  cloud:
    nacos:
      config:
        server-addr: localhost:8848

Our application tries to connect to Nacos and fetch the configuration provided inside a file with the same name as the value of the spring.application.name property. Currently, Spring Cloud Alibaba supports only .properties files, so we need to create the configuration inside the file employee-service.properties. Nacos comes with an elegant way of creating and managing configuration properties – we can use the web admin console for that. The Data ID field visible in the picture below is in fact the name of our configuration file. The list of configuration properties should be placed inside the Configuration Content field.

alibaba-1

The good news related to Spring Cloud Alibaba is that it dynamically refreshes the application configuration after modifications on Nacos. The only thing you have to do in your application is to annotate the beans that should be refreshed with @RefreshScope or @ConfigurationProperties. Now, let’s consider the following situation: we will modify our configuration a little to add some properties with test data, as shown below.

alibaba-4
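For illustration, the test data entries placed under the repository.employees prefix could look more or less like this (the Employee field names below are assumptions based on the repository code that follows – adjust them to your own model class):

repository.employees[0].id=1
repository.employees[0].name=John Smith
repository.employees[0].departmentId=1
repository.employees[0].organizationId=1
repository.employees[1].id=2
repository.employees[1].name=Adam Hamilton
repository.employees[1].departmentId=1
repository.employees[1].organizationId=1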

Here’s the implementation of our repository bean. It injects all configuration properties with the prefix repository.employees into the list of employees.

@Repository
@ConfigurationProperties(prefix = "repository")
public class EmployeeRepository {

	private List<Employee> employees = new ArrayList<>();
	
	public List<Employee> getEmployees() {
		return employees;
	}

	public void setEmployees(List<Employee> employees) {
		this.employees = employees;
	}
	
	public Employee add(Employee employee) {
		employee.setId((long) (employees.size()+1));
		employees.add(employee);
		return employee;
	}
	
	public Employee findById(Long id) {
		Optional<Employee> employee = employees.stream().filter(a -> a.getId().equals(id)).findFirst();
		if (employee.isPresent())
			return employee.get();
		else
			return null;
	}
	
	public List<Employee> findAll() {
		return employees;
	}
	
	public List<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
	}
	
	public List<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
	}

}

Now you can change some of the property values, as shown in the picture below. Then, if you call employee-service, which is available on port 8090 (http://localhost:8090), you should see the full list of employees with the modified values.

alibaba-3

The same configuration properties should be created for our two other microservices: department-service and organization-service. Assuming you have already done it, you should have the following configuration entries in Nacos.

alibaba-5
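You can also verify that a given entry exists by calling the Nacos configuration API directly. Here is a sketch of such a call – the Data ID and group are assumed to match what was created above; check the Nacos Open API documentation for the exact parameters:

$ curl 'http://localhost:8848/nacos/v1/cs/configs?dataId=employee-service.properties&group=DEFAULT_GROUP'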

4. Enabling service discovery with Nacos

To enable service discovery with Nacos you first need to include the starter spring-cloud-starter-alibaba-nacos-discovery. The same as for the configuration server, you also need to set the address of the Nacos server inside the bootstrap.yml file.

spring:
  application:
    name: employee-service
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848

The last step is to enable discovery client for the application by annotating the main class with @EnableDiscoveryClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
}

If you provide the same implementation for all our microservices and run them, you will see the following list of registered applications in the Nacos web console.

alibaba-7

5. Inter-service communication

Communication between microservices is realized using the standard Spring Cloud components: RestTemplate or the OpenFeign client. By default, load balancing is realized by the Ribbon client. The only difference in comparison to Spring Cloud Netflix is the discovery server used as the service registry in the communication process. Here’s the implementation of the Feign client in department-service responsible for integration with the GET /department/{departmentId} endpoint exposed by employee-service.

@FeignClient(name = "employee-service")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId);
	
}

Don’t forget to enable Feign clients for the Spring Boot application.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableSwagger2
public class DepartmentApplication {

	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
}

We should also run multiple instances of employee-service in order to test client-side load balancing. Before doing that, we can enable dynamic generation of the port number by setting the server.port property to 0 inside the configuration stored on Nacos. Now we can run many instances of a single service using the same configuration settings without the risk of port number conflicts. Let’s scale up the number of employee-service instances.
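For example, the following standard Spring Boot property added to employee-service.properties stored on Nacos is enough to get a random port on every startup:

server.port=0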

alibaba-8

If you would like to test inter-service communication, you can call the following methods, which use the OpenFeign client to call endpoints exposed by other microservices: GET /organization/{organizationId}/with-employees from department-service, and GET /{id}/with-departments, GET /{id}/with-departments-and-employees and GET /{id}/with-employees from organization-service. A sketch of the first of these methods is shown below.
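To give an idea of how such a composite endpoint can use the Feign client presented earlier, here is a hypothetical sketch of the department-service method. DepartmentRepository, the Department model and their methods are illustrative assumptions and not part of the code shown above.

@RestController
public class DepartmentController {

	@Autowired
	DepartmentRepository repository;
	@Autowired
	EmployeeClient employeeClient;

	// find all departments within an organization and enrich each of them
	// with the employees returned by employee-service through the Feign client
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}

}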

6. Running API Gateway

Now it is time to run the last component of our architecture – an API gateway. It is built on top of Spring Cloud Netflix Zuul and also uses Nacos as a discovery and configuration server.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>

After including the required dependencies, we need to enable the Zuul proxy and the discovery client for the application.

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
@EnableSwagger2
public class ProxyApplication {

	public static void main(String[] args) {
		SpringApplication.run(ProxyApplication.class, args);
	}
	
}

Here’s the configuration of Zuul routes defined for our three sample microservices.

zuul:
  routes:
    department:
      path: /department/**
      serviceId: department-service
    employee:
      path: /employee/**
      serviceId: employee-service
    organization:
      path: /organization/**
      serviceId: organization-service

After startup, the gateway exposes the Swagger2 specification of the APIs exposed by all the defined microservices. Assuming you have run it on port 8080, you can access it at http://localhost:8080/swagger-ui.html. Thanks to that, you can call all the methods from one single location.

spring-cloud-3
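You can also call the microservices through the gateway using the routes defined above. For example, assuming the default Zuul behaviour of stripping the route prefix, the composite department-service endpoint discussed earlier would be reachable like this:

$ curl http://localhost:8080/department/organization/1/with-employees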

Conclusion

Sample application source code is available on GitHub in the repository sample-spring-microservices-new, in the branch alibaba: https://github.com/piomin/sample-spring-microservices-new/tree/alibaba. The main purpose of this article was to show you how to replace some popular Spring Cloud components with Alibaba Nacos, used here for service discovery and configuration management. The Spring Cloud Alibaba project is at an early stage of development, so we can probably expect some new interesting features in the near future. You can find some other examples on the Spring Cloud Alibaba GitHub site: https://github.com/spring-cloud-incubator/spring-cloud-alibaba/tree/master/spring-cloud-alibaba-examples.

Reactive programming with Project Reactor

If you are building reactive microservices, you will probably have to merge data streams from different source APIs into a single result stream. This inspired me to create this article, which covers some of the most common scenarios for using reactive streams during inter-service communication in a microservices-based architecture. I have already described some aspects related to reactive programming with Spring, based on the Spring WebFlux and Spring Data JDBC projects, in the following articles:

Continue reading “Reactive programming with Project Reactor”

Introduction to Reactive APIs with Postgres, R2DBC, Spring Data JDBC and Spring WebFlux

There are quite a few technologies listed in the title of this article. Spring WebFlux has been introduced with Spring 5 and Spring Boot 2 as a project for building reactive-stack web applications. I have already described how to use it together with Spring Boot and Spring Cloud for building reactive microservices in the article Reactive Microservices with Spring WebFlux and Spring Cloud. Spring 5 has also introduced some projects supporting reactive access to NoSQL databases like Cassandra, MongoDB or Couchbase. But there was still a lack of support for reactive access to relational databases. The change is coming with the R2DBC (Reactive Relational Database Connectivity) project, which is also being developed by Pivotal members. It seems to be a very interesting initiative; however, it is rather at the beginning of the road. Anyway, there is a module for integration with Postgres, and we will use it for our demo application. R2DBC is not the only new interesting solution described in this article. I’ll also show you how to use Spring Data JDBC – another really interesting project released recently.
It is worth saying a few words about Spring Data JDBC. This project has already been released and is available in version 1.0. It is a part of the bigger Spring Data framework. It offers a repository abstraction based on JDBC. The main reason for creating that library is to allow access to relational databases the Spring Data way (through CrudRepository interfaces) without including the JPA library in the application dependencies. Of course, JPA is still certainly the main persistence API used for Java applications. Spring Data JDBC aims to be much simpler conceptually than JPA by not implementing popular patterns like lazy loading, caching, dirty tracking or sessions. It also provides only very limited support for annotation-based mapping. Finally, it provides an implementation of reactive repositories that uses R2DBC for accessing a relational database. Although that module is still under development (only a SNAPSHOT version is available), we will try to use it in our demo application. Let’s proceed to the implementation.

Including dependencies

We use Kotlin for the implementation, so first we include some required Kotlin dependencies.

<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-stdlib</artifactId>
	<version>${kotlin.version}</version>
</dependency>
<dependency>
	<groupId>com.fasterxml.jackson.module</groupId>
	<artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-test-junit</artifactId>
	<version>${kotlin.version}</version>
	<scope>test</scope>
</dependency>

We should also add kotlin-maven-plugin with support for Spring.

<plugin>
	<groupId>org.jetbrains.kotlin</groupId>
	<artifactId>kotlin-maven-plugin</artifactId>
	<version>${kotlin.version}</version>
	<executions>
		<execution>
			<id>compile</id>
			<phase>compile</phase>
			<goals>
				<goal>compile</goal>
			</goals>
		</execution>
		<execution>
			<id>test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>test-compile</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<args>
			<arg>-Xjsr305=strict</arg>
		</args>
		<compilerPlugins>
			<plugin>spring</plugin>
		</compilerPlugins>
	</configuration>
</plugin>

Then, we may proceed to including the frameworks required for the demo implementation. We need to include the special SNAPSHOT version of Spring Data JDBC dedicated to accessing the database using R2DBC. We also have to add some R2DBC libraries and Spring WebFlux. As you may see below, only Spring WebFlux is available in a stable version (as a part of the Spring Boot release).

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.data</groupId>
	<artifactId>spring-data-jdbc</artifactId>
	<version>1.0.0.r2dbc-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.r2dbc</groupId>
	<artifactId>r2dbc-spi</artifactId>
	<version>1.0.0.M5</version>
</dependency>
<dependency>
	<groupId>io.r2dbc</groupId>
	<artifactId>r2dbc-postgresql</artifactId>
	<version>1.0.0.M5</version>
</dependency>

It is also important to set up dependency management for the Spring Data project.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.data</groupId>
			<artifactId>spring-data-releasetrain</artifactId>
			<version>Lovelace-RELEASE</version>
			<scope>import</scope>
			<type>pom</type>
		</dependency>
	</dependencies>
</dependencyManagement>

Repositories

We are using the well-known Spring Data style of CRUD repository implementation. In that case, we need to create an interface that extends the ReactiveCrudRepository interface.
Here’s the implementation of the repository for managing Employee objects.

interface EmployeeRepository : ReactiveCrudRepository<Employee, Int> {
    @Query("select id, name, salary, organization_id from employee e where e.organization_id = $1")
    fun findByOrganizationId(organizationId: Int) : Flux<Employee>
}

Here’s another repository implementation – this time for managing Organization objects.

interface OrganizationRepository : ReactiveCrudRepository<Organization, Int> {
}

Implementing Entities and DTOs

Kotlin provides a convenient way of creating an entity class by declaring it as a data class. When using Spring Data JDBC, we have to set the primary key for the entity by annotating the field with @Id. It assumes the key is automatically incremented by the database. If you are not using auto-increment columns, you have to use a BeforeSaveEvent listener, which sets the ID of the entity. However, I tried to set such a listener for my entity, and it just didn’t work with the reactive version of Spring Data JDBC.
Here’s the implementation of the Employee entity class. What is worth mentioning, Spring Data JDBC will automatically map the class field organizationId to the database column organization_id.

data class Employee(val name: String, val salary: Int, val organizationId: Int) {
    @Id 
    var id: Int? = null
}

Here’s an implementation of Organization entity class.

data class Organization(var name: String) {
    @Id 
    var id: Int? = null
}

R2DBC does not support any lists or sets. Because I’d like to return a list of employees inside the Organization object in one of the API endpoints, I have created a DTO containing such a list, as shown below.

data class OrganizationDTO(var id: Int?, var name: String) {
    var employees : MutableList<Employee> = ArrayList()
    constructor(employees: MutableList<Employee>) : this(null, "") {
        this.employees = employees
    }
    // used later by OrganizationController to combine an organization with the list of its employees
    constructor(id: Int?, name: String, employees: MutableList<Employee>) : this(id, name) {
        this.employees = employees
    }
}

The SQL scripts corresponding to the created entities are shown below. The serial field type automatically creates a sequence and attaches it to the id field.

CREATE TABLE employee (
    name character varying NOT NULL,
    salary integer NOT NULL,
    id serial PRIMARY KEY,
    organization_id integer
);
CREATE TABLE organization (
    name character varying NOT NULL,
    id serial PRIMARY KEY
);

Building sample web applications

For demo purposes we will build two independent applications: employee-service and organization-service. The organization-service application communicates with employee-service using the WebFlux WebClient. It gets the list of employees assigned to a given organization and includes them in the response together with the Organization object. Sample application source code is available on GitHub in the repository sample-spring-data-webflux: https://github.com/piomin/sample-spring-data-webflux.
OK, let’s begin by declaring the Spring Boot main class. We need to enable Spring Data JDBC repositories by annotating the main class with @EnableJdbcRepositories.

@SpringBootApplication
@EnableJdbcRepositories
class EmployeeApplication

fun main(args: Array<String>) {
    runApplication<EmployeeApplication>(*args)
}

Working with R2DBC and Postgres requires some configuration. Probably due to the early stage of development of Spring Data JDBC and R2DBC, there is no Spring Boot auto-configuration for Postgres. We need to declare the connection factory, client, and repository inside a @Configuration class.

@Configuration
class EmployeeConfiguration {

    @Bean
    fun repository(factory: R2dbcRepositoryFactory): EmployeeRepository {
        return factory.getRepository(EmployeeRepository::class.java)
    }

    @Bean
    fun factory(client: DatabaseClient): R2dbcRepositoryFactory {
        val context = RelationalMappingContext()
        context.afterPropertiesSet()
        return R2dbcRepositoryFactory(client, context)
    }

    @Bean
    fun databaseClient(factory: ConnectionFactory): DatabaseClient {
        return DatabaseClient.builder().connectionFactory(factory).build()
    }

    @Bean
    fun connectionFactory(): PostgresqlConnectionFactory {
        val config = PostgresqlConnectionConfiguration.builder() //
                .host("192.168.99.100") //
                .port(5432) //
                .database("reactive") //
                .username("reactive") //
                .password("reactive123") //
                .build()

        return PostgresqlConnectionFactory(config)
    }

}

Finally, we can create REST controllers that contain the definitions of our reactive API methods. With Kotlin it does not take much space. The following controller definition contains three GET methods that allow you to find all employees, all employees assigned to a given organization, or a single employee by id.

@RestController
@RequestMapping("/employees")
class EmployeeController {

    @Autowired
    lateinit var repository : EmployeeRepository

    @GetMapping
    fun findAll() : Flux<Employee> = repository.findAll()

    @GetMapping("/{id}")
    fun findById(@PathVariable id : Int) : Mono<Employee> = repository.findById(id)

    @GetMapping("/organization/{organizationId}")
    fun findByorganizationId(@PathVariable organizationId : Int) : Flux<Employee> = repository.findByOrganizationId(organizationId)

    @PostMapping
    fun add(@RequestBody employee: Employee) : Mono<Employee> = repository.save(employee)

}

Inter-service Communication

For the OrganizationController the implementation is a little bit more complicated. Because organization-service communicates with employee-service, we first need to declare a reactive WebFlux WebClient builder.

@Bean
fun clientBuilder() : WebClient.Builder {
	return WebClient.builder()
}

Then, similar to the repository bean, the builder is injected into the controller. It is used inside the findByIdWithEmployees method for calling the GET /employees/organization/{organizationId} method exposed by employee-service. As you can see in the code fragment below, it provides a reactive API and returns a Flux object containing the list of found employees. This list is combined with the Organization fetched from the database into an OrganizationDTO object using Reactor’s zipWith method.

@RestController
@RequestMapping("/organizations")
class OrganizationController {

    @Autowired
    lateinit var repository : OrganizationRepository
    @Autowired
    lateinit var clientBuilder : WebClient.Builder

    @GetMapping
    fun findAll() : Flux<Organization> = repository.findAll()

    @GetMapping("/{id}")
    fun findById(@PathVariable id : Int) : Mono<Organization> = repository.findById(id)

    @GetMapping("/{id}/withEmployees")
    fun findByIdWithEmployees(@PathVariable id : Int) : Mono<OrganizationDTO> {
        val employees : Flux<Employee> = clientBuilder.build().get().uri("http://localhost:8090/employees/organization/$id")
                .retrieve().bodyToFlux(Employee::class.java)
        val org : Mono<Organization> = repository.findById(id)
        return org.zipWith(employees.collectList())
                .map { tuple -> OrganizationDTO(tuple.t1.id as Int, tuple.t1.name, tuple.t2) }
    }

    @PostMapping
    fun add(@RequestBody organization: Organization) : Mono<Organization> = repository.save(organization)

}

How does it work?

Before running the tests, we need to start the Postgres database. Here’s the Docker command used for running the Postgres container. It creates a user with a password and sets up a default database.

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=reactive -e POSTGRES_PASSWORD=reactive123 -e POSTGRES_DB=reactive postgres

Then we need to create some test tables, so you have to run the SQL script placed in the section Implementing Entities and DTOs. After that, you can start our test applications. If you do not override the default settings provided inside the application.yml files, employee-service listens on port 8090 and organization-service on port 8095. The following picture illustrates the architecture of our sample system.
spring-data-1
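One possible way to run the SQL script mentioned above is to open a psql session inside the running container created by the Docker command shown earlier:

$ docker exec -it postgres psql -U reactive -d reactive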
Now, let’s add some test data using the reactive API exposed by the applications.

$ curl -d '{"name":"Test1"}' -H "Content-Type: application/json" -X POST http://localhost:8095/organizations
$ curl -d '{"name":"Name1", "balance":5000, "organizationId":1}' -H "Content-Type: application/json" -X POST http://localhost:8090/employees
$ curl -d '{"name":"Name2", "balance":10000, "organizationId":1}' -H "Content-Type: application/json" -X POST http://localhost:8090/employees

Finally, you can call the GET /organizations/{id}/withEmployees method, for example using your web browser. The result should be similar to what is visible in the following picture.

spring-data-2
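Using curl instead of the browser, the same call looks like this (with the default ports mentioned above):

$ curl http://localhost:8095/organizations/1/withEmployees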

Kotlin Microservices with Ktor

Ktor is a framework for building asynchronous applications on the server and client side. It is fully written in Kotlin. The main goal of Ktor is to provide an end-to-end multiplatform application framework for connected applications. It allows you to easily build web applications and HTTP services, so we can use it for building a microservices-based architecture. Let’s discuss the main features of the Ktor framework using the example of a simple system consisting of two microservices. Continue reading “Kotlin Microservices with Ktor”

5 Things You Will Like in Kotlin as a Java Developer

The Kotlin language has been gaining more and more popularity recently. It is no longer used just in mobile app development, but also for server-side systems. As you probably know, it is a statically typed programming language that runs on the JVM. That’s why it is often compared with the Java language. One of the main reasons for Kotlin’s popularity is its simplicity. It cleans up and removes a lot of the code bloat of Java. However, it is also very similar to Java, so any experienced Java developer can pick up Kotlin in a few hours.
In this article I’m going to discuss some interesting Kotlin features used for server-side development in comparison to Java. Here’s my personal list of favourite Kotlin features unavailable in the Java language.

1. Collections and Generics

I really like Java, but sometimes working with generic collections may be an unpleasant experience, especially if you have to use wildcard types. The good news is that Kotlin doesn’t have any wildcard types. Instead, it provides two other features called declaration-site variance and type projections. Now, let’s consider the following class hierarchy.

abstract class Vehicle {
	
}

class Truck extends Vehicle {
	
}

class PassengerCar extends Vehicle {

}

I defined a generic repository that contains all objects of a given type.

public class Repository<T> {

	List<T> l = new ArrayList<>();
	
	public void addAll(List<T> l) {
		this.l.addAll(l);
	}
	
	public void add(T t) {
		l.add(t);
	}
}

Now, I would like to store all the vehicles in that repository, so I declare Repository<Vehicle> r = new Repository<Vehicle>(). But when invoking the repository method addAll with List<Truck> as a parameter, you will receive the following error.
kotlin-2
You can change the declaration of the addAll method to accept a parameter declared like this: public void addAll(List<? extends T> l), and it works fine.
Of course, this situation has a logical explanation. First, generic types in Java are invariant, which in fact means that List<Truck> is not a subtype of List<Vehicle>, although Truck is a subtype of Vehicle. The addAll method takes the wildcard type argument <? extends T> as a parameter, which indicates that this method accepts a collection of objects of T or some subtype of T, not just T itself. List<Truck> is a subtype of List<? extends Vehicle>, but the target list is still List<Vehicle>. I don’t want to get into the details of this behaviour – you can read more about it in the Java specification. The important thing for us is that Kotlin solves this problem using its variance features. If we add the out modifier to the MutableList parameter inside the addAll method declaration (a type projection), the compiler will allow us to add a list of Truck objects. The explanation of that process provided on the Kotlin site is: ‘In “clever words” they say that the class C is covariant in the parameter T, or that T is a covariant type parameter. You can think of C as being a producer of T’s, and NOT a consumer of T’s.’

class Repository<T> {

    var l: MutableList<T> = ArrayList()

    fun addAll(objects: MutableList<out T>) {
        l.addAll(objects)
    }

    fun add(o: T) {
        l.add(o)
    }

}

fun main(args: Array<String>) {
    val r = Repository<Vehicle>()
    var l1: MutableList<Truck> = ArrayList()
    l1.add(Truck())
    r.addAll(l1)
    println("${r.l.size}")
}

2. Data classes

You probably know Java POJOs (Plain Old Java Objects) very well. If you are following Java good practices, such a class should implement getters, setters, hashCode and equals methods, and also a toString method for logging purposes. Such an implementation may take up a lot of space even for a simple class with only four fields – as shown below (methods auto-generated using the Eclipse IDE).

public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	private int age;

	public Person(Integer id, String firstName, String lastName) {
		this.id = id;
		this.firstName = firstName;
		this.lastName = lastName;
	}

	public Integer getId() {
		return id;
	}

	public void setId(Integer id) {
		this.id = id;
	}

	public String getFirstName() {
		return firstName;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@Override
	public int hashCode() {
		final int prime = 31;
		int result = 1;
		result = prime * result + ((firstName == null) ? 0 : firstName.hashCode());
		result = prime * result + ((id == null) ? 0 : id.hashCode());
		result = prime * result + ((lastName == null) ? 0 : lastName.hashCode());
		return result;
	}

	@Override
	public boolean equals(Object obj) {
		if (this == obj)
			return true;
		if (obj == null)
			return false;
		if (getClass() != obj.getClass())
			return false;
		Person other = (Person) obj;
		if (firstName == null) {
			if (other.firstName != null)
				return false;
		} else if (!firstName.equals(other.firstName))
			return false;
		if (id == null) {
			if (other.id != null)
				return false;
		} else if (!id.equals(other.id))
			return false;
		if (lastName == null) {
			if (other.lastName != null)
				return false;
		} else if (!lastName.equals(other.lastName))
			return false;
		return true;
	}

	@Override
	public String toString() {
		return "Person [id=" + id + ", firstName=" + firstName + ", lastName=" + lastName + "]";
	}

}

To avoid many additional lines of code inside your POJO classes, you may use Project Lombok. It provides a set of annotations that can be used on the class to deliver implementations of getters/setters, equals and hashCode methods. It is also possible to annotate your class with @Data, which bundles all the features of @ToString, @EqualsAndHashCode, @Getter / @Setter and @RequiredArgsConstructor together. So, with Lombok’s @Data the POJO is going to look as shown below – assuming you don’t require a constructor with parameters.

@Data
public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	private int age;
	
}

Including and using Lombok with a Java application is quite simple and supported by all the main developer IDEs, but Kotlin solves this issue out of the box. It provides functionality called data classes, which is enabled by adding the keyword data to the class definition. The compiler automatically derives the following members from all properties declared in the primary constructor:

  • equals()/hashCode() pair
  • toString() method
  • componentN() functions corresponding to the properties in their order of declaration
  • copy() function

Because Kotlin internally generates a default getter and setter for mutable properties (declared as var), and a getter for read-only properties (declared as val), a similar implementation of the Person Java POJO in Kotlin will look as shown below.

data class Person(val firstName: String, val lastName: String, val id: Int) {

    var age: Int = 0

}

What’s worth mentioning, the compiler only uses the properties defined inside the primary constructor for the automatically generated functions. So the field age, which is declared inside the class body, will not be used by the toString, equals, hashCode, and copy implementations.

3. Names for test methods

Now, let’s implement some test cases that prove the features described in step 2 work properly. The following three tests compare two objects with different values of the age property, try to add the same object to a Java HashSet twice, and check if the componentN methods of the data class return properties in the right order.

@Test fun `Test person equality excluding "age" property`() {
	val person = Person("John", "Smith", 1)
	person.age = 35
	val person2 = Person("John", "Smith", 1)
	person2.age = 45
	Assert.assertEquals(person, person2)
}

@Test fun `Test person componentN method for properties`() {
	val person = Person("John", "Smith", 1)
	Assert.assertEquals("John", person.component1())
	Assert.assertEquals("Smith", person.component2())
	Assert.assertEquals(1, person.component3())
}

@Test fun `Test adding and getting person from a Set`() {
	val s = HashSet<Person>()
	val person = Person("John", "Smith", 1)
	var added = s.add(person)
	Assert.assertTrue(added)
	added = s.add(person)
	Assert.assertFalse(added)
}

As you can see in the fragment of code above, Kotlin accepts method names with spaces as long as they are enclosed in backticks. Thanks to that I can give every test a descriptive name, which is then visible during execution, so you know exactly what’s going on 🙂
kotlin-1

4. Extensions

Let’s consider a situation where we have a library containing class definitions that cannot be changed, and we need to add some methods to one of those classes. In Java, we have a few options to implement such an approach: we can extend the existing class and implement the new method there, or, for example, use the Decorator pattern.
Now, let’s assume we have the following Java class containing a list of persons and exposing getters/setters.

public class Organization {

	private List<Person> persons;

	public List<Person> getPersons() {
		return persons;
	}

	public void setPersons(List<Person> persons) {
		this.persons = persons;
	}
	
}

If I wanted to have a method for adding a single Person object to the list, I would have to extend Organization and implement the new method there.

public class OrganizationExt extends Organization {

	public void addPerson(Person person) {
		getPersons().add(person);
	}
}

Kotlin provides the ability to extend a class with new functionality without having to inherit from the base class. This is done via special declarations called extensions. Here’s the Kotlin counterpart of the Organization Java class. Because Kotlin’s plain List interface is read-only, we need to declare the property as MutableList.

class Organization(val persons: MutableList<Person> = ArrayList()) {
    
}

We can easily extend it with an addPerson method as shown below. Extensions are resolved statically, and they do not modify the extended classes.

class OrganizationTest {

    fun Organization.addPerson(person: Person) {
        persons.add(person)
    }

    @Test
    fun testExtension() {
        val organization = Organization()
        organization.addPerson(Person("John", "Smith", 1))
        Assert.assertTrue(organization.persons.size == 1)
    }

}

5. String templates

Here’s a little something to make you happy – not available in Java.

println("Organization ${organization.name} with ${organization.persons.size} persons")

Conclusion

Of course there are some other differences between Java and Kotlin. This is only my personal list of favourite features unavailable in Java. The source code with the described samples is available on GitHub: sample-kotlin-playground.

Running Jenkins Server with Configuration-As-Code

Some days ago I came across a newly created Jenkins plugin called Configuration as Code (JcasC). This plugin allows you to define Jenkins configuration in a format that is very popular these days – YAML notation. It is interesting that such a plugin has not been created before, but better late than never. Of course, we could have used some other Jenkins plugins for that, like the Job DSL Plugin, but it is based on the Groovy language.
If you have any experience with Jenkins, you probably know how many plugins and other configuration settings it requires in order to work as the main CI server in your organization. With the JcasC plugin, you can store such configuration in human-readable, declarative YAML files. In this article I’m going to show you how to create and run Jenkins with configuration as code, letting you build a Java application using tools like declarative pipelines, Git and Maven. I’ll also show how to manage sensitive data using a Vault server.

1. Using Vault server

We will begin by running a Vault server on the local machine. The easiest way to do that is with its Docker image. By default, the official Vault image starts in development mode. The following command runs an in-memory server that listens on address 0.0.0.0:8200.

docker run -d --name=vault --cap-add=IPC_LOCK -p 8200:8200 vault:0.9.0

There is one thing that should be clarified here. I do not run the newest version of Vault, because it forces us to call version 2 of the KV (Key-Value Secrets Engine) HTTP API, which is used for manipulating secrets. That version, in turn, is not supported by the JcasC plugin, which can communicate only with version 1 of the KV HTTP API. This limitation does not apply to older versions of Vault, for example 0.9.0, which still allows calling KV in version 1. After running the container we should obtain the token used for authenticating against Vault from the console logs. To do that, just run the command docker logs vault and find the following fragment in the logs.

vault-token
Now, using this authentication token, we may add the credentials required for accessing the Jenkins web dashboard and our account on the Git repository host. The Jenkins account will be identified by the rootPassword key, and the GitHub account by the githubPassword key.

$ curl -H "X-Vault-Token: 5bcab13b-6cf5-2f58-8b37-34dca31bebde" --request POST -d '{"rootPassword":"your_root_password", "githubPassword":"your_github_password"}' http://192.168.99.100:8200/v1/secret/jenkins

To check whether the parameters have been saved in Vault, just call the GET method on the same path.

$ curl -H "X-Vault-Token: 5bcab13b-6cf5-2f58-8b37-34dca31bebde" http://192.168.99.100:8200/v1/secret/jenkins

2. Building Jenkins image

As with the Vault server, we also run Jenkins in a Docker container. However, we need to add some configuration settings to the official Jenkins image before running it. The JcasC plugin requires setting an environment variable that points to the location of the YAML configuration files. This variable can point to one of the following:

  • Path to a folder containing a set of config files
  • A full path to a single file
  • A URL pointing to a file served on the web, or for example your internal configuration server

The next step is to set the configuration required for establishing a connection to the Vault server. We have to pass the authentication token, the path of the created secret and the URL of the running server. All these settings are provided as environment variables and may be overridden on container startup. The same rule applies to the location of the JcasC configuration file. The following Dockerfile definition extends the Jenkins base image and adds all the parameters required for running it with the JcasC plugin and with secrets taken from Vault.

FROM jenkins/jenkins:lts
ENV CASC_JENKINS_CONFIG="/var/jenkins_home/jenkins.yml"
ENV CASC_VAULT_TOKEN=5bcab13b-6cf5-2f58-8b37-34dca31bebde
ENV CASC_VAULT_PATH=/secret/jenkins
ENV CASC_VAULT_URL=http://192.168.99.100:8200
COPY jenkins.yml ${CASC_JENKINS_CONFIG}
USER jenkins
RUN /usr/local/bin/install-plugins.sh configuration-as-code configuration-as-code-support git workflow-cps-global-lib

Now, let’s build the Docker image using the Dockerfile shown above. Alternatively, you can just pull the image stored in my Docker Hub repository.

$ docker build -t piomin/jenkins-casc:1.0 .

Finally, you can run a container based on the built image with the following command. Of course, before that we need to prepare the YAML configuration file for the JcasC plugin.

$ docker run -d --name jenkins-casc -p 8080:8080 -p 50000:50000 piomin/jenkins-casc:1.0

3. Preparing configuration

The JcasC plugin provides many configuration settings that allow you to configure the various components of your Jenkins master installation. However, I will limit myself to defining the basic configuration used for building my sample Java application. We need the following Jenkins components to be configured after startup (the numbers below correspond to the comments in the YAML file):

  1. A set of Jenkins plugins allowing us to create a declarative pipeline that checks out source code from a Git repository, builds it using Maven, and records JUnit test results
  2. A basic security realm containing credentials for a single Jenkins user. The user password is read from the rootPassword property stored on the Vault server
  3. The JDK location directory
  4. Maven installation settings – Maven is not installed by default in Jenkins, so we have to set the required version and the tool name
  5. Credentials for accessing the Git repository containing the application source code

plugins: # (1)
  required:
    git: 3.9.1
    pipeline-model-definition: 1.3.2
    pipeline-stage-step: 2.3
    pipeline-maven: 3.5.12
    workflow-aggregator: 2.5
    junit: 1.25
  sites:
  - id: "default"
    url: "https://updates.jenkins.io/update-center.json"
jenkins:
  agentProtocols:
  - "JNLP4-connect"
  - "Ping"
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: false
  disableRememberMe: false
  mode: NORMAL
  numExecutors: 2
  primaryView:
    all:
      name: "all"
  quietPeriod: 5
  scmCheckoutRetryCount: 0
  securityRealm: # (2)
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
      - id: "piomin"
        password: ${rootPassword}
  slaveAgentPort: 50000
  views:
  - all:
      name: "all"
tool:
  git:
    installations:
    - home: "git"
      name: "Default"
  jdk: # (3)
    installations:
    - home: "/docker-java-home"
      name: "jdk"
  maven: # (4)
    installations:
    - name: "maven"
      properties:
      - installSource:
          installers:
          - maven:
              id: "3.5.4"
credentials: # (5)
  system:
    domainCredentials:
      - domain :
          name: "github.com"
          description: "GitHub"
        credentials:
          - usernamePassword:
              scope: SYSTEM
              id: github-piomin
              username: piomin
              password: ${githubPassword}

4. Exporting configuration

After running Jenkins with the JcasC plugin installed, you can easily export the current configuration to a file. First, navigate to the section Manage Jenkins -> Configuration as Code.

jcasc-1

Then, after clicking the Export Configuration button, the YAML file with the Jenkins configuration will be downloaded to your machine. However, as the comment visible below indicates, you cannot fully rely on that file, because this feature is still not stable. For my configuration it didn’t export the Maven tool settings or the list of Jenkins plugins. The JcasC plugin is still under active development, so I hope that feature will work successfully soon.

jcasc-2

5. Running sample pipeline

Finally, you can create and run a pipeline for your sample application. Here’s the definition of my pipeline.

pipeline {
    agent any
    tools {
        maven 'maven'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Test') {
            steps {
                dir('example-service') {
                    sh 'mvn clean test'
                }
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean install'
                }
            }
        }
    }
    post {
        always {
            junit '**/target/reports/**/*.xml'
        }
    }
}

Summary

The idea behind the Jenkins Configuration as Code plugin is a step in the right direction. I’ll be following the development of this project with great interest. There are still some features that need to be added to make it more useful, and some bugs that need to be fixed. But after that I will definitely consider using this plugin for maintaining the Jenkins master server inside my organization.

Spring Boot Autoscaler

One of the more important reasons we decide to use tools like Kubernetes, Pivotal Cloud Foundry or HashiCorp’s Nomad is the ability to auto-scale our applications. Of course those tools provide many other useful mechanisms, but we can also implement auto-scaling by ourselves. At first glance it seems difficult, but assuming we use Spring Boot as the framework for building our applications and Jenkins as the CI server, it really does not require a lot of work. Today, I’m going to show you how to implement such a solution using the following frameworks/tools:

  • Spring Boot
  • Spring Boot Actuator
  • Spring Cloud Netflix Eureka
  • Jenkins CI

How does it work?

Every Spring Boot application that contains the Spring Boot Actuator library can expose metrics under the endpoint /actuator/metrics. There are many valuable metrics that give you detailed information about the application’s status. Some of them may be especially important when talking about autoscaling: JVM and CPU metrics, the number of running threads and the number of incoming HTTP requests. A dedicated Jenkins pipeline is responsible for monitoring the application’s metrics by polling the /actuator/metrics endpoint periodically. If any monitored metric is below or above the target range, the pipeline starts a new instance or shuts down a running instance of the application using another Actuator endpoint, /actuator/shutdown. Before that, it needs to fetch the current list of running instances of a single application in order to get the address of the existing instance selected for shutting down, or the address of the server with the smallest number of running instances for a new instance of the application.

spring-autoscaler-1
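To make the scaling decision more concrete, here’s a minimal Java sketch of that logic (my own illustration, not part of the sample project); the metric name and the 100/20 thresholds are the same values used by the Jenkins pipeline shown later in this article, and the parsing assumes the standard Actuator metric response with a measurements array.

import org.springframework.web.client.RestTemplate;

import java.util.List;
import java.util.Map;

public class ScaleDecision {

    private final RestTemplate restTemplate = new RestTemplate();

    @SuppressWarnings("unchecked")
    public String decide(String instanceUrl) {
        // e.g. http://host:port/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1
        Map<String, Object> metric = restTemplate.getForObject(
                instanceUrl + "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1", Map.class);
        List<Map<String, Object>> measurements = (List<Map<String, Object>>) metric.get("measurements");
        double value = ((Number) measurements.get(0).get("value")).doubleValue();
        if (value > 100) return "UP";        // too many busy threads -> start a new instance
        else if (value < 20) return "DOWN";  // almost idle -> shut one instance down
        else return "NONE";                  // within the target range -> do nothing
    }

}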

After discussing the architecture of our system we may proceed to the development. Our application needs to meet some requirements: it has to expose metrics and an endpoint for graceful shutdown, it needs to register in Eureka after startup and deregister on shutdown, and finally it should dynamically allocate a random port from the pool of free ports. Thanks to Spring Boot we may easily implement all these mechanisms in five minutes 🙂

Dynamic port allocation

Since it is possible to run many instances of the application on a single machine, we have to guarantee that there won’t be conflicts in port numbers. Fortunately, Spring Boot provides such a mechanism. We just need to set the port number to 0 inside the application.yml file using the server.port property. Because our application registers itself in Eureka, it also needs to send a unique instanceId, which is by default generated as a concatenation of the fields spring.cloud.client.hostname, spring.application.name and server.port.
Here’s the current configuration of our sample application. I have changed the template of the instanceId field by replacing the port number with a randomly generated number.

spring:
  application:
    name: example-service
server:
  port: ${PORT:0}
eureka:
  instance:
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${random.int[1,999999]}

Enabling Actuator metrics

To enable Spring Boot Actuator we need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

We also have to enable exposure of the Actuator endpoints via HTTP API by setting the property management.endpoints.web.exposure.include to '*'. Now, the list of all available metric names is available under the context path /actuator/metrics, while detailed information for each metric is exposed under /actuator/metrics/{metricName}.
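For illustration only, here’s a small Java sketch (an assumption of mine, not part of the sample applications) that lists the available metric names by reading the names array returned by /actuator/metrics; the base URL is hypothetical.

import org.springframework.web.client.RestTemplate;

import java.util.List;
import java.util.Map;

public class MetricNamesLister {

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        String baseUrl = "http://localhost:8080"; // hypothetical instance address
        // /actuator/metrics returns a JSON document with a "names" array
        Map<String, Object> response = restTemplate.getForObject(baseUrl + "/actuator/metrics", Map.class);
        List<String> names = (List<String>) response.get("names");
        names.forEach(System.out::println);
    }

}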

Graceful shutdown

Besides metrics, Spring Boot Actuator also provides an endpoint for shutting down an application. However, in contrast to other endpoints, this one is not available by default. We have to set the property management.endpoint.shutdown.enabled to true. After that we will be able to stop our application by sending a POST request to the /actuator/shutdown endpoint.
This method of stopping the application guarantees that the service will unregister itself from the Eureka server before shutdown.
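As a quick illustration (a sketch assuming the shutdown endpoint has been enabled as described above; the instance address is hypothetical), such a request could be sent from Java code like this – the Jenkins pipeline later in this article does the same thing with the HTTP Request plugin.

import org.springframework.web.client.RestTemplate;

public class ShutdownCaller {

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        String instanceUrl = "http://localhost:8080"; // hypothetical address of the instance to stop
        // a POST with an empty body is enough to trigger a graceful shutdown
        String response = restTemplate.postForObject(instanceUrl + "/actuator/shutdown", null, String.class);
        System.out.println(response);
    }

}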

Enabling Eureka discovery

Eureka is the most popular discovery server used for building microservices-based architectures with Spring Cloud. So, if you already have microservices and want to provide an auto-scaling mechanism for them, Eureka is a natural choice. It stores the IP address and port number of every registered instance of an application. To enable Eureka on the client side you just need to include the following dependency in your pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

As I have mentioned before, we also have to guarantee the uniqueness of the instanceId sent to the Eureka server by the client-side application. This has been described in the step “Dynamic port allocation”.
The next step is to create an application with an embedded Eureka server. To achieve that, we first need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

The main class should be annotated with @EnableEurekaServer.

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApp {

    public static void main(String[] args) {
        new SpringApplicationBuilder(DiscoveryApp.class).run(args);
    }

}

Client-side applications by default try to connect to the Eureka server on localhost on port 8761. We only need a single, standalone Eureka node, so we will disable registration and fetching the list of services from other server instances.

spring:
  application:
    name: discovery-service
server:
  port: ${PORT:8761}
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

The tests of the sample autoscaling system will be performed using Docker containers, so we need to prepare and build an image with the Eureka server. Here’s the Dockerfile with the image definition. It can be built using the command docker build -t piomin/discovery-server:2.0 . (the trailing dot is the build context).

FROM openjdk:8-jre-alpine
ENV APP_FILE discovery-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8761
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Building Jenkins pipeline for autoscaling

The first step is to prepare the Jenkins pipeline responsible for autoscaling. We will create a Jenkins declarative pipeline that runs every minute. Periodic execution may be configured with the triggers directive, which defines the automated ways in which the pipeline should be re-triggered. Our pipeline will communicate with the Eureka server and with the metrics endpoints exposed by every microservice using Spring Boot Actuator.
The test service name is EXAMPLE-SERVICE, which is the upper-case value of the spring.application.name property defined inside the application.yml file. The monitored metric is the number of HTTP listener threads running on the Tomcat container. These threads are responsible for processing incoming HTTP requests.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
    }
    stages { ... }
}

Integrating Jenkins pipeline with Eureka

The first stage of our pipeline is responsible for fetching the list of services registered in the service discovery server. Eureka exposes an HTTP API with several endpoints. One of them is GET /eureka/apps/{serviceName}, which returns the list of all instances of the application with the given name. We save the number of running instances and the base URL of every single instance. These values are accessed during the next stages of the pipeline.
Here’s the fragment of the pipeline responsible for fetching the list of running instances of the application. The name of the stage is Calculate. We use the HTTP Request Plugin for HTTP connections.

stage('Calculate') {
	steps {
		script {
			def response = httpRequest "http://192.168.99.100:8761/eureka/apps/${env.SERVICE_NAME}"
			def app = printXml(response.content)
			def index = 0
			env["INSTANCE_COUNT"] = app.instance.size()
			app.instance.each {
				if (it.status == 'UP') {
					def address = "http://${it.ipAddr}:${it.port}"
					env["INSTANCE_${index++}"] = address 
				}
			}
		}
	}
}

@NonCPS
def printXml(String text) {
    return new XmlSlurper(false, false).parseText(text)
}

Here’s a sample response from the Eureka API for our microservice. The response content type is XML.

spring-autoscaler-2

Integrating Jenkins pipeline with Spring Boot Actuator metrics

Spring Boot Actuator exposes an endpoint with metrics, which allows finding a metric by name and optionally by tag. In the fragment of the pipeline visible below I’m trying to find an instance with the metric below or above the defined thresholds. If there is such an instance, we stop the loop in order to proceed to the next stage, which performs scaling down or up. The addresses of the running applications are taken from the pipeline environment variables with the prefix INSTANCE_, which were saved in the previous stage.

stage('Metrics') {
	steps {
		script {
			def count = env.INSTANCE_COUNT
			for(def i=0; i<count; i++) {
				def ip = env["INSTANCE_${i}"] + env.METRICS_ENDPOINT
				if (ip == null)
					break;
				def response = httpRequest ip
				def objRes = printJson(response.content)
				env.SCALE_TYPE = returnScaleType(objRes)
				if (env.SCALE_TYPE != "NONE")
					break
			}
		}
	}
}

@NonCPS
def printJson(String text) {
    return new JsonSlurper().parseText(text)
}

def returnScaleType(objRes) {
	def value = objRes.measurements[0].value
	if (value.toInteger() > 100)
		return "UP"
	else if (value.toInteger() < 20)
		return "DOWN"
	else
		return "NONE"
}

Shutting down an application instance

In the last stage of our pipeline we will shut down a running instance or start a new one, depending on the result saved in the previous stage. The shutdown may be easily performed by calling the Spring Boot Actuator endpoint. In the following fragment of the pipeline we pick the first instance returned by Eureka. Then we send a POST request to that address.
If we need to scale up our application, we call another pipeline responsible for building the fat JAR and launching it on our machine.

stage('Scaling') {
	steps {
		script {
			if (env.SCALE_TYPE == 'DOWN') {
				def ip = env["INSTANCE_0"] + env.SHUTDOWN_ENDPOINT
				httpRequest url:ip, contentType:'APPLICATION_JSON', httpMode:'POST'
			} else if (env.SCALE_TYPE == 'UP') {
				build job:'spring-boot-run-pipeline'
			}
			currentBuild.description = env.SCALE_TYPE
		}
	}
}

Here’s the full definition of our pipeline spring-boot-run-pipeline, responsible for starting a new instance of the application. It clones the repository with the application source code, builds the binaries using Maven commands, and finally runs the application using the java -jar command, passing the address of the Eureka server as a parameter.

pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Run') {
            steps {
                dir('example-service') {
                    sh 'nohup java -jar -DEUREKA_URL=http://192.168.99.100:8761/eureka target/example-service-1.0-SNAPSHOT.jar 1>/dev/null 2>logs/runlog &'
                }
            }
        }
    }
}

Remote extension

The algorithm discussed in the previous sections works fine only for microservices launched on a single machine. If we would like to extend it to work with many machines, we have to modify our architecture as shown below. Each machine has a Jenkins agent running and communicating with the Jenkins master. If we want to start a new instance of a microservice on a selected machine, we have to run the pipeline using the agent running on that machine. This agent is responsible only for building the application from source code and launching it on the target machine. Shutting down an instance is still performed just by calling the HTTP endpoint.

spring-autoscaler-3

You can find more information about running Jenkins agents and connecting them with the Jenkins master via the JNLP protocol in my article Jenkins nodes on Docker containers. Assuming we have successfully launched some agents on the target machines, we need to parametrize our pipelines in order to be able to select the agent (and therefore the target machine) dynamically.
When we are scaling up our application, we have to pass the agent label to the downstream pipeline.

build job:'spring-boot-run-pipeline', parameters:[string(name: 'agent', value:"slave-1")]

The called pipeline will be run by the agent labelled with the given parameter.

pipeline {
    agent {
        label "${params.agent}"
    }
    stages { ... }
}

If we have more than one agent connected to the master node, we can map their addresses to labels. Thanks to that, you are able to map the IP address of a microservice instance fetched from Eureka to the target machine with a Jenkins agent.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
        AGENT_192.168.99.102 = "slave-1"
        AGENT_192.168.99.103 = "slave-2"
    }
    stages { ... }
}

Summary

In this article I have demonstrated how to use Spring Boot Actuator metrics in order to scale your Spring Boot application up and down. Using the basic mechanisms provided by Spring Boot together with Spring Cloud Netflix Eureka and Jenkins, you can implement auto-scaling for your applications without any additional third-party tools. The case described in this article assumes using Jenkins agents on the remote machines to launch new instances of the application there, but you may as well use a tool like Ansible for that. If you decide to run Ansible playbooks from Jenkins, you will not have to launch Jenkins agents on the remote machines. The source code with the sample applications is available on GitHub: https://github.com/piomin/sample-spring-boot-autoscaler.git.

Integration tests on OpenShift using Arquillian Cube and Istio

Building integration tests for applications deployed on Kubernetes/OpenShift platforms seems to be quite a big challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not that complicated. The Kubernetes extension, being a part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests and applying all the Kubernetes resources required to set up your environment; once everything is ready, it just runs the defined integration tests.
A very good piece of news related to Arquillian Cube is that it supports the Istio framework. You can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers, and fault injection. It allows you to test unexpected situations during network communication between microservices, like server errors or timeouts.
If you would like to run some tests using Istio resources on Minishift, you should first install Istio on your platform. To do that you need to change some privileges for your OpenShift user. Let’s do that.

1. Enabling Istio on Minishift

Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.

$ minishift addons enable admin-user

After that, you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also add the cluster-admin role to other users, for example admin. Let’s do that.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

Now, let’s create a new project dedicated to Istio and then add the required privileges.

$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject

Finally, we may proceed to the installation of the Istio components. I downloaded the newest version of Istio available at the time – 1.0.1. The installation file is available under the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.

$ oc apply -f install/kubernetes/istio-demo.yaml

2. Enabling Istio for Arquillian Cube

I have already described how to use Arquillian Cube to run tests on OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article, we need to include the dependency responsible for enabling Istio features.

<dependency>
	<groupId>org.arquillian.cube</groupId>
	<artifactId>arquillian-cube-istio-kubernetes</artifactId>
	<version>1.17.1</version>
	<scope>test</scope>
</dependency>

Now we can use the @IstioResource annotation to apply Istio resources to the OpenShift cluster, or the IstioAssistant bean to access additional methods for adding and removing resources programmatically or polling the availability of URLs.
Let’s take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard setup used for running tests on an OpenShift instance, I have added the Istio resource file customer-to-account-route.yaml. Then I invoke the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably noticed, the last line of that test method contains an assertion on an empty list of accounts: Assert.assertTrue(c.getAccounts().isEmpty()). Why? We will simulate a timeout in the communication between customer-service and account-service using Istio rules.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@Templates(templates = {
        @Template(url = "classpath:account-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml")
})
@RunWith(ArquillianConditionalRunner.class)
@IstioResource("classpath:customer-to-account-route.yaml")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class IstioRuleTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class);
    private static String id;

    @ArquillianResource
    private IstioAssistant istioAssistant;

    @RouteURL(value = "customer-route", path = "/customer")
    private URL customerUrl;
    @RouteURL(value = "account-route", path = "/account")
    private URL accountUrl;

    @Test
    public void test1CustomerRoute() {
        LOGGER.info("URL: {}", customerUrl);
        istioAssistant.await(customerUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url(customerUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            this.id = c.getId();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public  void test2AccountRoute() {
        LOGGER.info("Route URL: {}", accountUrl);
        istioAssistant.await(accountUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}");
        Request request = new Request.Builder().url(accountUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test3GetCustomerWithAccounts() {
        String url = customerUrl + "/" + id;
        LOGGER.info("Calling URL: {}", customerUrl);
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url(url).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            String json = response.body().string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            Assert.assertTrue(c.getAccounts().isEmpty());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

3. Creating Istio rules

One of the interesting features provided by Istio is the ability to inject faults into routing rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule’s corresponding request destination. The faults can be either delays or aborts. We can define the percentage of affected requests using the percent field for both types of fault. In the following Istio resource I have defined a 2-second delay for every single request sent to account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

Before running the tests we should also modify the OpenShift deployment templates of our sample applications. We need to inject the Istio sidecar into the pod definitions using the istioctl kube-inject command as shown below.

$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml

Finally, we may rewrite the generated files into OpenShift templates. Here’s the fragment of the OpenShift template containing the DeploymentConfig definition for account-service.

kind: Template
apiVersion: v1
metadata:
  name: account-template
objects:
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: account-service
      labels:
        app: account-service
        name: account-service
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
          name: account-service
          labels:
            app: account-service
            name: account-service
            version: v1
        spec:
          containers:
          - env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            image: piomin/account-vertx-service
            name: account-vertx-service
            ports:
            - containerPort: 8095
            resources: {}
          - args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - account-service
            - --drainDuration
            - 45s
            - --parentShutdownDuration
            - 1m0s
            - --discoveryAddress
            - istio-pilot.istio-system:15007
            - --discoveryRefreshDelay
            - 1s
            - --zipkinAddress
            - zipkin.istio-system:9411
            - --connectTimeout
            - 10s
            - --statsdUdpAddress
            - istio-statsd-prom-bridge.istio-system:9125
            - --proxyAdminPort
            - "15000"
            - --controlPlaneAuthPolicy
            - NONE
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ISTIO_META_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ISTIO_META_INTERCEPTION_MODE
              value: REDIRECT
            image: gcr.io/istio-release/proxyv2:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-proxy
            resources:
              requests:
                cpu: 10m
            securityContext:
              readOnlyRootFilesystem: true
              runAsUser: 1337
            volumeMounts:
            - mountPath: /etc/istio/proxy
              name: istio-envoy
            - mountPath: /etc/certs/
              name: istio-certs
              readOnly: true
          initContainers:
          - args:
            - -p
            - "15001"
            - -u
            - "1337"
            - -m
            - REDIRECT
            - -i
            - '*'
            - -x
            - ""
            - -b
            - 8095,
            - -d
            - ""
            image: gcr.io/istio-release/proxy_init:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
          volumes:
          - emptyDir:
              medium: Memory
            name: istio-envoy
          - name: istio-certs
            secret:
              optional: true
              secretName: istio.default

4. Building applications

The sample applications are implemented using the Eclipse Vert.x framework. They use a Mongo database for storing data. The connection settings are injected into the pods using Kubernetes Secrets.

public class MongoVerticle extends AbstractVerticle {

	private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class);

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			LOGGER.info("Connecting {} using {}/{}", db, user, password);
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final CustomerRepository service = new CustomerRepositoryImpl(client);
			ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service");	
		});
	}
}

MongoDB should be started on OpenShift before any applications that connect to it. To achieve that, we should reference the Mongo deployment resource in the Arquillian configuration file using the env.config.resource.name property.
The configuration of Arquillian Cube is visible below. We will use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the admin user. You can obtain it using the command oc whoami -t after logging in to the OpenShift cluster.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">myproject</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property>
	<property name="env.config.resource.name">mongo-deployment.yaml</property>
</extension>

The communication between customer-service and account-service is realized by the Vert.x WebClient. We will set the client’s read timeout to 1 second. Because Istio injects a 2-second delay into the route, the communication is going to end with a timeout.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> {
			if (res2.succeeded()) {
				LOGGER.info("Response: {}", res2.result().bodyAsString());
				List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
				resultHandler.handle(Future.succeededFuture(accounts));
			} else {
				resultHandler.handle(Future.succeededFuture(new ArrayList()));
			}
		});
		return this;
	}
}

The full code of sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.

5. Running tests

You can run the tests during the Maven build or just from your IDE. First, the test1CustomerRoute test is executed. It adds a new customer and saves the generated id for the next two tests.

arquillian-istio-3

The next test is test2AccountRoute. It adds an account for the customer created during the previous test.

arquillian-istio-2

Finally, the test responsible for verifying communication between the microservices is run. It verifies that the list of accounts is empty, which is the result of the timeout in communication with account-service.

arquillian-istio-1

Testing Microservices: Tools and Frameworks

There are some key challenges we face around testing a microservices architecture. The selection of the right tools is one of the elements that help us deal with the issues related to those challenges. First, let’s identify the most important elements involved in the process of microservices testing. Here are some of them:

  • Teams coordination – with many independent teams managing their own microservices, it becomes very challenging to coordinate the overall process of software development and testing
  • Complexity – there are many microservices that communicate with each other. We need to ensure that every one of them works properly and is resistant to slow responses or failures of other microservices
  • Performance – since there are many independent services, it is important to test the whole architecture under traffic close to production levels

Let’s discuss some interesting frameworks that may help you in testing a microservices-based architecture.

Component tests with Hoverfly

Hoverfly’s simulation mode may be especially useful for building component tests. During component tests we verify the whole microservice without communicating over the network with other microservices or external datastores. The following picture shows how such a test is performed for our sample microservice.

testing-microservices-1

Hoverfly provides a simple DSL for creating simulations and a JUnit integration for using it within JUnit tests. It may be orchestrated via a JUnit @Rule. We simulate two services and then override the Ribbon properties to resolve the addresses of these services by client name. We should also disable communication with Eureka discovery by turning off registration after application boot and fetching of the service list by the Ribbon client. Hoverfly simulates responses for the PUT and GET methods exposed by the passenger-management and driver-management microservices. The controller is the main component that implements the business logic in our application. It stores data using an in-memory repository component and communicates with other microservices through @FeignClient interfaces. By testing the three methods implemented by the controller, we are testing the whole business logic implemented inside the trip-management service.

@SpringBootTest(properties = {
        "eureka.client.enabled=false",
        "ribbon.eureka.enable=false",
        "passenger-management.ribbon.listOfServers=passenger-management",
        "driver-management.ribbon.listOfServers=driver-management"
})
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TripComponentTests {

    ObjectMapper mapper = new ObjectMapper();

    @Autowired
    MockMvc mockMvc;

    @ClassRule
    public static HoverflyRule rule = HoverflyRule.inSimulationMode(SimulationSource.dsl(
            HoverflyDsl.service("passenger-management:80")
                    .get(HoverflyMatchers.startsWith("/passengers/login/"))
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Walker'}")))
                    .put(HoverflyMatchers.startsWith("/passengers")).anyBody()
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Walker'}"))),
            HoverflyDsl.service("driver-management:80")
                    .get(HoverflyMatchers.startsWith("/drivers/"))
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'David Smith','currentLocationX': 15,'currentLocationY':25}")))
                    .put(HoverflyMatchers.startsWith("/drivers")).anyBody()
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'David Smith','currentLocationX': 15,'currentLocationY':25}")))
    )).printSimulationData();

    @Test
    public void test1CreateNewTrip() throws Exception {
        TripInput ti = new TripInput("test", 10, 20, "walker");
        mockMvc.perform(MockMvcRequestBuilders.post("/trips")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(ti)))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("NEW")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test2CancelTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/cancel/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("IN_PROGRESS")))
                .andExpect(MockMvcResultMatchers.jsonPath("$.driverId", Matchers.any(Integer.class)));
    }

    @Test
    public void test3PayTrip() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.put("/trips/payment/1")
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .content(mapper.writeValueAsString(new Trip())))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.any(Integer.class)))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("PAYED")));
    }

}

The tests visible above verify only positive scenarios. What about testing some unexpected behaviour like network delays or server errors? With Hoverfly we can easily simulate such behaviour and define some negative scenarios. In the following fragment of code I have defined three scenarios. In the first of them the target service is delayed by 2 seconds. In order to simulate a timeout on the client side, I had to change the default readTimeout for the Ribbon load balancer and then disable the Hystrix circuit breaker for the Feign client. The second test simulates an HTTP 500 response status from the passenger-management service. The last scenario assumes an empty response from the method responsible for searching for the nearest driver.

@SpringBootTest(properties = {
        "eureka.client.enabled=false",
        "ribbon.eureka.enable=false",
        "passenger-management.ribbon.listOfServers=passenger-management",
        "driver-management.ribbon.listOfServers=driver-management",
        "feign.hystrix.enabled=false",
        "ribbon.ReadTimeout=500"
})
@RunWith(SpringRunner.class)
@AutoConfigureMockMvc
public class TripNegativeComponentTests {

    private ObjectMapper mapper = new ObjectMapper();
    @Autowired
    private MockMvc mockMvc;

    @ClassRule
    public static HoverflyRule rule = HoverflyRule.inSimulationMode(SimulationSource.dsl(
            HoverflyDsl.service("passenger-management:80")
                    .get("/passengers/login/test1")
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Smith'}")).withDelay(2000, TimeUnit.MILLISECONDS))
                    .get("/passengers/login/test2")
                    .willReturn(ResponseCreators.success(HttpBodyConverter.jsonWithSingleQuotes("{'id':1,'name':'John Smith'}")))
                    .get("/passengers/login/test3")
                    .willReturn(ResponseCreators.serverError()),
            HoverflyDsl.service("driver-management:80")
                    .get(HoverflyMatchers.startsWith("/drivers/"))
                    .willReturn(ResponseCreators.success().body("{}"))
            ));

    @Test
    public void testCreateTripWithTimeout() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test1"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

    @Test
    public void testCreateTripWithError() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test3"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

    @Test
    public void testCreateTripWithNoDrivers() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.post("/trips").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(new TripInput("test", 15, 25, "test2"))))
                .andExpect(MockMvcResultMatchers.status().isOk())
                .andExpect(MockMvcResultMatchers.jsonPath("$.id", Matchers.nullValue()))
                .andExpect(MockMvcResultMatchers.jsonPath("$.status", Matchers.is("REJECTED")));
    }

}

All the timeouts and errors in communication with external microservices are handled by a bean annotated with @ControllerAdvice. In such cases the trip-management microservice should not return a server error response, but 200 OK with a JSON response containing the status field set to REJECTED.

@ControllerAdvice
public class TripControllerErrorHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler({RetryableException.class, FeignException.class})
    protected ResponseEntity handleFeignTimeout(RuntimeException ex, WebRequest request) {
        Trip trip = new Trip();
        trip.setStatus(TripStatus.REJECTED);
        return handleExceptionInternal(ex, trip, null, HttpStatus.OK, request);
    }

}

Contract tests with Pact

The next type of test strategy usually implemented for a microservices-based architecture is consumer-driven contract testing. In fact, there are some tools especially dedicated to this type of tests. One of them is Pact. Contract testing is a way to ensure that services can communicate with each other without implementing integration tests. A contract is agreed between the two sides of the communication: the consumer and the provider. Pact assumes that the contract is generated and published on the consumer side, and then verified by the provider.

Pact provides a tool that can store and share contracts between consumers and providers. It is called Pact Broker. It exposes a simple RESTful API for publishing and retrieving pacts, and an embedded web dashboard for navigating the API. We can easily run Pact Broker on the local machine using its Docker image.

micro-testing-2

We will begin by running Pact Broker. Pact Broker requires a running instance of PostgreSQL, so first we have to launch it using a Docker image, and then link our broker container with that container.

docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=oauth -e POSTGRES_PASSWORD=oauth123 -e POSTGRES_DB=oauth postgres
docker run -d --name pact-broker --link postgres:postgres -e PACT_BROKER_DATABASE_USERNAME=oauth -e PACT_BROKER_DATABASE_PASSWORD=oauth123 -e PACT_BROKER_DATABASE_HOST=postgres -e PACT_BROKER_DATABASE_NAME=oauth -p 9080:80 dius/pact-broker

The next step is to implement contract tests on the consumer side. We will use the JVM implementation of the Pact library for that. It provides the PactProviderRuleMk2 object responsible for creating a stub of the provider service. We should annotate it with JUnit @Rule. Ribbon will forward all requests addressed to passenger-management to the stub address – in this case localhost:8180. Pact JVM supports annotations and provides a DSL for building test scenarios. The test method responsible for generating contract data should be annotated with @Pact. It is important to set the fields state and provider, because the generated contract will be verified on the provider side using these names. Generated pacts are verified inside the same test class by the methods annotated with @PactVerification. The field fragment points to the name of the method responsible for generating the pact inside the same test class. The contract is tested using the PassengerManagementClient @FeignClient.

@RunWith(SpringRunner.class)
@SpringBootTest(properties = {
        "driver-management.ribbon.listOfServers=localhost:8190",
        "passenger-management.ribbon.listOfServers=localhost:8180",
        "ribbon.eureka.enabled=false",
        "eureka.client.enabled=false",
        "ribbon.ReadTimeout=5000"
})
public class PassengerManagementContractTests {

    @Rule
    public PactProviderRuleMk2 stubProvider = new PactProviderRuleMk2("passengerManagementProvider", "localhost", 8180, this);
    @Autowired
    private PassengerManagementClient passengerManagementClient;

    @Pact(state = "get-passenger", provider = "passengerManagementProvider", consumer = "passengerManagementClient")
    public RequestResponsePact callGetPassenger(PactDslWithProvider builder) {
        DslPart body = new PactDslJsonBody().integerType("id").stringType("name").numberType("balance").close();
        return builder.given("get-passenger").uponReceiving("test-get-passenger")
                .path("/passengers/login/test").method("GET").willRespondWith().status(200).body(body).toPact();
    }

    @Pact(state = "update-passenger", provider = "passengerManagementProvider", consumer = "passengerManagementClient")
    public RequestResponsePact callUpdatePassenger(PactDslWithProvider builder) {
        return builder.given("update-passenger").uponReceiving("test-update-passenger")
                .path("/passengers").method("PUT").bodyWithSingleQuotes("{'id':1,'amount':1000}", "application/json").willRespondWith().status(200)
                .bodyWithSingleQuotes("{'id':1,'name':'Adam Smith','balance':5000}", "application/json").toPact();
    }

    @Test
    @PactVerification(fragment = "callGetPassenger")
    public void verifyGetPassengerPact() {
        Passenger passenger = passengerManagementClient.getPassenger("test");
        Assert.assertNotNull(passenger);
        Assert.assertNotNull(passenger.getId());
    }

    @Test
    @PactVerification(fragment = "callUpdatePassenger")
    public void verifyUpdatePassengerPact() {
        Passenger passenger = passengerManagementClient.updatePassenger(new PassengerInput(1L, 1000));
        Assert.assertNotNull(passenger);
        Assert.assertNotNull(passenger.getId());
    }

}

Just running the tests is not enough. We also have to publish the pacts generated during the tests to Pact Broker. In order to achieve that we have to include the following Maven plugin in our pom.xml and then execute the command mvn clean install pact:publish.

<plugin>
	<groupId>au.com.dius</groupId>
	<artifactId>pact-jvm-provider-maven_2.12</artifactId>
	<version>3.5.21</version>
	<configuration>
		<pactBrokerUrl>http://192.168.99.100:9080</pactBrokerUrl>
	</configuration>
</plugin>

Pact provides support for Spring on the provider side. Thanks to that we may use MockMvc controllers or inject properties from application.yml into the test class. Here's the dependency declaration that has to be included in our pom.xml.

<dependency>
	<groupId>au.com.dius</groupId>
	<artifactId>pact-jvm-provider-spring_2.12</artifactId>
	<version>3.5.21</version>
	<scope>test</scope>
</dependency>

Now, the contract is verified on the provider side. We need to pass the provider name inside the @Provider annotation and the state names for every verification test inside @State. These values have been set during the tests on the consumer side inside the @Pact annotation (fields state and provider).

@RunWith(SpringRestPactRunner.class)
@Provider("passengerManagementProvider")
@PactBroker
public class PassengerControllerContractTests {

    @InjectMocks
    private PassengerController controller = new PassengerController();
    @Mock
    private PassengerRepository repository;
    @TestTarget
    public final MockMvcTarget target = new MockMvcTarget();

    @Before
    public void before() {
        MockitoAnnotations.initMocks(this);
        target.setControllers(controller);
    }

    @State("get-passenger")
    public void testGetPassenger() {
        target.setRunTimes(3);
        Mockito.when(repository.findByLogin(Mockito.anyString()))
                .thenReturn(new Passenger(1L, "Adam Smith", "test", 4000))
                .thenReturn(new Passenger(3L, "Tom Hamilton", "hamilton", 400000))
                .thenReturn(new Passenger(5L, "John Scott", "scott", 222));
    }

    @State("update-passenger")
    public void testUpdatePassenger() {
        target.setRunTimes(1);
        Passenger passenger = new Passenger(1L, "Adam Smith", "test", 4000);
        Mockito.when(repository.findById(1L)).thenReturn(passenger);
        Mockito.when(repository.update(Mockito.any(Passenger.class)))
                .thenReturn(new Passenger(1L, "Adam Smith", "test", 5000));
    }
}

The Pact Broker host and port are injected from the application.yml file.

pactbroker:
  host: "192.168.99.100"
  port: "8090"

Performance tests with Gatling

An important step of testing microservices before deploying them to production is performance testing. One of the interesting tools in this area is Gatling. It is a highly capable load testing tool written in Scala, which means that we also have to use its Scala DSL in order to build test scenarios. Let's begin by adding the required library to the pom.xml file.

<dependency>
	<groupId>io.gatling.highcharts</groupId>
	<artifactId>gatling-charts-highcharts</artifactId>
	<version>2.3.1</version>
</dependency>

Now, we may proceed to the test. In the scenario visible below we are testing two endpoints exposed by trip-management: POST /trips and PUT /trips/payment/${tripId}. In fact, this scenario verifies the whole functionality of our sample system, where we set up a trip and then pay for it after it is finished.
Every test class using Gatling needs to extend the Simulation class. We define a scenario using the scenario method and then set its name. We may define multiple executions inside a single scenario. After every execution of the POST /trips method the test saves the generated id returned by the service. Then it inserts that id into the URL used for calling the PUT /trips/payment/${tripId} method. Every single request is expected to return a response with 200 OK status.
Gatling provides two interesting features which are worth mentioning, and you can see how they are used in the following performance test. The first of them is the feeder. It is used for polling records and injecting their content into the test. The feeder rPassengers selects one of five defined logins randomly. The final test result may be verified using the Assertions API, which is responsible for verifying that global statistics like response time or the number of failed requests match expectations for the whole simulation. In the scenario visible below the criterion is the maximum response time, which needs to be lower than 100 milliseconds.

class CreateAndPayTripPerformanceTest extends Simulation {

  val rPassengers = Iterator.continually(Map("passenger" -> List("walker","smith","hamilton","scott","holmes").lift(Random.nextInt(5)).get))

  val scn = scenario("CreateAndPayTrip").feed(rPassengers).repeat(100, "n") {
    exec(http("CreateTrip-API")
      .post("http://localhost:8090/trips")
      .header("Content-Type", "application/json")
      .body(StringBody("""{"destination":"test${n}","locationX":${n},"locationY":${n},"username":"${passenger}"}"""))
      .check(status.is(200), jsonPath("$.id").saveAs("tripId"))
    ).exec(http("PayTrip-API")
      .put("http://localhost:8090/trips/payment/${tripId}")
      .header("Content-Type", "application/json")
      .check(status.is(200))
    )
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(5, TimeUnit.MINUTES))
    .assertions(global.responseTime.max.lt(100))

}

In order to run a Gatling performance test you need to include the following Maven plugin in your pom.xml. You may run a single scenario or multiple scenarios. After including the plugin you only need to execute the command mvn clean gatling:test.

<plugin>
	<groupId>io.gatling</groupId>
	<artifactId>gatling-maven-plugin</artifactId>
	<version>2.2.4</version>
	<configuration>
		<simulationClass>pl.piomin.performance.tests.CreateAndPayTripPerformanceTest</simulationClass>
	</configuration>
</plugin>

Here are some diagrams illustrating the results of the performance tests for our microservice. Because the maximum response time was greater than the value set inside the assertion (100 ms), the test failed.

microservices-testing-2

and …

microservices-testing-3

Summary

The right selection of tools is not the most important part of microservices testing. However, the right tools can help you face the key challenges related to it. Hoverfly allows you to create full component tests that verify whether your microservice is able to handle delays or errors from downstream services. Pact helps you organize teamwork by sharing and verifying contracts between independently developed microservices. Finally, Gatling can help you implement load tests for selected scenarios, in order to verify the end-to-end performance of your system.
The source code used as a demo for this article is available on GitHub: https://github.com/piomin/sample-testing-microservices.git. If you found this article interesting, you may also be interested in some other articles related to this subject:

GraphQL – The Future of Microservices?

Often, GraphQL is presented as a revolutionary way of designing web APIs in comparison to REST. However, if you take a closer look at that technology you will see that there are not so many differences between them. GraphQL is a relatively new solution that was open-sourced by Facebook in 2015. Today, REST is still the most popular paradigm used for exposing APIs and for inter-service communication between microservices. Is GraphQL going to overtake REST in the future? Let's take a look at how to create microservices communicating through a GraphQL API using Spring Boot and the Apollo client.

Let's begin with the architecture of our sample system. We have three microservices that communicate with each other using URLs taken from Eureka service discovery.

graphql-arch

1. Enabling Spring Boot support for GraphQL

We can easily enable support for GraphQL in a server-side Spring Boot application just by including some starters. After including graphql-spring-boot-starter the GraphQL servlet is automatically accessible under the path /graphql. We can override that default path by setting the property graphql.servlet.mapping in the application.yml file. We should also enable GraphiQL – an in-browser IDE for writing, validating, and testing GraphQL queries – and the GraphQL Java Tools library, which contains useful components for creating queries and mutations. Thanks to that library, any files on the classpath with the .graphqls extension will be used to provide the schema definition.

<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphql-spring-boot-starter</artifactId>
	<version>5.0.2</version>
</dependency>
<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphiql-spring-boot-starter</artifactId>
	<version>5.0.2</version>
</dependency>
<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphql-java-tools</artifactId>
	<version>5.2.3</version>
</dependency>

2. Building GraphQL schema definition

Every schema definition contains data type declarations, relationships between them, and a set of operations including queries for searching objects and mutations for creating, updating or deleting data. Usually we start by creating a type declaration, which is responsible for the domain object definition. You can specify whether a field is required using the ! character or whether it is an array using [...]. A definition has to contain a type declaration or a reference to other types available in the specification.

type Employee {
  id: ID!
  organizationId: Int!
  departmentId: Int!
  name: String!
  age: Int!
  position: String!
  salary: Int!
}

Here's the Java class equivalent to the GraphQL definition visible above. The GraphQL type Int can also be mapped to Java Long. The ID scalar type represents a unique identifier – in this case it is also mapped to Java Long.

public class Employee {

	private Long id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	private int salary;
	
	// constructor
	
	// getters
	// setters
	
}

The next part of the schema definition contains the declaration of queries and mutations. Most of the queries return a list of objects, which is marked with [Employee]. Inside the EmployeeQueries type we have declared all the find methods, while inside the EmployeeMutations type we have the methods for adding, updating and removing employees. If you pass a whole object to such a method you need to declare it as an input type.

schema {
  query: EmployeeQueries
  mutation: EmployeeMutations
}

type EmployeeQueries {
  employees: [Employee]
  employee(id: ID!): Employee!
  employeesByOrganization(organizationId: Int!): [Employee]
  employeesByDepartment(departmentId: Int!): [Employee]
}

type EmployeeMutations {
  newEmployee(employee: EmployeeInput!): Employee
  deleteEmployee(id: ID!) : Boolean
  updateEmployee(id: ID!, employee: EmployeeInput!): Employee
}

input EmployeeInput {
  organizationId: Int
  departmentId: Int
  name: String
  age: Int
  position: String
  salary: Int
}

3. Queries and mutation implementation

Thanks to GraphQL Java Tools and the Spring Boot GraphQL auto-configuration we don't need to do much to implement queries and mutations in our application. The EmployeeQueries bean has to implement the GraphQLQueryResolver interface. Based on that, Spring is able to automatically detect and call the right method in response to one of the GraphQL queries declared inside the schema. Here's the class containing the implementation of the queries.

@Component
public class EmployeeQueries implements GraphQLQueryResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeQueries.class);
	
	@Autowired
	EmployeeRepository repository;
	
	public List<Employee> employees() {
		LOGGER.info("Employees find");
		return repository.findAll();
	}
	
	public List<Employee> employeesByOrganization(Long organizationId) {
		LOGGER.info("Employees find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}

	public List<Employee> employeesByDepartment(Long departmentId) {
		LOGGER.info("Employees find: departmentId={}", departmentId);
		return repository.findByDepartment(departmentId);
	}
	
	public Employee employee(Long id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id);
	}
	
}

If you would like to call, for example, the method employee(Long id), you should build the following query. You can easily test it in your application using the GraphiQL tool available under the path /graphiql.

graphql-1
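The GraphiQL screenshot is not reproduced here, but a query equivalent to the one shown there might look like the following; the selected fields are just an example, since any subset of the Employee type can be requested.

query {
  employee(id: 1) {
    id
    name
    position
    salary
  }
}
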
The bean responsible for the implementation of mutation methods needs to implement GraphQLMutationResolver. Despite the declaration of EmployeeInput in the schema, we can still use the same domain object as is returned by queries – Employee.

@Component
public class EmployeeMutations implements GraphQLMutationResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeQueries.class);
	
	@Autowired
	EmployeeRepository repository;
	
	public Employee newEmployee(Employee employee) {
		LOGGER.info("Employee add: employee={}", employee);
		return repository.add(employee);
	}
	
	public boolean deleteEmployee(Long id) {
		LOGGER.info("Employee delete: id={}", id);
		return repository.delete(id);
	}
	
	public Employee updateEmployee(Long id, Employee employee) {
		LOGGER.info("Employee update: id={}, employee={}", id, employee);
		return repository.update(id, employee);
	}
	
}

We can also use GraphiQL to test mutations. Here's the command that adds a new employee and receives a response with the employee's id and name.

graphql-2
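As the screenshot is not included here, an example of such a mutation might look as follows; the input values are made up purely for illustration.

mutation {
  newEmployee(employee: {
    organizationId: 1
    departmentId: 1
    name: "John Smith"
    age: 33
    position: "Developer"
    salary: 10000
  }) {
    id
    name
  }
}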

4. Generating client-side classes

Ok, we have successfully created the server-side application and we have already tested some queries using GraphiQL. But our main goal is to create some other microservices that communicate with the employee-service application through its GraphQL API. And here is where most tutorials about Spring Boot and GraphQL end.
To be able to communicate with our first application through the GraphQL API we have two choices. We can take a standard REST client and implement the calls to the GraphQL API ourselves with plain HTTP requests, or we can use one of the existing Java clients. Surprisingly, there are not many GraphQL Java client implementations available. The most serious choice is the Apollo GraphQL Client for Android. Of course it is not designed only for Android devices, and you can successfully use it in your Java microservice application.
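For illustration, calling a GraphQL endpoint without any dedicated client boils down to a single HTTP POST with the query wrapped in a JSON payload. Here's a minimal, hypothetical sketch using Spring's RestTemplate – the address and the query are example values only, and this snippet is not part of the sample repository.

// plain HTTP approach mentioned above; uses org.springframework.web.client.RestTemplate and java.util.Collections
RestTemplate restTemplate = new RestTemplate();
// the GraphQL query is sent as the 'query' field of a JSON document
Map<String, String> request = Collections.singletonMap("query", "{ employees { id name } }");
// assumes employee-service is listening locally on port 8080
String response = restTemplate.postForObject("http://localhost:8080/graphql", request, String.class);
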
Before using the client we need to generate classes from the schema and .graphql files. The recommended way to do it is through the Apollo Gradle Plugin. There are also some Maven plugins, but none of them provides the same level of automation as the Gradle plugin – for example, it automatically downloads the Node.js runtime required for generating client-side classes. So, the first step is to add the Apollo plugin and runtime to the project dependencies.

buildscript {
  repositories {
    jcenter()
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
  }
  dependencies {
    classpath 'com.apollographql.apollo:apollo-gradle-plugin:1.0.1-SNAPSHOT'
  }
}

apply plugin: 'com.apollographql.android'

dependencies {
  compile 'com.apollographql.apollo:apollo-runtime:1.0.1-SNAPSHOT'
}

The Apollo Gradle plugin tries to find files with the .graphql extension and schema.json inside the src/main/graphql directory. The GraphQL JSON schema can be obtained from your Spring Boot application by calling the resource /graphql/schema.json. The .graphql files contain the query definitions. The query employeesByOrganization will be called by organization-service, while employeesByDepartment will be called by both department-service and organization-service. Those two applications need a slightly different set of data in the response. Application department-service requires more detailed information about every employee than organization-service. GraphQL is an excellent solution in that case, because we can define the required set of data in the response on the client side. Here's the query definition of employeesByOrganization called by organization-service.

query EmployeesByOrganization($organizationId: Int!) {
  employeesByOrganization(organizationId: $organizationId) {
    id
    name
  }
}

Application organization-service would also call employeesByDepartment query.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
  }
}

The query employeesByDepartment is also called by department-service, which requires not only id and name fields, but also position and salary.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
    position
    salary
  }
}

All the generated classes are available under build/generated/source/apollo directory.

5. Building Apollo client with discovery

After generating all the required classes and including them in the calling microservices we may proceed to the client implementation. The Apollo client has two important features that will affect our implementation:

  • It provides only asynchronous methods based on callbacks
  • It does not integrate with service discovery based on Spring Cloud Netflix Eureka

Here's the implementation of the employee-service client inside department-service. I used EurekaClient directly (1). It gets all running instances registered as EMPLOYEE-SERVICE and then selects one instance from the list of available instances randomly (2). The port number of that instance is passed to ApolloClient (3). Before calling the asynchronous method enqueue provided by ApolloClient we create a lock (4), which waits at most 5 seconds to be released (8). The method enqueue returns the response in the callback method onResponse (5). We map the response body from the GraphQL Employee object to the returned object (6) and then release the lock (7).

@Component
public class EmployeeClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeClient.class);
	private static final int TIMEOUT = 5000;
	private static final String SERVICE_NAME = "EMPLOYEE-SERVICE"; 
	private static final String SERVER_URL = "http://localhost:%d/graphql";
	
	Random r = new Random();
	
	@Autowired
	private EurekaClient discoveryClient; // (1)
	
	public List<Employee> findByDepartment(Long departmentId) throws InterruptedException {
		List<Employee> employees = new ArrayList<>();
		Application app = discoveryClient.getApplication(SERVICE_NAME); // (2)
		InstanceInfo ii = app.getInstances().get(r.nextInt(app.size()));
		ApolloClient client = ApolloClient.builder().serverUrl(String.format(SERVER_URL, ii.getPort())).build(); // (3)
		CountDownLatch lock = new CountDownLatch(1); // (4)
		client.query(EmployeesByDepartmentQuery.builder().build()).enqueue(new Callback() {

			@Override
			public void onFailure(ApolloException ex) {
				LOGGER.info("Err: {}", ex);
				lock.countDown();
			}

			@Override
			public void onResponse(Response res) { // (5)
				LOGGER.info("Res: {}", res);
				employees.addAll(res.data().employees().stream().map(emp -> new Employee(Long.valueOf(emp.id()), emp.name(), emp.position(), emp.salary())).collect(Collectors.toList())); // (6)
				lock.countDown(); // (7)
			}

		});
		lock.await(TIMEOUT, TimeUnit.MILLISECONDS); // (8)
		return employees;
	}
	
}

Finally, EmployeeClient is injected into the query resolver class – DepartmentQueries – and used inside the query departmentsByOrganizationWithEmployees.

@Component
public class DepartmentQueries implements GraphQLQueryResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentQueries.class);
	
	@Autowired
	EmployeeClient employeeClient;
	@Autowired
	DepartmentRepository repository;

	public List<Department> departmentsByOrganizationWithEmployees(Long organizationId) {
		LOGGER.info("Departments find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> {
			try {
				d.setEmployees(employeeClient.findByDepartment(d.getId()));
			} catch (InterruptedException e) {
				LOGGER.error("Error calling employee-service", e);
			}
		});
		return departments;
	}
	
	// other queries
	
}

Before calling the target query we should take a look at the schema created for department-service. Every Department object can contain a list of assigned employees, so we also define the Employee type referenced by the Department type.

schema {
  query: DepartmentQueries
  mutation: DepartmentMutations
}

type DepartmentQueries {
  departments: [Department]
  department(id: ID!): Department!
  departmentsByOrganization(organizationId: Int!): [Department]
  departmentsByOrganizationWithEmployees(organizationId: Int!): [Department]
}

type DepartmentMutations {
  newDepartment(department: DepartmentInput!): Department
  deleteDepartment(id: ID!) : Boolean
  updateDepartment(id: ID!, department: DepartmentInput!): Department
}

input DepartmentInput {
  organizationId: Int!
  name: String!
}

type Department {
  id: ID!
  organizationId: Int!
  name: String!
  employees: [Employee]
}

type Employee {
  id: ID!
  name: String!
  position: String!
  salary: Int!
}

Now, we can call our test query with the list of required fields using GraphiQL. The department-service application is by default available under port 8091, so we may call it using the address http://localhost:8091/graphiql.

graphql-3
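For reference, the query visible in GraphiQL corresponds to something like the snippet below; the organizationId value and the selected fields are only an example.

query {
  departmentsByOrganizationWithEmployees(organizationId: 1) {
    id
    name
    employees {
      id
      name
      position
      salary
    }
  }
}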

Conclusion

GraphQL seems to be an interesting alternative to standard REST APIs. However, we should not consider it a replacement for REST. There are some use cases where GraphQL may be the better choice, and some use cases where REST is the better choice. If your clients do not need the full set of fields returned by the server side, and moreover you have many clients with different requirements towards a single endpoint, GraphQL is a good choice. When it comes to microservices, there are no Java-based solutions that allow you to use GraphQL together with service discovery, load balancing or an API gateway out of the box. In this article I have shown an example of using the Apollo GraphQL client together with Spring Cloud Eureka for inter-service communication. The source code of the sample applications is available on GitHub: https://github.com/piomin/sample-graphql-microservices.git.

Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker

Here's the next article in the "Quick Guide to…" series. This time we will discuss and run examples of Spring Boot microservices on Kubernetes. The structure of this article is quite similar to Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud, as they describe the same aspects of application development. I'm going to focus on showing you the differences and similarities in development between Spring Cloud and Kubernetes. The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development
  • Providing service discovery for all microservices using Spring Cloud Kubernetes project
  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets
  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files
  • Using Spring Cloud Kubernetes together with Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competitive solutions when you build a microservices environment. Components like Eureka, Spring Cloud Config or Zuul provided by Spring Cloud may be replaced by built-in Kubernetes objects like services, config maps, secrets or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can still take advantage of some interesting features provided throughout the whole Spring Cloud project.

The one really interesting project that helps us in development is Spring Cloud Kubernetes (https://github.com/spring-cloud-incubator/spring-cloud-kubernetes). Although it is still in the incubation stage, it is definitely worth dedicating some time to. It integrates Spring Cloud with Kubernetes. I'll show you how to use its implementation of a discovery client, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let's take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the already mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.

micro-kube-1

Let’s proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released as part of the Spring Cloud Release Trains, so we need to define its version explicitly. Because we use Spring Boot 2.0, we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of sample applications presented in this article is available on GitHub in repository https://github.com/piomin/sample-spring-microservices-kubernetes.git.

Pre-requirements

In order to be able to deploy and test our sample microservices we need to prepare a development environment. We can do that in the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. The detailed instructions for Minishift may be found in: Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube – just replace the word 'minishift' with 'minikube'. In fact, it does not matter whether you choose Kubernetes or OpenShift – the next part of this tutorial is applicable to both of them
  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve the list of addresses of the pods running for a single service. If you use Kubernetes you should just execute the following command:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift you should first enable admin-user addon, then login as a cluster admin, and grant required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
  • All our sample microservices use MongoDB as a backend store, so you should first run an instance of this database on your node. With Minishift it is quite simple, as you can use a predefined template just by selecting the Mongo service from the Catalog list. With Kubernetes the task is more difficult: you have to prepare the deployment configuration files by yourself and apply them to the cluster. All the configuration files are available under the kubernetes directory inside the sample Git repository. To apply the following YAML definition to the cluster you should execute the command kubectl apply -f kubernetes\mongo-deployment.yaml. After that the Mongo database will be available under the name mongodb inside the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject configuration with Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes you can use a ConfigMap. It holds key-value pairs of configuration data that can be consumed in pods, and it is used for storing and sharing non-sensitive, unencrypted configuration information. To use sensitive information in your clusters, you must use Secrets. The usage of both these Kubernetes objects can be perfectly demonstrated with the example of the MongoDB connection settings. Inside a Spring Boot application we can easily inject them using environment variables. Here's the fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While the username and password are sensitive fields, the database name is not, so we can place it inside a config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, the username and password are defined as a Secret. Note that the values are Base64-encoded: cGlvdHI= stands for 'piotr' and MTIzNDU2 for '123456'.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=

To apply the configuration to Kubernetes cluster we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that we should inject the configuration properties into the application's pods. When defining the container configuration inside the Deployment YAML file we have to define environment variables that reference the config map and the secret, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building service discovery with Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster, and is able to load balance incoming requests between all running pods. The following service definition groups all pods labelled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee

A Service can be used for accessing the application from outside the Kubernetes cluster or for inter-service communication inside the cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First we need to include the following dependency in the project pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for the application – the same as we have always done for discovery with Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of the pods defined for a microservice to be load balanced, or the Zipkin servers available for sending traces and spans. A small illustration of using the discovery client directly follows the main class below.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
	// ...
}
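
To illustrate what the discovery client gives us, here is a minimal, hypothetical sketch (not present in the sample repository) that uses the generic Spring Cloud DiscoveryClient abstraction to list the pods currently backing the employee service.

@RestController
public class DiscoveryController {

	// org.springframework.cloud.client.discovery.DiscoveryClient, backed here by the Kubernetes implementation
	@Autowired
	private DiscoveryClient discoveryClient;

	// returns host:port of every pod registered behind the 'employee' Kubernetes service
	@GetMapping("/discovery/employee")
	public List<String> employeeInstances() {
		return discoveryClient.getInstances("employee").stream()
				.map(instance -> instance.getHost() + ":" + instance.getPort())
				.collect(Collectors.toList());
	}

}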

The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name for the application. For application employee-service it is employee.

spring:
  application:
    name: employee

3. Building microservice using Docker and deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB and generating API documentation using Swagger2.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB we should create an interface that extends the standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
	
	List<Employee> findByDepartmentId(Long departmentId);
	List<Employee> findByOrganizationId(Long organizationId);
	
}

The entity class should be annotated with Mongo's @Document and the primary key field with @Id.

@Document(collection = "employee")
public class Employee {

	@Id
	private String id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	
	// ...
	
}

The repository bean is injected into the controller class. Here's the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

In order to run our microservices on Kubernetes we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in its root directory. Here's the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let’s build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd department-service
$ docker build -t piomin/department:1.0 .
$ cd organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with the applications on Kubernetes. To do that just execute the kubectl apply command on the YAML configuration files. The sample deployment file for employee-service has been demonstrated in step 1. All required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml

4. Communication between microservices with Spring Cloud Kubernetes Ribbon

All the microservices are now deployed on Kubernetes, so it's worth discussing some aspects related to inter-service communication. Application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let's take a look at the other microservices, which call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).
First we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use Spring's @LoadBalanced RestTemplate (a sketch of that option follows the controller below).

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Here's the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing, and Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
	
	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
	// ...
	
}

Here's the implementation of the Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
	
}

Finally, we have to inject the Feign client's bean into the REST controller. Now we may call the method defined inside EmployeeClient, which is equivalent to calling the REST endpoint.

@RestController
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Autowired
	DepartmentRepository repository;
	@Autowired
	EmployeeClient employeeClient;
	
	// ...
	
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganizationId(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}
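
As mentioned earlier, a @LoadBalanced RestTemplate could be used instead of OpenFeign. Here's a minimal sketch of that variant, assuming the same employee service name; it is an illustration only, not code from the sample repository.

@Configuration
public class RestTemplateConfig {

	// Ribbon (backed by Spring Cloud Kubernetes) resolves the 'employee' service name to a concrete pod address
	@Bean
	@LoadBalanced
	RestTemplate restTemplate() {
		return new RestTemplate();
	}

}

// example usage inside a controller or service:
// Employee[] employees = restTemplate.getForObject("http://employee/department/{departmentId}", Employee[].class, departmentId);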

5. Building API gateway using Kubernetes Ingress

An Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture the ingress plays the role of an API gateway. To create it we should first prepare a YAML descriptor file. It should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration visible above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

For testing this solution locally we have to insert the mapping between the IP address and the hostname set in the ingress definition into the hosts file, as shown below. After that we can access the services through the ingress using the defined hostname, like this: http://microservices.info/employee.

192.168.99.100 microservices.info

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.
micro-kube-2

6. Enabling API specification on gateway using Swagger2

Ok, what if we would like to expose a single Swagger documentation for all the microservices deployed on Kubernetes? Well, here things get complicated… We could run a container with Swagger UI and map all the paths exposed by the ingress manually, but that is rather not a good solution…
In that case we can use Spring Cloud Kubernetes Ribbon one more time – this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API documentation.
Here’s the list of dependencies used in my gateway-service project.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed on the cluster, but we would like to display documentation only for our three microservices. That's why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the route addresses from Kubernetes discovery, and configure them as Swagger resources, as shown below.

@Configuration
public class GatewayApi {

	@Autowired
	ZuulProperties properties;

	@Primary
	@Bean
	public SwaggerResourcesProvider swaggerResourcesProvider() {
		return () -> {
			List<SwaggerResource> resources = new ArrayList<>();
			properties.getRoutes().values().stream()
					.forEach(route -> resources.add(createResource(route.getId(), "2.0")));
			return resources;
		};
	}

	private SwaggerResource createResource(String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(location);
		swaggerResource.setLocation("/" + location + "/v2/api-docs");
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}

Application gateway-service should be deployed on the cluster the same way as the other applications. You can check the list of running services by executing the command kubectl get svc. The Swagger documentation is available under the address http://192.168.99.100:31237/swagger-ui.html.
micro-kube-3

Conclusion

I'm actually rooting for the Spring Cloud Kubernetes project, which is still at the incubation stage. Kubernetes' popularity as a platform has been growing rapidly over the last months, but it still has some weaknesses. One of them is inter-service communication. Kubernetes doesn't give us many mechanisms out of the box that allow configuring more advanced rules. This is a reason for creating service mesh frameworks on Kubernetes like Istio or Linkerd. While those projects are still relatively new solutions, Spring Cloud is a stable, opinionated framework. Why not use it to provide service discovery, inter-service communication or load balancing? Thanks to Spring Cloud Kubernetes it is possible.

Intro to Blockchain with Ethereum, Web3j and Spring Boot: Smart Contracts

I have already provided a quick introduction to building Spring Boot applications with Ethereum and web3j in one of my latest articles: Introduction to Blockchain with Java using Ethereum, web3j and Spring Boot. That article has attracted much interest from you, so I decided to describe some more advanced aspects related to Ethereum and web3j. Today I'm going to show you how to implement Ethereum smart contracts in your application. First, let's define what exactly a smart contract is.

A smart contract is just a program that is executed on the EVM (Ethereum Virtual Machine). Each contract contains a collection of code (functions) and data. It has an address in the Ethereum blockchain, can interact with other contracts, make decisions, store data, and send ether to others. Ethereum smart contracts are usually written in a language named Solidity, which is a statically typed, high-level language. Every contract needs to be compiled; after that you can generate source code for your application based on the compiled binaries. The web3j library provides tools dedicated to that. Before we proceed to the source code, let's discuss the architecture of our sample system.

It consists of two independent applications: contract-service and transaction-service. Most of the business logic is performed by the contract-service application. It provides methods for creating smart wallets, deploying smart contracts on Ethereum and calling contract functions. Application transaction-service is responsible only for performing transactions between a third party and the owner of a contract. It gets the owner's account by calling an endpoint exposed by contract-service. Application contract-service observes the transactions performed on the Ethereum node. If a transaction is related to the contract owner's account, the application calls the function responsible for transferring funds to the contract receiver's account on all contracts signed by this owner. Here's the diagram that illustrates the process described above.

blockchain-contract

1. Building a smart contract with Solidity

The most popular language for creating smart contracts on Ethereum is Solidity. Solidity is a contract-oriented, high-level language for implementing smart contracts. It was influenced by C++, Python and JavaScript and is designed to target the Ethereum Virtual Machine (EVM). It is statically typed and supports inheritance, libraries and complex user-defined types, among other features. For more information about the language you should refer to the Solidity documentation available at http://solidity.readthedocs.io/.

Our main goal in this article is just to build a simple contract, compile it and generate the required source code. That's why I don't want to go into the exact implementation details of contracts written in Solidity. Here's the implementation of a contract responsible for calculating a fee for an incoming transaction. On the basis of this calculation it deposits funds on the receiver's account and withdraws funds from the sender's account. This contract is signed between two users, each of whom has their own smart wallet secured by their credentials. Understanding this simple contract is very important, so let's analyze it line by line.

Each contract is described by the percentage of the transaction that goes to the receiver's account (1) and the receiver's account address (2). The first two lines of the contract declare variables for storing these parameters: fee of Solidity type uint, and receiver of type address. Both these values are initialized inside the contract's constructor (5). The parameter fee indicates the percentage of the transaction that is withdrawn from the sender's account and deposited on the receiver's account. The line mapping (address => uint) public balances maps the addresses of all balances to unsigned integers (3). We have also defined the event Sent, which is emitted after every transaction within the contract (4). The function getReceiverBalance returns the receiver's account balance (6). Finally, there is the function sendTrx(...) that can be called by an external client (7). It is responsible for performing the withdrawal and deposit operations based on the contract's percentage fee and the transaction amount, and it requires a little more attention. First, it needs to have the payable modifier to be able to transfer funds between Ethereum accounts. After that, the transaction amount can be read from the msg.value parameter. Then we call the function send on the receiver address variable with the given amount in Wei, and save this value in the contract's balances mapping. Additionally, we emit an event that can be received by the client application.

pragma solidity ^0.4.21;

contract TransactionFee {

    // (1)
    uint public fee;
    // (2)
    address public receiver;
    // (3)
    mapping (address => uint) public balances;
    // (4)
    event Sent(address from, address to, uint amount, bool sent);

    // (5)
    constructor(address _receiver, uint _fee) public {
        receiver = _receiver;
        fee = _fee;
    }

    // (6)
    function getReceiverBalance() public view returns(uint) {
        return receiver.balance;
    }

    // (7)
    function sendTrx() public payable {
        uint value = msg.value * fee / 100;
        bool sent = receiver.send(value);
        balances[receiver] += (value);
        emit Sent(msg.sender, receiver, value, sent);
    }

}

Once we have created a contract, we have to compile it and generate source code that can be used inside our application to deploy the contract and call its functions. For a quick check you can use the Solidity compiler available online at https://remix.ethereum.org.

2. Compiling contract and generating source code

Solidity provides up-to-date Docker builds of its compiler. Released versions are tagged with stable, while unstable changes from the development branch are tagged with nightly. However, that Docker image contains only the compiler executable, so we have to mount a volume with the input Solidity contract file. Assuming that it is available under the directory /home/docker on our Docker machine, we can compile it using the following command. This command creates two files: a binary .bin file, which is the smart contract code in a format the EVM can interpret, and an application binary interface .abi file, which defines the smart contract methods.

$ docker run --rm -v /home/docker:/build ethereum/solc:stable /build/TransactionFee.sol --bin --abi --optimize -o /build

The compilation output files are available under /build in the container, and are persisted inside the /home/docker directory. The container is removed after compilation, because it is not needed anymore. We can generate source code from the compiled contract using the executable provided together with the web3j library, which is available under the directory ${WEB3J_HOME}/bin. When generating source code using web3j we should pass the location of the .bin and .abi files, then set the target package name and output directory.

$ web3j solidity generate /build/transactionfee.bin /build/transactionfee.abi -p pl.piomin.services.contract.model -o src/main/java/

The web3j executable generates a Java source file named after the Solidity contract inside the given package. Here are the most important fragments of the generated source file.

public class Transactionfee extends Contract {
    private static final String BINARY = "608060405234801561..."
    public static final String FUNC_GETRECEIVERBALANCE = "getReceiverBalance";
    public static final String FUNC_BALANCES = "balances";
    public static final String FUNC_SENDTRX = "sendTrx";
    public static final String FUNC_FEE = "fee";
    public static final String FUNC_RECEIVER = "receiver";

    // ...

    protected Transactionfee(String contractAddress, Web3j web3j, TransactionManager transactionManager, BigInteger gasPrice, BigInteger gasLimit) {
        super(BINARY, contractAddress, web3j, transactionManager, gasPrice, gasLimit);
    }

    public RemoteCall getReceiverBalance() {
        final Function function = new Function(FUNC_GETRECEIVERBALANCE,
                Arrays.asList(),
                Arrays.asList(new TypeReference() {}));
        return executeRemoteCallSingleValueReturn(function, BigInteger.class);
    }

    public RemoteCall balances(String param0) {
        final Function function = new Function(FUNC_BALANCES,
                Arrays.asList(new org.web3j.abi.datatypes.Address(param0)),
                Arrays.asList(new TypeReference() {}));
        return executeRemoteCallSingleValueReturn(function, BigInteger.class);
    }

    public RemoteCall sendTrx(BigInteger weiValue) {
        final Function function = new Function(
                FUNC_SENDTRX,
                Arrays.asList(),
                Collections.emptyList());
        return executeRemoteCallTransaction(function, weiValue);
    }

    public RemoteCall fee() {
        final Function function = new Function(FUNC_FEE,
                Arrays.asList(),
                Arrays.asList(new TypeReference() {}));
        return executeRemoteCallSingleValueReturn(function, BigInteger.class);
    }

    public RemoteCall receiver() {
        final Function function = new Function(FUNC_RECEIVER,
                Arrays.asList(),
                Arrays.<TypeReference>asList(new TypeReference<Address>() {}));
        return executeRemoteCallSingleValueReturn(function, String.class);
    }

    public static RemoteCall deploy(Web3j web3j, Credentials credentials, BigInteger gasPrice, BigInteger gasLimit, String _receiver, BigInteger _fee) {
        String encodedConstructor = FunctionEncoder.encodeConstructor(Arrays.asList(new org.web3j.abi.datatypes.Address(_receiver),
                new org.web3j.abi.datatypes.generated.Uint256(_fee)));
        return deployRemoteCall(Transactionfee.class, web3j, credentials, gasPrice, gasLimit, BINARY, encodedConstructor);
    }

    public static RemoteCall deploy(Web3j web3j, TransactionManager transactionManager, BigInteger gasPrice, BigInteger gasLimit, String _receiver, BigInteger _fee) {
        String encodedConstructor = FunctionEncoder.encodeConstructor(Arrays.asList(new org.web3j.abi.datatypes.Address(_receiver),
                new org.web3j.abi.datatypes.generated.Uint256(_fee)));
        return deployRemoteCall(Transactionfee.class, web3j, transactionManager, gasPrice, gasLimit, BINARY, encodedConstructor);
    }

    // ...

    public Observable sentEventObservable(DefaultBlockParameter startBlock, DefaultBlockParameter endBlock) {
        EthFilter filter = new EthFilter(startBlock, endBlock, getContractAddress());
        filter.addSingleTopic(EventEncoder.encode(SENT_EVENT));
        return sentEventObservable(filter);
    }

    public static Transactionfee load(String contractAddress, Web3j web3j, Credentials credentials, BigInteger gasPrice, BigInteger gasLimit) {
        return new Transactionfee(contractAddress, web3j, credentials, gasPrice, gasLimit);
    }

    public static Transactionfee load(String contractAddress, Web3j web3j, TransactionManager transactionManager, BigInteger gasPrice, BigInteger gasLimit) {
        return new Transactionfee(contractAddress, web3j, transactionManager, gasPrice, gasLimit);
    }

    public static class SentEventResponse {
        public Log log;
        public String from;
        public String to;
        public BigInteger amount;
        public Boolean sent;
    }
}

3. Deploying contract

Once we have successfully generated the Java class representing the contract inside our application, we may proceed to the application development. We will begin with contract-service. First, we will create a smart wallet with credentials holding sufficient funds for signing contracts as an owner. The following fragment of code is responsible for that, and is invoked just after application startup. You can also see here an implementation of an HTTP GET method responsible for returning the owner account address.

@PostConstruct
public void init() throws IOException, CipherException, NoSuchAlgorithmException, NoSuchProviderException, InvalidAlgorithmParameterException {
	String file = WalletUtils.generateLightNewWalletFile("piot123", null);
	credentials = WalletUtils.loadCredentials("piot123", file);
	LOGGER.info("Credentials created: file={}, address={}", file, credentials.getAddress());
	EthCoinbase coinbase = web3j.ethCoinbase().send();
	EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(coinbase.getAddress(), DefaultBlockParameterName.LATEST).send();
	Transaction transaction = Transaction.createEtherTransaction(coinbase.getAddress(), transactionCount.getTransactionCount(), BigInteger.valueOf(20_000_000_000L), BigInteger.valueOf(21_000), credentials.getAddress(),BigInteger.valueOf(25_000_000_000_000_000L));
	web3j.ethSendTransaction(transaction).send();
	EthGetBalance balance = web3j.ethGetBalance(credentials.getAddress(), DefaultBlockParameterName.LATEST).send();
	LOGGER.info("Balance: {}", balance.getBalance().longValue());
}

@GetMapping("/owner")
public String getOwnerAccount() {
	return credentials.getAddress();
}

Application contract-service exposes some endpoints that can be called by an external client or by the second application in our sample system – transaction-service. The following implementation of the POST /contract method performs two actions. First, it creates a new smart wallet with credentials. Then it uses those credentials to sign a smart contract with the address defined in the previous step. To sign a new contract you have to call the deploy method from the class generated from the Solidity definition – Transactionfee. It is responsible for deploying a new instance of the contract on the Ethereum node.

private List<String> contracts = new ArrayList<>();

@PostMapping
public Contract createContract(@RequestBody Contract newContract) throws Exception {
	String file = WalletUtils.generateLightNewWalletFile("piot123", null);
	Credentials receiverCredentials = WalletUtils.loadCredentials("piot123", file);
	LOGGER.info("Credentials created: file={}, address={}", file, receiverCredentials.getAddress());
	Transactionfee contract = Transactionfee.deploy(web3j, credentials, GAS_PRICE, GAS_LIMIT, receiverCredentials.getAddress(), BigInteger.valueOf(newContract.getFee())).send();
	newContract.setReceiver(receiverCredentials.getAddress());
	newContract.setAddress(contract.getContractAddress());
	contracts.add(contract.getContractAddress());
	LOGGER.info("New contract deployed: address={}", contract.getContractAddress());
	Optional<TransactionReceipt> tr = contract.getTransactionReceipt();
	if (tr.isPresent()) {
		LOGGER.info("Transaction receipt: from={}, to={}, gas={}", tr.get().getFrom(), tr.get().getTo(), tr.get().getGasUsed().intValue());
	}
	return newContract;
}
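
The GAS_PRICE and GAS_LIMIT constants referenced in the snippets above and below are not shown in the quoted fragments. A minimal sketch of how they might be defined, assuming the web3j defaults are good enough for a development node (the values come from web3j itself, not from the sample code):

// Assumption: these constants are not part of the quoted fragments; web3j's defaults are reused here.
private static final BigInteger GAS_PRICE = org.web3j.tx.ManagedTransaction.GAS_PRICE;
private static final BigInteger GAS_LIMIT = org.web3j.tx.Contract.GAS_LIMIT;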

Every contract deployed on Ethereum has its own unique address. The unique address of every created contract is stored by the application, so the application is able to load all existing contracts using those addresses. The following method is responsible for executing the sendTrx method on every stored contract.

public void processContracts(long transactionAmount) {
	contracts.forEach(it -> {
		Transactionfee contract = Transactionfee.load(it, web3j, credentials, GAS_PRICE, GAS_LIMIT);
		try {
			TransactionReceipt tr = contract.sendTrx(BigInteger.valueOf(transactionAmount)).send();
			LOGGER.info("Transaction receipt: from={}, to={}, gas={}", tr.getFrom(), tr.getTo(), tr.getGasUsed().intValue());
			LOGGER.info("Get receiver: {}", contract.getReceiverBalance().send().longValue());
			EthFilter filter = new EthFilter(DefaultBlockParameterName.EARLIEST, DefaultBlockParameterName.LATEST, contract.getContractAddress());
			web3j.ethLogObservable(filter).subscribe(log -> {
				LOGGER.info("Log: {}", log.getData());
			});
		} catch (Exception e) {
			LOGGER.error("Error during contract execution", e);
		}
	});
}

Application contract-service listens for transactions incoming to the Ethereum node that have been sent by transaction-service. If the target account of a transaction is equal to the contract owner's account, the transaction is processed.

@Autowired
Web3j web3j;
@Autowired
ContractService service;

@PostConstruct
public void listen() {
	web3j.transactionObservable().subscribe(tx -> {
		if (tx.getTo() != null && tx.getTo().equals(service.getOwnerAccount())) {
			LOGGER.info("New tx: id={}, block={}, from={}, to={}, value={}", tx.getHash(), tx.getBlockHash(), tx.getFrom(), tx.getTo(), tx.getValue().intValue());
			service.processContracts(tx.getValue().longValue());
		} else {
			LOGGER.info("Not matched: id={}, to={}", tx.getHash(), tx.getTo());
		}
	});
}

Here's the source code from transaction-service responsible for transferring funds from a third-party account to the contract owner's account.

@Value("${contract-service.url}")
String url;
@Autowired
Web3j web3j;
@Autowired
RestTemplate template;
Credentials credentials;

@PostMapping
public String performTransaction(@RequestBody TransactionRequest request) throws Exception {
	EthAccounts accounts = web3j.ethAccounts().send();
	String owner = template.getForObject(url, String.class);
	EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(accounts.getAccounts().get(request.getFromId()), DefaultBlockParameterName.LATEST).send();
	Transaction transaction = Transaction.createEtherTransaction(accounts.getAccounts().get(request.getFromId()), transactionCount.getTransactionCount(), GAS_PRICE, GAS_LIMIT, owner, BigInteger.valueOf(request.getAmount()));
	EthSendTransaction response = web3j.ethSendTransaction(transaction).send();
	if (response.getError() != null) {
		LOGGER.error("Transaction error: {}", response.getError().getMessage());
		return "ERR";
	}
	LOGGER.info("Transaction: {}", response.getResult());
	EthGetTransactionReceipt receipt = web3j.ethGetTransactionReceipt(response.getTransactionHash()).send();
	if (receipt.getTransactionReceipt().isPresent()) {
		TransactionReceipt r = receipt.getTransactionReceipt().get();
		LOGGER.info("Tx receipt: from={}, to={}, gas={}, cumulativeGas={}", r.getFrom(), r.getTo(), r.getGasUsed().intValue(), r.getCumulativeGasUsed().intValue());
	}
	EthGetBalance balance = web3j.ethGetBalance(accounts.getAccounts().get(request.getFromId()), DefaultBlockParameterName.LATEST).send();
	LOGGER.info("Balance: address={}, amount={}", accounts.getAccounts().get(request.getFromId()), balance.getBalance().longValue());
	balance = web3j.ethGetBalance(owner, DefaultBlockParameterName.LATEST).send();
	LOGGER.info("Balance: address={}, amount={}", owner, balance.getBalance().longValue());
	return response.getTransactionHash();
}

4. Test scenario

To run the test scenario we need to have launched:

  • Ethereum node in development mode in a Docker container
  • Ethereum Geth console client in a Docker container
  • Instance of the contract-service application, by default available on port 8090
  • Instance of the transaction-service application, by default available on port 8091

Instructions on how to run the Ethereum node and the Geth client using Docker containers are available in my previous article about blockchain, Introduction to Blockchain with Java using Ethereum, web3j and Spring Boot.

Before starting the sample applications we should create at least one test account on the Ethereum node. To achieve it we have to execute the personal.newAccount Geth command as shown below.

blockchain-contract-1

After startup, application transaction-service transfers some funds from the coinbase account to all other existing accounts.

blockchain-contract-2

The next step is to create some contracts using the owner account created automatically by contract-service on startup. You should call the POST /contract method with the fee parameter, which specifies the percentage of the transaction amount transferred from the contract owner's account to the contract receiver's account. Using the following commands I have deployed two contracts with fees of 10% and 5%. It means that 10% and 5% of each transaction sent to the owner's account by a third-party user is transferred to the accounts generated by the POST method. The address of the account created by the POST method is returned in the response in the receiver field.

curl -X POST -H "Content-Type: application/json" -d '{"fee":10}' http://localhost:8090/contract
{"fee": 10,"receiver": "0x864ef9931c2690efcc6a773760237c4b09f40e65","address": "0xa6205a746ae0858fa22d6451b794cc977faa507c"}
curl -X POST -H "Content-Type: application/json" -d '{"fee":5}' http://localhost:8090/contract
{"fee": 5,"receiver": "0x098898594d7acd1481324af779e431ab87a3155d","address": "0x9c64d6b0fc01ee055e114a528fb5ad853843cde3"}

If the contracts have been successfully deployed, the last thing to do is to send a transaction by calling the POST /transaction endpoint exposed by transaction-service. The owner account is automatically retrieved from contract-service. You have to set the transaction amount and the source account index (meaning eth.accounts[index]).

curl -X POST -H "Content-Type: application/json" -d '{"amount":1000000,"fromId":1}' http://localhost:8091/transaction

Ok, that's finally it. Now the transaction is received by contract-service, which executes the sendTrx(...) function on all defined contracts. As a result, 10% and 5% of that transaction amount goes to the contract receivers.

blockchain-contract-3

The source code of the sample applications is available in the repository sample-spring-blockchain-contract (https://github.com/piomin/sample-spring-blockchain-contract.git). Enjoy! 🙂

 

Spring REST Docs versus SpringFox Swagger for API documentation

Recently, I have come across some articles and mentions of Spring REST Docs, where it has been presented as a better alternative to traditional Swagger docs. Until now I have always used Swagger for building API documentation, so I decided to try Spring REST Docs. On the main page of that Spring project (https://spring.io/projects/spring-restdocs) you may even read some references to Swagger, for example: “This approach frees you from the limitations of the documentation produced by tools like Swagger”. Are you interested in building API documentation using Spring REST Docs? Let's take a closer look at that project!

The first difference in comparison to Swagger is a test-driven approach to generating API documentation. Thanks to that, Spring REST Docs ensures that the generated documentation always accurately matches the actual behavior of the API. When using the SpringFox Swagger library you just need to enable it for the project and provide some configuration to make it work according to your expectations. I have already described the usage of Swagger 2 for automatically building API documentation for Spring Boot based applications in two of my previous articles.

The articles mentioned above describe in detail how to use SpringFox Swagger in your Spring Boot application to automatically generate API documentation based on the source code. Here I'll give you only a short introduction to that technology, so that you can easily see the differences between using Swagger 2 and Spring REST Docs.

1. Using Swagger2 with Spring Boot

To enable the SpringFox library for your application you need to include the following dependencies in pom.xml.

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>

Then you should annotate the main or configuration class with @EnableSwagger2. You can also customize the behaviour of the SpringFox library by declaring a Docket bean.
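
For example, a minimal main class with Swagger enabled might look like the sketch below (the class name here is illustrative, not taken from the sample application):

@SpringBootApplication
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}

}

And here's an example of such a Docket bean.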

@Bean
public Docket swaggerEmployeeApi() {
	return new Docket(DocumentationType.SWAGGER_2)
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.employee.controller"))
			.paths(PathSelectors.any())
		.build()
		.apiInfo(new ApiInfoBuilder().version("1.0").title("Employee API").description("Documentation Employee API v1.0").build());
}

Now, after running the application, the documentation is available under the context path /v2/api-docs. You can also display it in your web browser using Swagger UI, available at /swagger-ui.html.

spring-cloud-3
Looks easy, doesn't it? Let's see how to do this with Spring REST Docs.

2. Using Asciidoctor with Spring Boot

There are some other differences between Spring REST Docs and SpringFox Swagger. By default, Spring REST Docs uses Asciidoctor. Asciidoctor processes plain text and produces HTML, styled and laid out to suit your needs. If you prefer, Spring REST Docs can also be configured to use Markdown. This really distinguishes it from Swagger, which uses its own notation called the OpenAPI Specification.
Spring REST Docs makes use of snippets produced by tests written with Spring MVC's test framework, Spring WebFlux's WebTestClient or REST Assured 3. I'll show you an example based on Spring MVC.
I suggest you begin by creating a base Asciidoc file. It should be placed in the src/main/asciidoc directory in your application's source code. I don't know if you are familiar with Asciidoctor notation, but it is really intuitive. The sample visible below shows two important things. First, we display the version of the project taken from pom.xml. Then we include the snippets generated during JUnit tests by declaring the operation macro, containing the document name and the list of snippets. We can choose between snippets like curl-request, http-request, http-response, httpie-request, links, request-body, request-fields, response-body, response-fields or path-parameters. The document name is determined by the name of the test method in our JUnit test class.

= RESTful Employee API Specification
{project-version}
:doctype: book

== Add a new person

A `POST` request is used to add a new person

operation::add-person[snippets='http-request,request-fields,http-response']

== Find a person by id

A `GET` request is used to find a new person by id

operation::find-person-by-id[snippets='http-request,path-parameters,http-response,response-fields']

The source code fragment with Asciidoc notation above is just a template. We would like to generate an HTML file, which nicely displays all our automatically generated content. To achieve it we should enable the asciidoctor-maven-plugin in the project's pom.xml. In order to display the Maven project version we need to pass it to the Asciidoc plugin configuration attributes. We also need to add the spring-restdocs-asciidoctor dependency to that plugin.

<plugin>
	<groupId>org.asciidoctor</groupId>
	<artifactId>asciidoctor-maven-plugin</artifactId>
	<version>1.5.6</version>
	<executions>
		<execution>
			<id>generate-docs</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>process-asciidoc</goal>
			</goals>
			<configuration>
				<backend>html</backend>
				<doctype>book</doctype>
				<attributes>
					<project-version>${project.version}</project-version>
				</attributes>
			</configuration>
		</execution>
	</executions>
	<dependencies>
		<dependency>
			<groupId>org.springframework.restdocs</groupId>
			<artifactId>spring-restdocs-asciidoctor</artifactId>
			<version>2.0.0.RELEASE</version>
		</dependency>
	</dependencies>
</plugin>

Ok, the documentation is automatically generated during the Maven build from our api.adoc file located inside the src/main/asciidoc directory. But we still need to develop the JUnit API tests that automatically generate the required snippets. Let's do that in the next step.

3. Generating snippets for Spring MVC

First, we should enable Spring REST Docs for our project. To achieve it we have to include the following dependency.

<dependency>
	<groupId>org.springframework.restdocs</groupId>
	<artifactId>spring-restdocs-mockmvc</artifactId>
	<scope>test</scope>
</dependency>

Now, all we need to do is to implement the JUnit tests. Spring Boot provides the @AutoConfigureRestDocs annotation that allows you to leverage Spring REST Docs in your tests.
In fact, we need to prepare a standard Spring MVC test using the MockMvc bean. I also mocked some methods implemented by EmployeeRepository. Then I used some static methods provided by Spring REST Docs with support for generating documentation of request and response payloads. The first of those methods is document("{method-name}/",...), which is responsible for generating snippets under the directory target/generated-snippets/{method-name}, where the method name is the name of the test method formatted using kebab-case. I have described all the JSON fields in the requests and responses using the requestFields(...) and responseFields(...) methods.

@RunWith(SpringRunner.class)
@WebMvcTest(EmployeeController.class)
@AutoConfigureRestDocs
public class EmployeeControllerTest {

	@MockBean
	EmployeeRepository repository;
	@Autowired
	MockMvc mockMvc;
	
	private ObjectMapper mapper = new ObjectMapper();

	@Before
	public void setUp() {
		Employee e = new Employee(1L, 1L, "John Smith", 33, "Developer");
		e.setId(1L);
		when(repository.add(Mockito.any(Employee.class))).thenReturn(e);
		when(repository.findById(1L)).thenReturn(e);
	}

	@Test
	public void addPerson() throws JsonProcessingException, Exception {
		Employee employee = new Employee(1L, 1L, "John Smith", 33, "Developer");
		mockMvc.perform(post("/").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(employee)))
			.andExpect(status().isOk())
			.andDo(document("{method-name}/", requestFields(
				fieldWithPath("id").description("Employee id").ignored(),
				fieldWithPath("organizationId").description("Employee's organization id"),
				fieldWithPath("departmentId").description("Employee's department id"),
				fieldWithPath("name").description("Employee's name"),
				fieldWithPath("age").description("Employee's age"),
				fieldWithPath("position").description("Employee's position inside organization")
			)));
	}
	
	@Test
	public void findPersonById() throws JsonProcessingException, Exception {
		this.mockMvc.perform(get("/{id}", 1).accept(MediaType.APPLICATION_JSON))
			.andExpect(status().isOk())
			.andDo(document("{method-name}/", responseFields(
				fieldWithPath("id").description("Employee id"),
				fieldWithPath("organizationId").description("Employee's organization id"),
				fieldWithPath("departmentId").description("Employee's department id"),
				fieldWithPath("name").description("Employee's name"),
				fieldWithPath("age").description("Employee's age"),
				fieldWithPath("position").description("Employee's position inside organization")
			), pathParameters(parameterWithName("id").description("Employee id"))));
	}

}
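
The test class above relies on several static imports that are not shown in the listing. Assuming the standard Spring REST Docs, Spring Test and Mockito packages, they would look roughly like this (note that get("/{id}", 1) has to come from RestDocumentationRequestBuilders rather than MockMvcRequestBuilders, so that the URL template is preserved for the path-parameters snippet):

import static org.mockito.Mockito.when;
import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.restdocs.mockmvc.RestDocumentationRequestBuilders.get;
import static org.springframework.restdocs.payload.PayloadDocumentation.fieldWithPath;
import static org.springframework.restdocs.payload.PayloadDocumentation.requestFields;
import static org.springframework.restdocs.payload.PayloadDocumentation.responseFields;
import static org.springframework.restdocs.request.RequestDocumentation.parameterWithName;
import static org.springframework.restdocs.request.RequestDocumentation.pathParameters;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;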

If you would like to customize some settings of Spring REST Docs you should provide a @TestConfiguration class inside the JUnit test class. In the following code fragment you can see an example of such a customization. I override the default snippets output directory from index to a test-method-specific name, and force the generation of sample requests and responses using the prettyPrint option (each parameter on a separate line).

@TestConfiguration
static class CustomizationConfiguration implements RestDocsMockMvcConfigurationCustomizer {

	@Override
	public void customize(MockMvcRestDocumentationConfigurer configurer) {
		configurer.operationPreprocessors()
			.withRequestDefaults(prettyPrint())
			.withResponseDefaults(prettyPrint());
	}
	
	@Bean
	public RestDocumentationResultHandler restDocumentation() {
		return MockMvcRestDocumentation.document("{method-name}");
	}
}

Now, if you execute mvn clean install on your project you should see the following structure inside your output directory.
rest-api-docs-3

4. Viewing and publishing API docs

Once we have successfully built our project, the documentation has been generated. We can display the HTML file available at target/generated-docs/api.html. It provides the full documentation of our API.

rest-api-docs-1
And the next part…

rest-api-docs-2
You may also want to publish it inside your application's fat JAR file. If you configure the maven-resources-plugin as in the example visible below, the documentation will be available under the /static/docs directory inside the JAR.

<plugin>
	<artifactId>maven-resources-plugin</artifactId>
	<executions>
		<execution>
			<id>copy-resources</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>copy-resources</goal>
			</goals>
			<configuration>
				<outputDirectory>
					${project.build.outputDirectory}/static/docs
				</outputDirectory>
				<resources>
					<resource>
						<directory>
							${project.build.directory}/generated-docs
						</directory>
					</resource>
				</resources>
			</configuration>
		</execution>
	</executions>
</plugin>

Conclusion

That's all I wanted to show in this article. The sample service generating documentation using Spring REST Docs is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-new/tree/rest-api-docs/employee-service. I'm not sure that Swagger and Spring REST Docs should be treated as competing solutions. I use Swagger for simple testing of an API on a running application or for exposing a specification that can be used for automated generation of client code. Spring REST Docs is rather used for generating documentation that can be published somewhere, and “is accurate, concise, and well-structured. This documentation then allows your users to get the information they need with a minimum of fuss”. I think there is no obstacle to using Spring REST Docs and SpringFox Swagger together in your project in order to provide the most valuable documentation of the API exposed by the application.

Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract

Consumer Driven Contract (CDC) testing is one of the methods that allows you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. Assuming that every microservice is developed by a different team, or sometimes even a different vendor, it is important to automate the whole testing process. As usual, we can use a Jenkins server for running contract tests within our Continuous Integration (CI) process.

The sample scenario has been visualized in the picture below. We have one application (person-service) that exposes an API leveraged by three different applications. Each application is implemented by a different development team. Consequently, every application is stored in a separate Git repository and has a dedicated pipeline in Jenkins for building, testing and deploying.

contracts-3 (1)

The source code of the sample applications is available on GitHub in the repository sample-spring-cloud-contract-ci (https://github.com/piomin/sample-spring-cloud-contract-ci.git). I placed all the sample microservices in a single Git repository only to simplify the demo. We will still treat them as separate microservices, developed and built independently.

In this article I used Spring Cloud Contract for the CDC implementation. It is the first-choice solution for JVM applications written in Spring Boot. Contracts can be defined using Groovy or YAML notation. After the build on the producer side, Spring Cloud Contract generates a special JAR file with the stubs suffix, which contains all defined contracts and JSON mappings. Such a JAR file can be built on Jenkins and then published to Artifactory. Contract consumers also use the same Artifactory server, so they can use the latest version of the stubs file. Because every application expects a different response from person-service, we have to define three different contracts between person-service and the target consumers.

contracts-1

Let's analyze the sample scenario. Assuming we have performed some changes in the API exposed by person-service and we have modified the contracts on the producer side, we would like to publish them on a shared server. First, we need to verify the contracts against the producer (1), and in case of success publish the artifact with stubs to Artifactory (2). All the pipelines defined for applications that use this contract are able to trigger a build on a new version of the JAR file with stubs (3). Then, the newest version of the contract is verified against the consumer (4). If contract testing fails, the pipeline is able to notify the responsible team about this failure.

contracts-2

1. Pre-requirements

Before implementing and running any sample we need to prepare our environment. We need to launch Jenkins and Artifactory servers on the local machine. The most suitable way to do this is through Docker containers. Here are the commands required to run these containers.

$ docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
$ docker run --name jenkins -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

I don't know if you are familiar with tools like Artifactory and Jenkins, but after starting them we need to configure some things. First you need to initialize the Maven repositories for Artifactory. You will be prompted for that just after the first launch. It also automatically adds one remote repository: JCenter Bintray (https://bintray.com/bintray/jcenter), which is enough for our build. Jenkins also comes with a default set of plugins, which you can install just after the first launch (Install suggested plugins). For this demo, you will also have to install the plugin for integration with Artifactory (https://wiki.jenkins.io/display/JENKINS/Artifactory+Plugin). If you need more details about Jenkins and Artifactory configuration you can refer to my older article How to setup Continuous Delivery environment.

2. Building contracts

We begin the contract definition with the producer side application. The producer exposes only one method, GET /persons/{id}, which returns a Person object. Here are the fields contained in the Person class.

public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	@JsonFormat(pattern = "yyyy-MM-dd")
	private Date birthDate;
	private Gender gender;
	private Contact contact;
	private Address address;
	private String accountNo;

	// ...
}

The following picture illustrates which fields of the Person object are used by the consumers. As you see, some of the fields are shared between consumers, while others are required by only a single consuming application.

contracts-4

Now we can take a look at the contract definition between person-service and bank-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			gender: $(regex('(MALE|FEMALE)')),
			contact: ([
				email: $(regex(email())),
				phoneNo: $(regex('[0-9]{9}$'))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}

For comparison, here's the definition of the contract between person-service and letter-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			address: ([
				city: $(regex(alphaNumeric())),
				country: $(regex(alphaNumeric())),
				postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
				houseNo: $(regex(positiveInt())),
				street: $(regex(nonEmpty()))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}

3. Implementing tests on the producer side

Ok, we have three different contracts assigned to the single endpoint exposed by person-service. We need to publish them in such a way that they are easily available to consumers. In that case Spring Cloud Contract comes with a handy solution. We may define contracts with different responses for the same request, and then choose the appropriate definition on the consumer side. All those contract definitions will be published within the same JAR file. Because we have three consumers, we define three different contracts placed in the directories bank-consumer, contact-consumer and letter-consumer.

contracts-5

All the contracts will use a single base test class. To achieve it we need to provide the fully qualified name of that class to the Spring Cloud Contract Verifier plugin in pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<baseClassForTests>pl.piomin.services.person.BasePersonContractTest</baseClassForTests>
	</configuration>
</plugin>

Here's the full definition of the base class for our contract tests. We mock the repository bean with an answer matching the rules created inside the contract files.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.DEFINED_PORT)
public abstract class BasePersonContractTest {

	@Autowired
	WebApplicationContext context;
	@MockBean
	PersonRepository repository;
	
	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(this.context);
		PersonBuilder builder = new PersonBuilder()
			.withId(1)
			.withFirstName("Piotr")
			.withLastName("Minkowski")
			.withBirthDate(new Date())
			.withAccountNo("1234567890")
			.withGender(Gender.MALE)
			.withPhoneNo("500070935")
			.withCity("Warsaw")
			.withCountry("Poland")
			.withHouseNo(200)
			.withStreet("Al. Jerozolimskie")
			.withEmail("piotr.minkowski@gmail.com")
			.withPostalCode("02-660");
		when(repository.findById(1)).thenReturn(builder.build());
	}
	
}

The Spring Cloud Contract Maven plugin visible above is responsible for generating stubs from the contract definitions. It is executed during the Maven build after running the mvn clean install command. The build is performed on Jenkins CI. The Jenkins pipeline is responsible for checking out the remote Git repository, building binaries from the source code, running automated tests and finally publishing the JAR file containing stubs to a remote artifact repository – Artifactory. Here's the Jenkins pipeline created for the contract producer side (person-service).

node {
  withMaven(maven:'M3') {
    stage('Checkout') {
      git url: 'https://github.com/piomin/sample-spring-cloud-contract-ci.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Publish') {
      def server = Artifactory.server 'artifactory'
      def rtMaven = Artifactory.newMavenBuild()
      rtMaven.tool = 'M3'
      rtMaven.resolver server: server, releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot'
      rtMaven.deployer server: server, releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local'
      rtMaven.deployer.artifactDeploymentPatterns.addInclude("*stubs*")
      def buildInfo = rtMaven.run pom: 'person-service/pom.xml', goals: 'clean install'
      rtMaven.deployer.deployArtifacts buildInfo
      server.publishBuildInfo buildInfo
    }
  }
}

We also need to include the spring-cloud-starter-contract-verifier dependency in the producer app to enable the Spring Cloud Contract Verifier.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

4. Implementing tests on the consumer side

To enable Spring Cloud Contract on the consumer side we need to include the spring-cloud-starter-contract-stub-runner artifact in the project dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>

Then, the only thing left is to build a JUnit test, which verifies our contract by calling it through an OpenFeign client. The configuration of that test is provided inside the @AutoConfigureStubRunner annotation. We select the latest version of the person-service stubs artifact by setting + in the version section of the ids parameter. Because we have multiple contracts defined inside person-service, we need to choose the right one for the current service by setting the consumerName parameter. All the contract definitions are downloaded from the Artifactory server, so we set the stubsMode parameter to REMOTE. The address of the Artifactory server has to be set using the repositoryRoot property.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {

	@Autowired
	private PersonClient personClient;
	
	@Test
	public void verifyPerson() {
		Person p = personClient.findPersonById(1);
		Assert.assertNotNull(p);
		Assert.assertEquals(1, p.getId().intValue());
		Assert.assertNotNull(p.getFirstName());
		Assert.assertNotNull(p.getLastName());
		Assert.assertNotNull(p.getAddress());
		Assert.assertNotNull(p.getAddress().getCity());
		Assert.assertNotNull(p.getAddress().getCountry());
		Assert.assertNotNull(p.getAddress().getPostalCode());
		Assert.assertNotNull(p.getAddress().getStreet());
		Assert.assertNotEquals(0, p.getAddress().getHouseNo());
	}
	
}

Here's the Feign client implementation responsible for calling the endpoint exposed by person-service.

@FeignClient("person-service")
public interface PersonClient {

	@GetMapping("/persons/{id}")
	Person findPersonById(@PathVariable("id") Integer id);
	
}

5. Setup of Continuous Integration process

Ok, we have already defined all the contracts required for our exercise. We have also built a pipeline responsible for building and publishing stubs with contracts on the producer side (person-service). It always publishes the newest version of the stubs generated from the source code. Now, our goal is to launch the pipelines defined for the three consumer applications each time new stubs are published to the Artifactory server by the producer pipeline.
The best solution for that would be to trigger a Jenkins build whenever an artifact is deployed. To achieve it we use a Jenkins plugin called URLTrigger, which can be configured to watch for changes on a certain URL, in this case the REST API endpoint exposed by Artifactory for the selected repository path.
After installing the URLTrigger plugin we have to enable it for all consumer pipelines. You can configure it to watch for changes in the JSON returned by the Artifactory File List REST API, which is accessed via the following URI: http://192.168.99.100:8081/artifactory/api/storage/[PATH_TO_FOLDER_OR_REPO]/. The file maven-metadata.xml will change every time you deploy a new version of the application to Artifactory. We can monitor the change of the response's content between the last two polls. The last field that has to be filled in is Schedule. If you set it to * * * * * it will poll for a change every minute.

contracts-6

Our three pipelines for consumer applications are ready. The first run finished successfully.

contracts-7

If you have already built the person-service application and published the stubs to Artifactory, you will see the following structure in the libs-snapshot-local repository. I have deployed three different versions of the API exposed by person-service. Each time I publish a new version of the contract, all the dependent pipelines are triggered to verify it.

contracts-8

The JAR file with contracts is published under the stubs classifier.

contracts-9

Spring Cloud Contract Stub Runner tries to find the latest version of contracts.

2018-07-04 11:46:53.273  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is [+] - will try to resolve the latest version
2018-07-04 11:46:54.752  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is [1.3-SNAPSHOT]
2018-07-04 11:46:54.823  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact [pl.piomin.services:person-service:jar:stubs:1.3-SNAPSHOT] to /var/jenkins_home/.m2/repository/pl/piomin/services/person-service/1.3-SNAPSHOT/person-service-1.3-SNAPSHOT-stubs.jar

6. Testing change in contract

Ok, we have already prepared the contracts and configured our CI environment. Now, let's perform a change in the API exposed by person-service. We will just change the name of one field: accountNo to accountNumber.

contracts-12

This change requires a change in the contract definition created on the producer side. If you modify the field name there, person-service will build successfully and a new version of the contract will be published to Artifactory. Because all other pipelines listen for changes in the latest version of the JAR file with stubs, their builds will be started automatically. Microservices letter-service and contact-service do not use the field accountNo, so their pipelines will not fail. Only the bank-service pipeline reports an error in the contract, as shown in the picture below.

contracts-10

Now, once you have been notified about the failed verification of the newest contract version between person-service and bank-service, you can perform the required change on the consumer side.

contracts-11

Introduction to Blockchain with Java using Ethereum, web3j and Spring Boot

Blockchain has been one of the buzzwords in the IT world during the last few months. The term is related to cryptocurrencies and was created together with Bitcoin. It is a decentralized, immutable data structure divided into blocks, which are linked and secured using cryptographic algorithms. Every single block in this structure typically contains a cryptographic hash of the previous block, a timestamp, and transaction data. Blockchain is managed by a peer-to-peer network, and during inter-node communication every new block is validated before being added. This is a short portion of theory about blockchain. In a nutshell, it is a technology that allows us to manage transactions between two parties in a decentralized way. Now, the question is how we can implement it in our system.
Here comes Ethereum. It is a decentralized platform created by Vitalik Buterin that provides a scripting language for the development of applications. It is based on ideas from Bitcoin, and is driven by a new cryptocurrency called Ether. Today, Ether is the second largest cryptocurrency after Bitcoin. The heart of Ethereum technology is the EVM (Ethereum Virtual Machine), which can be treated as something similar to the JVM, but using a network of fully decentralized nodes. To implement Ethereum-based transactions in the Java world we use the web3j library. This is a lightweight, reactive, type safe Java and Android library for integrating with nodes on Ethereum blockchains. More details can be found on its website https://web3j.io.

1. Running Ethereum locally

Although there are many articles on the Web about blockchain and Ethereum, it is not easy to find a solution describing how to run a ready-for-use instance of Ethereum on the local machine. It is worth mentioning that generally there are two popular Ethereum clients we can use: Geth and Parity. It turns out we can easily run a Geth node locally using a Docker container. By default it connects the node to the Ethereum main network. Alternatively, you can connect it to the test network or the Rinkeby network. But the best option for the beginning is just to run it in development mode by setting the --dev parameter on the Docker container run command.
Here's the command that starts the Docker container in development mode and exposes the Ethereum RPC API on port 8545.

$ docker run -d --name ethereum -p 8545:8545 -p 30303:30303 ethereum/client-go --rpc --rpcaddr "0.0.0.0" --rpcapi="db,eth,net,web3,personal" --rpccorsdomain "*" --dev

One really good thing about running that container in development mode is that you have plenty of Ether on your default test account. In that case, you don't have to mine any Ether to be able to start testing. Great! Now, let's create some other test accounts and also check out some things. To achieve it we need to run Geth's interactive JavaScript console inside the Docker container.

$ docker exec -it ethereum geth attach ipc:/tmp/geth.ipc

2. Managing Ethereum node using JavaScript console

After running the JavaScript console you can easily display the default account (coinbase), the list of all available accounts and their balances. Here's a screen illustrating the results for my Ethereum node.
blockchain-1
Now, we have to create some test accounts. We can do it by calling the personal.newAccount(password) function. After creating the required accounts, you can perform some test transactions using the JavaScript console, and transfer some funds from the base account to the newly created accounts. Here are the commands used for creating accounts and executing transactions.
blockchain-2

3. System architecture

The architecture of our sample system is very simple. I don't want to complicate anything, but just show you how to send transactions to the Geth node and receive notifications. While transaction-service sends new transactions to the Ethereum node, bonus-service observes the node and listens for incoming transactions. It then sends a bonus to the sender's account once per 10 transactions received from that account. Here's the diagram that illustrates the architecture of our sample system.
blockchain-arch

4. Enable Web3j for Spring Boot app

I think that now we have clarity about what exactly we want to do, so let's proceed to the implementation. First, we should include all the required dependencies in order to be able to use the web3j library inside a Spring Boot application. Fortunately, there is a starter that can be included.

<dependency>
	<groupId>org.web3j</groupId>
	<artifactId>web3j-spring-boot-starter</artifactId>
	<version>1.6.0</version>
</dependency>

Because we are running the Ethereum Geth client in a Docker container, we need to change the auto-configured client address for web3j.

spring:
  application:
    name: transaction-service
server:
  port: ${PORT:8090}
web3j:
  client-address: http://192.168.99.100:8545

5. Building applications

Once we have included the web3j starter in the project dependencies, all we need to do is autowire the Web3j bean. Web3j is responsible for sending a transaction to the Geth client node. It receives a response with the transaction hash if it has been accepted by the node, or an error object if it has been rejected. While creating the transaction object it is important to set the gas limit to a minimum of 21000. If you send a lower value, you will probably receive the error: intrinsic gas too low.

@Service
public class BlockchainService {

    private static final Logger LOGGER = LoggerFactory.getLogger(BlockchainService.class);

    @Autowired
    Web3j web3j;

    public BlockchainTransaction process(BlockchainTransaction trx) throws IOException {
        EthAccounts accounts = web3j.ethAccounts().send();
        EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(accounts.getAccounts().get(trx.getFromId()), DefaultBlockParameterName.LATEST).send();
        Transaction transaction = Transaction.createEtherTransaction(accounts.getAccounts().get(trx.getFromId()), transactionCount.getTransactionCount(), BigInteger.valueOf(trx.getValue()), BigInteger.valueOf(21_000), accounts.getAccounts().get(trx.getToId()),BigInteger.valueOf(trx.getValue()));
        EthSendTransaction response = web3j.ethSendTransaction(transaction).send();
        if (response.getError() != null) {
            trx.setAccepted(false);
            return trx;
        }
        trx.setAccepted(true);
        String txHash = response.getTransactionHash();
        LOGGER.info("Tx hash: {}", txHash);
        trx.setId(txHash);
        EthGetTransactionReceipt receipt = web3j.ethGetTransactionReceipt(txHash).send();
        if (receipt.getTransactionReceipt().isPresent()) {
            LOGGER.info("Tx receipt: {}", receipt.getTransactionReceipt().get().getCumulativeGasUsed().intValue());
        }
        return trx;
    }

}

The @Service bean visible above is invoked by the controller. The implementation of the POST method takes a BlockchainTransaction object as a parameter. You can send the sender id, receiver id, and transaction amount. Sender and receiver ids are equivalent to the indexes in the query eth.accounts[index].

@RestController
public class BlockchainController {

    @Autowired
    BlockchainService service;

    @PostMapping("/transaction")
    public BlockchainTransaction execute(@RequestBody BlockchainTransaction transaction) throws NoSuchAlgorithmException, NoSuchProviderException, InvalidAlgorithmParameterException, CipherException, IOException {
        return service.process(transaction);
    }

}
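
The BlockchainTransaction class used above is a simple POJO that is not listed here. Based on the accessors called in BlockchainService and BlockchainController, a minimal sketch could look like this (field names are inferred; the real model in the repository may differ slightly):

public class BlockchainTransaction {

    private String id;        // transaction hash, set after the node accepts the transaction
    private int fromId;       // index of the sender account (eth.accounts[fromId])
    private int toId;         // index of the receiver account
    private long value;       // amount to transfer
    private boolean accepted; // result reported back to the caller

    // getters and setters omitted for brevity

}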

You can send a test transaction by calling the POST method with the following command.

$ curl --header "Content-Type: application/json" --request POST --data '{"fromId":2,"toId":1,"value":3}' http://localhost:8090/transaction

Before sending any transactions you should also unlock the sender account.
blockchain-3
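
The screen above shows unlocking the account in the Geth JavaScript console. Alternatively, you could unlock it programmatically using web3j's Admin client; here's a rough sketch (the address and password are placeholders, and the call goes to the same RPC endpoint as before):

// Admin comes from org.web3j.protocol.admin, HttpService from org.web3j.protocol.http
Admin admin = Admin.build(new HttpService("http://192.168.99.100:8545"));
// unlock the sender account before submitting transactions on its behalf
PersonalUnlockAccount unlock = admin.personalUnlockAccount("0x...", "password").send();
LOGGER.info("Account unlocked: {}", unlock.accountUnlocked());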

Application bonus-service listens for transactions processed by the Ethereum node. It subscribes to notifications from the Web3j library by calling the web3j.transactionObservable().subscribe(...) method. It returns the amount of the received transaction back to the sender's account once per 10 transactions sent from that address. Here's the implementation of the observable method inside application bonus-service.

@Autowired
Web3j web3j;

@PostConstruct
public void listen() {
	Subscription subscription = web3j.transactionObservable().subscribe(tx -> {
		LOGGER.info("New tx: id={}, block={}, from={}, to={}, value={}", tx.getHash(), tx.getBlockHash(), tx.getFrom(), tx.getTo(), tx.getValue().intValue());
		try {
			EthCoinbase coinbase = web3j.ethCoinbase().send();
			EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(tx.getFrom(), DefaultBlockParameterName.LATEST).send();
			LOGGER.info("Tx count: {}", transactionCount.getTransactionCount().intValue());
			if (transactionCount.getTransactionCount().intValue() % 10 == 0) {
				EthGetTransactionCount tc = web3j.ethGetTransactionCount(coinbase.getAddress(), DefaultBlockParameterName.LATEST).send();
				Transaction transaction = Transaction.createEtherTransaction(coinbase.getAddress(), tc.getTransactionCount(), tx.getValue(), BigInteger.valueOf(21_000), tx.getFrom(), tx.getValue());
				web3j.ethSendTransaction(transaction).send();
			}
		} catch (IOException e) {
			LOGGER.error("Error getting transactions", e);
		}
	});
	LOGGER.info("Subscribed");
}

Conclusion

Blockchain and cryptocurrencies are not easy topics to start with. Ethereum simplifies the development of applications that use blockchain by providing a complete scripting language. Using the web3j library together with Spring Boot and the Docker image of the Ethereum Geth client allows you to quickly start local development of a solution implementing blockchain technology. If you would like to try it locally, just clone my repository available on GitHub: https://github.com/piomin/sample-spring-blockchain.git

Building and testing message-driven microservices using Spring Cloud Stream

Spring Boot and Spring Cloud give you a great opportunity to build microservices fast using different styles of communication. You can create synchronous REST microservices based on Spring Cloud Netflix libraries, as shown in one of my previous articles, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. You can create asynchronous, reactive microservices deployed on Netty with the Spring WebFlux project and combine it successfully with some Spring Cloud libraries, as shown in my article Reactive Microservices with Spring WebFlux and Spring Cloud. And finally, you may implement message-driven microservices based on the publish/subscribe model using Spring Cloud Stream and a message broker like Apache Kafka or RabbitMQ. The last of the listed approaches to building microservices is the main subject of this article. I'm going to show you how to effectively build, scale, run and test messaging microservices based on a RabbitMQ broker.

Architecture

For the purpose of demonstrating Spring Cloud Stream features we will design a sample system which uses the publish/subscribe model for inter-service communication. We have three microservices: order-service, product-service and account-service. Application order-service exposes an HTTP endpoint that is responsible for processing orders sent to our system. All the incoming orders are processed asynchronously – order-service prepares and sends a message to a RabbitMQ exchange and then responds to the calling client that the request has been accepted for processing. Applications account-service and product-service listen for the order messages incoming to the exchange. Microservice account-service is responsible for checking if there are sufficient funds in the customer's account for order realization and then withdrawing cash from this account. Microservice product-service checks if there is a sufficient amount of products in the store, and changes the number of available products after processing the order. Both account-service and product-service send an asynchronous response through a RabbitMQ exchange (this time it is one-to-one communication using a direct exchange) with the status of the operation. After receiving the response messages, order-service sets the appropriate status of the order and exposes it through the REST endpoint GET /order/{id} to the external client.

If you feel that the description of our sample system is a little incomprehensible, here's a diagram with the architecture for clarification.

stream-1

Enabling Spring Cloud Stream

The recommended way to include Spring Cloud Stream in the project is with a dependency management system. Spring Cloud Stream has independent release train management in relation to the whole Spring Cloud framework. However, if we have declared spring-cloud-dependencies in the Elmhurst.RELEASE version inside the dependencyManagement section, we don't have to declare anything else in pom.xml. If you prefer to use only the Spring Cloud Stream project, you should define the following section.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-stream-dependencies</artifactId>
      <version>Elmhurst.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

The next step is to add the spring-cloud-stream artifact to the project dependencies. I also recommend you include at least the spring-cloud-sleuth library, so that messages are sent with the same traceId as the source request incoming to order-service.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-sleuth</artifactId>
</dependency>

Spring Cloud Stream programming model

To enable connectivity to a message broker for your application, annotate the main class with @EnableBinding. The @EnableBinding annotation takes one or more interfaces as parameters. You may choose between three interfaces provided by Spring Cloud Stream:

  • Sink: This is used for marking a service that receives messages from the inbound channel.
  • Source: This is used for sending messages to the outbound channel.
  • Processor: This can be used in case you need both an inbound channel and an outbound channel, as it extends the Source and Sink interfaces. Because order-service sends messages, as well as receives them, its main class has been annotated with @EnableBinding(Processor.class).
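
For reference, these binding interfaces are very simple; stripped of Javadoc they look roughly like the sketch below, which is shown here only to illustrate where the input and output channel names come from:

public interface Source {
    String OUTPUT = "output";
    @Output(Source.OUTPUT)
    MessageChannel output();
}

public interface Sink {
    String INPUT = "input";
    @Input(Sink.INPUT)
    SubscribableChannel input();
}

public interface Processor extends Source, Sink {
}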

Here’s the main class of order-service that enables Spring Cloud Stream binding.

@SpringBootApplication
@EnableBinding(Processor.class)
public class OrderApplication {
  public static void main(String[] args) {
    new SpringApplicationBuilder(OrderApplication.class).web(true).run(args);
  }
}

Adding message broker

In Spring Cloud Stream nomenclature, the implementation responsible for integration with a specific message broker is called a binder. By default, Spring Cloud Stream provides binder implementations for Kafka and RabbitMQ. It is able to automatically detect and use a binder found on the classpath. Any middleware-specific settings can be overridden through external configuration properties in the form supported by Spring Boot, such as application arguments, environment variables, or just the application.yml file. To include support for RabbitMQ, which is used in this article as a message broker, you should add the following dependency to the project.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>

Now, our applications need to connect to one shared instance of a RabbitMQ broker. That's why I run a Docker image with RabbitMQ, exposed outside on the default 5672 port. It also launches a web dashboard available at http://192.168.99.100:15672.

$ docker run -d --name rabbit -p 15672:15672 -p 5672:5672 rabbitmq:management

We need to override the default address of RabbitMQ for every Spring Boot application by setting the property spring.rabbitmq.host to the Docker machine IP 192.168.99.100.

spring:
  rabbitmq:
    host: 192.168.99.100
    port: 5672

Implementing message-driven microservices

Spring Cloud Stream is built on top of the Spring Integration project. Spring Integration extends the Spring programming model to support the well-known Enterprise Integration Patterns (EIP). EIP defines a number of components that are typically used for orchestration in distributed systems. You have probably heard about patterns such as message channels, routers, aggregators, or endpoints. Let's proceed to the implementation.
We begin with order-service, which is responsible for accepting orders, publishing them on a shared topic and then collecting asynchronous responses from the downstream services. Here's the @Service, which builds a message and publishes it to the remote topic using the Source bean.

@Service
public class OrderSender {
  @Autowired
  private Source source;

  public boolean send(Order order) {
    return this.source.output().send(MessageBuilder.withPayload(order).build());
  }
}

That @Service is called by the controller, which exposes HTTP endpoints for submitting new orders and getting an order together with its status by id.

@RestController
public class OrderController {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderController.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	OrderRepository repository;
	@Autowired
	OrderSender sender;	

	@PostMapping
	public Order process(@RequestBody Order order) throws JsonProcessingException {
		Order o = repository.add(order);
		LOGGER.info("Order saved: {}", mapper.writeValueAsString(order));
		boolean isSent = sender.send(o);
		LOGGER.info("Order sent: {}", mapper.writeValueAsString(Collections.singletonMap("isSent", isSent)));
		return o;
	}

	@GetMapping("/{id}")
	public Order findById(@PathVariable("id") Long id) {
		return repository.findById(id);
	}

}
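
Both the sender and the controller operate on the Order object that is exchanged between the services. The exact model class lives in the source code repository; the sketch below is only my assumption based on the fields referenced in the snippets in this article.

public class Order {

	private Long id;
	private Long customerId;
	private Long accountId;
	private List<Long> productIds;
	private int price;
	private OrderStatus status;

	// standard getters and setters omitted for brevity

}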

Now, let's take a closer look at the consumer side. The message sent by the OrderSender bean from order-service is received by account-service and product-service. To receive the message from the topic exchange, we just have to annotate the method that takes the Order object as a parameter with @StreamListener. We also have to define the target channel for the listener – in this case it is Processor.INPUT.

@SpringBootApplication
@EnableBinding(Processor.class)
public class OrderApplication {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderApplication.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	OrderService service;

	public static void main(String[] args) {
		new SpringApplicationBuilder(OrderApplication.class).web(WebApplicationType.SERVLET).run(args);
	}

	@StreamListener(Processor.INPUT)
	public void receiveOrder(Order order) throws JsonProcessingException {
		LOGGER.info("Order received: {}", mapper.writeValueAsString(order));
		service.process(order);
	}

}

The received order is then processed by the AccountService bean. The order may be accepted or rejected by account-service depending on whether there are sufficient funds on the customer's account to realize the order. The response with the acceptance status is sent back to order-service via the output channel invoked by the OrderSender bean.

@Service
public class AccountService {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountService.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	AccountRepository accountRepository;
	@Autowired
	OrderSender orderSender;

	public void process(final Order order) throws JsonProcessingException {
		LOGGER.info("Order processed: {}", mapper.writeValueAsString(order));
		List<Account> accounts = accountRepository.findByCustomer(order.getCustomerId());
		Account account = accounts.get(0);
		LOGGER.info("Account found: {}", mapper.writeValueAsString(account));
		if (order.getPrice() <= account.getBalance()) {
			order.setStatus(OrderStatus.ACCEPTED);
			account.setBalance(account.getBalance() - order.getPrice());
		} else {
			order.setStatus(OrderStatus.REJECTED);
		}
		orderSender.send(order);
		LOGGER.info("Order response sent: {}", mapper.writeValueAsString(order));
	}

}
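
The repository called by AccountService is just a simple in-memory store. The sketch below only illustrates how findByCustomer could be implemented – the class and field names are assumptions based on the calls above, and the actual code in the repository may differ.

public class Account {

	private Long id;
	private Long customerId;
	private int balance;

	// standard getters and setters omitted for brevity

}

@Repository
public class AccountRepository {

	private final List<Account> accounts = new ArrayList<>();

	public Account add(Account account) {
		account.setId((long) (accounts.size() + 1));
		accounts.add(account);
		return account;
	}

	public List<Account> findByCustomer(Long customerId) {
		return accounts.stream()
				.filter(a -> a.getCustomerId().equals(customerId))
				.collect(Collectors.toList());
	}

}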

The last step is configuration, which is provided inside the application.yml file. We have to properly define the destinations for the channels. While order-service assigns the orders-out destination to its output channel and the orders-in destination to its input channel, account-service and product-service do the opposite. This makes sense, because a message sent by order-service via its output destination is received by the consuming services via their input destinations. But it is still the same destination on the shared broker's exchange. Here are the configuration settings of order-service.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-out
        input:
          destination: orders-in
      rabbit:
        bindings:
          input:
            consumer:
              exchangeType: direct

Here's the configuration provided for account-service and product-service.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-in
        input:
          destination: orders-out
      rabbit:
        bindings:
          output:
            producer:
              exchangeType: direct
              routingKeyExpression: '"#"'

Finally, you can run our sample microservices. For now, we just need to run a single instance of each microservice. You can easily generate some test requests by running the JUnit test class OrderControllerTest provided in my source code repository inside the order-service module. This case is simple. In the next section we will study a more advanced sample with multiple running instances of the consuming services.
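
If you prefer to create an order manually, you can also call the POST endpoint exposed by order-service directly. The command below is only an example – the port and the field values are my assumptions, so adjust them to your local setup.

$ curl -X POST http://localhost:8090 \
    -H "Content-Type: application/json" \
    -d '{"customerId":1,"accountId":1,"price":500,"productIds":[2]}'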

Scaling up

To scale up our Spring Cloud Stream applications we just need to launch additional instances of each microservice. They will still listen for incoming messages on the same topic exchange as the currently running instances. After adding one instance of account-service and one of product-service we may send a test order. The result of that test won't be satisfactory for us… Why? A single order is received by all the running instances of every microservice. This is exactly how topic exchanges work – a message sent to a topic is received by all consumers listening on that topic. Fortunately, Spring Cloud Stream is able to solve that problem with a mechanism called a consumer group. It guarantees that only one of the instances handles a given message, as they are placed in a competing consumer relationship. The transition to the consumer group mechanism when running multiple instances of a service is visualized in the following figure.

(Figure: the transition to the consumer group mechanism when running multiple instances of the consuming services)

Configuration of the consumer group mechanism is not very difficult. We just have to set the group parameter with the name of the group for a given destination. Here's the current binding configuration for account-service. The orders-in destination is a queue created for direct communication with order-service, so only orders-out is grouped using the spring.cloud.stream.bindings.input.group property.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-in
        input:
          destination: orders-out
          group: account

The consumer group mechanism is a concept taken from Apache Kafka, and Spring Cloud Stream implements it also for the RabbitMQ broker, which does not support it natively. So, I think it is pretty interesting to see how it is configured on RabbitMQ. If you run two instances of a service without setting a group name on the destination, two bindings are created for a single exchange (one binding per instance), as shown in the picture below. Because two applications are listening on that exchange, there are four bindings assigned to that exchange in total.

(Figure: RabbitMQ bindings created without a consumer group – one binding per running instance, four in total on the exchange)

If you set a group name for the selected destination, Spring Cloud Stream will create a single binding shared by all running instances of a given service. The name of the binding will be suffixed with the group name.

(Figure: a single RabbitMQ binding per service after setting a consumer group, with the binding name suffixed with the group name)

Because we have included spring-cloud-starter-sleuth in the project dependencies, the same traceId header is propagated with all the asynchronous messages exchanged during the realization of a single request incoming to the order-service POST endpoint. Thanks to that, we can easily correlate all the logs by this header using the Elastic Stack (Kibana).

(Figure: logs correlated by the same traceId displayed in Kibana)

Automated Testing

You can easily test your microservice without connecting to a message broker. To achieve that, you need to include spring-cloud-stream-test-support in your project dependencies. It contains the TestSupportBinder bean that lets you interact with the bound channels and inspect any messages sent and received by the application.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-test-support</artifactId>
  <scope>test</scope>
</dependency>

In the test class we need to declare the MessageCollector bean, which is responsible for receiving messages retained by TestSupportBinder. Here's my test class from account-service. Using the Processor bean I send a test order to the input channel. Then MessageCollector receives the message that is sent back to order-service via the output channel. The test method testAccepted creates an order that should be accepted by account-service, while the testRejected method sets an order price that is too high, which results in rejecting the order.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class OrderReceiverTest {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderReceiverTest.class);

	@Autowired
	private Processor processor;
	@Autowired
	private MessageCollector messageCollector;

	@Test
	@SuppressWarnings("unchecked")
	public void testAccepted() {
		Order o = new Order();
		o.setId(1L);
		o.setAccountId(1L);
		o.setCustomerId(1L);
		o.setPrice(500);
		o.setProductIds(Collections.singletonList(2L));
		processor.input().send(MessageBuilder.withPayload(o).build());
		Message<Order> received = (Message<Order>) messageCollector.forChannel(processor.output()).poll();
		LOGGER.info("Order response received: {}", received.getPayload());
		assertNotNull(received.getPayload());
		assertEquals(OrderStatus.ACCEPTED, received.getPayload().getStatus());
	}

	@Test
	@SuppressWarnings("unchecked")
	public void testRejected() {
		Order o = new Order();
		o.setId(1L);
		o.setAccountId(1L);
		o.setCustomerId(1L);
		o.setPrice(100000);
		o.setProductIds(Collections.singletonList(2L));
		processor.input().send(MessageBuilder.withPayload(o).build());
		Message<Order> received = (Message<Order>) messageCollector.forChannel(processor.output()).poll();
		LOGGER.info("Order response received: {}", received.getPayload());
		assertNotNull(received.getPayload());
		assertEquals(OrderStatus.REJECTED, received.getPayload().getStatus());
	}

}

Conclusion

Message-driven microservices are a good choice whenever you don't need a synchronous response from your API. In this article I have shown a sample use case of the publish/subscribe model for inter-service communication between microservices. The source code is as usual available on GitHub (https://github.com/piomin/sample-message-driven-microservices.git). For more interesting examples of using the Spring Cloud Stream library, also with Apache Kafka, you can refer to Chapter 11 of my book Mastering Spring Cloud (https://www.packtpub.com/application-development/mastering-spring-cloud).