Microservices API Documentation with Springdoc OpenAPI

I already wrote about documentation for microservices more than two years ago in my article Microservices API Documentation with Swagger2. In that case I used the SpringFox project for auto-generating Swagger documentation for Spring Boot applications. Since that time SpringFox has not been actively developed by its maintainers – the latest version was released in June 2018. Currently, the most important problems with this library are the lack of support for OpenAPI 3 and for Spring reactive APIs built with WebFlux. All these features are implemented by the Springdoc OpenAPI library. Therefore, it may be treated as a replacement for SpringFox as a Swagger and OpenAPI 3 generation tool for Spring Boot applications. Continue reading “Microservices API Documentation with Springdoc OpenAPI”

Spring Boot Admin on Kubernetes

The main goal of this article is to show how to monitor Spring Boot applications running on Kubernetes with Spring Boot Admin. I already wrote about Spring Boot Admin more than two years ago in the article Monitoring Microservices With Spring Boot Admin. There you can find a detailed description of its main features. Since then, some new features have been added and the application's look has been modernized. But the principles of how it works have not changed, so you can still refer to my previous article to understand the main concepts around Spring Boot Admin. Continue reading “Spring Boot Admin on Kubernetes”

Hazelcast with Spring Boot on Kubernetes

Hazelcast is the leading in-memory data grid (IMDG) solution. The main idea behind an IMDG is to distribute data across many nodes inside a cluster. Therefore, it seems to be an ideal solution for running on a cloud platform like Kubernetes, where you can easily scale the number of running instances up or down. Since Hazelcast is written in Java, you can easily integrate it with your Java application using standard libraries. Spring Boot can also simplify getting started with Hazelcast. You may also use an unofficial library implementing the Spring Repositories pattern for Hazelcast – Spring Data Hazelcast. Continue reading “Hazelcast with Spring Boot on Kubernetes”

A Magic Around Spring Boot Auto Configuration

Auto-configuration is probably one of the most important reasons you would decide to use frameworks like Spring Boot. Thanks to that feature, it is usually enough just to include an additional library and override some configuration properties to successfully use it in your application. Spring provides an easy way to define auto-configuration using standard @Configuration classes. Continue reading “A Magic Around Spring Boot Auto Configuration”

Spring Cloud Kubernetes For Hybrid Microservices Architecture

You might use Spring Cloud Kubernetes to build applications running both inside and outside a Kubernetes cluster. The only problem with starting an application outside Kubernetes is that there is no auto-configured registration mechanism. Spring Cloud Kubernetes delegates registration to the platform, which is obvious behaviour if you are deploying your application internally using Kubernetes objects. With an external application the situation is different. In fact, you have to guarantee registration by yourself on the application side. Continue reading “Spring Cloud Kubernetes For Hybrid Microservices Architecture”

Spring Boot Best Practices for Microservices

In this article I’m going to propose my list of “golden rules” for building Spring Boot applications that are part of a microservices-based system. I’m basing it on my experience migrating monolithic SOAP applications running on JEE servers to small REST-based applications built on top of Spring Boot. This list of best practices assumes you are running many microservices in production under heavy incoming traffic. Let’s begin. Continue reading “Spring Boot Best Practices for Microservices”

Secure Spring Cloud Config

If you are building a microservices architecture on top of Spring Boot and Spring Cloud, I’m almost sure that one of the projects you are using is Spring Cloud Config. Spring Cloud Config is responsible for implementing one of the most popular microservices patterns, called distributed configuration. It provides server-side (Spring Cloud Config Server) and client-side (Spring Cloud Config Client) support for externalized configuration in a distributed system. In this article I focus on the security aspects related to that project. If you are interested in the basics, please refer to my previous article about it, Microservices Configuration With Spring Cloud Config. Continue reading “Secure Spring Cloud Config”

Microservices with Spring Boot, Spring Cloud Gateway and Consul Cluster

The Spring Cloud Consul project provides integration for Consul and Spring Boot applications through auto-configuration. By using the well-known Spring Framework annotation style, we may enable and configure common patterns within microservice-based environments. These patterns include service discovery using Consul agent, distributed configuration using Consul key/value store, distributed events with Spring Cloud Bus, and Consul Events. The project also supports a client-side load balancer based on Netflix’s Ribbon and an API gateway based on Spring Cloud Gateway. Continue reading “Microservices with Spring Boot, Spring Cloud Gateway and Consul Cluster”

Using Reactive WebClient with Spring WebFlux

Reactive APIs and reactive programming in general have become increasingly popular lately. You can observe more and more new frameworks and toolkits supporting reactive programming, or even dedicated to it. Today, in the era of microservices architecture, where network communication through APIs becomes critical for applications, reactive APIs seem to be an attractive alternative to the traditional, synchronous approach. They should definitely be considered as a primary approach if you are working with large streams of data exchanged over the network. Continue reading “Using Reactive WebClient with Spring WebFlux”

Performance Comparison Between Spring MVC and Spring WebFlux with Elasticsearch

Since Spring 5 and Spring Boot 2 there is full support for reactive REST APIs with the Spring WebFlux project. The Spring Data project has also been systematically adding support for reactive NoSQL databases, and recently for SQL databases too. Since Spring Data Moore we can take advantage of a reactive template and repository for Elasticsearch, which I have already described in one of my previous articles, Reactive Elasticsearch With Spring Boot. Continue reading “Performance Comparison Between Spring MVC and Spring WebFlux with Elasticsearch”

Reactive Elasticsearch With Spring Boot

One of the more notable features introduced in the latest release of Spring Data is reactive support for Elasticsearch. Since Spring Data Moore we can take advantage of a reactive template and repository. It is built on top of the fully reactive Elasticsearch REST client, which is based on Spring WebClient. It is also worth mentioning the support for reactive Querydsl, which can be included in your application through ReactiveQueryPredicateExecutor. Continue reading “Reactive Elasticsearch With Spring Boot”

Reactive Logging With Spring WebFlux and Logstash

I have already introduced my Spring Boot library for synchronous HTTP request/response logging in one of my previous articles, Logging with Spring Boot and Elastic Stack. That library is dedicated to synchronous REST applications built with Spring MVC and Spring Web. Since version 5.0, Spring Framework also offers support for reactive REST APIs through the Spring WebFlux project. I decided to extend my library's logging support to reactive Spring WebFlux.

Continue reading “Reactive Logging With Spring WebFlux and Logstash”

Using logstash-logging-spring-boot-starter for logging with Spring Boot and Logstash

I have already described some implementation details related to my library logstash-logging-spring-boot-starter for HTTP request/response logging in one of my previous articles, Logging with Spring Boot and Elastic Stack. The article was published some weeks ago, and since that time some important features have been added to this library. Today I’m going to summarise all those changes and describe all the features provided by the library.

Continue reading “Using logstash-logging-spring-boot-starter for logging with Spring Boot and Logstash”

Deploying Spring Boot Application on OpenShift with Dekorate

More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes, OpenShift provides the S2I (Source-to-Image) mechanism, which may help reduce the time required to prepare application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach to building deployment configuration from source code. Dekorate (https://dekorate.io), a recently created open-source project, tries to solve that problem. The project seems very interesting, which appears to be confirmed by Red Hat, who have already announced a decision to include Dekorate in Red Hat OpenShift Application Runtimes as a “Tech Preview”. Continue reading “Deploying Spring Boot Application on OpenShift with Dekorate”

Logging with Spring Boot and Elastic Stack

In this article I’ll introduce my logging library designed especially for Spring Boot RESTful web applications. The main assumptions regarding this library are:

  • Logging all incoming HTTP requests and outgoing HTTP responses with full body
  • Integration with Elastic Stack through Logstash using logstash-logback-encoder library
  • Possibility of enabling logging on the client side for the most commonly used components in a Spring Boot application: RestTemplate and OpenFeign
  • Generating and propagating correlationId across all communication within a single API endpoint call
  • Calculating and storing execution time for each request
  • The library should be auto-configurable – you don’t have to do anything more than include it as a dependency in your application to make it work

Continue reading “Logging with Spring Boot and Elastic Stack”

Performance Comparison Between Spring Boot and Micronaut

Today we will compare two frameworks used for building microservices on the JVM: Spring Boot and Micronaut. The first of them, Spring Boot, is currently the most popular and opinionated framework in the JVM world. On the other side of the barrier stands Micronaut, a framework quickly gaining popularity, designed especially for building serverless functions and low memory-footprint microservices. We will be comparing version 2.1.4 of Spring Boot with 1.0.0.RC1 of Micronaut. The comparison criteria are:

  • memory usage (heap and non-heap)
  • the size in MB of generated fat JAR file
  • the application startup time
  • the performance of the application, measured as the average response time from the REST endpoint during sample load testing

Continue reading “Performance Comparison Between Spring Boot and Micronaut”

Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine especially designed for working with large data sets. Following this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana, it is a part of a powerful solution called the Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text searches over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always a question about the advantages and disadvantages of such an approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can also move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins). Continue reading “Elasticsearch with Spring Boot”

A Magic Around Spring Boot Externalized Configuration

There are some things I really like in Spring Boot, and one of them is externalized (external) configuration. Spring Boot allows you to configure your application in many ways. You have 17 levels of loading configuration properties into the application. All of them are described in Chapter 24 of the Spring Boot documentation, available here.

This article was inspired by some recent talks with developers about problems with the configuration of their applications. They hadn’t heard about some interesting features that may be used to make it more flexible and cleaner. Continue reading “A Magic Around Spring Boot Externalized Configuration”

Introduction to Spring Data Redis

Redis is an in-memory data structure store with optional durability, used as a database, cache and message broker. Currently, it is the most popular tool in the key/value stores category: https://db-engines.com/en/ranking/key-value+store. The easiest way to integrate your application with Redis is through Spring Data Redis. You can use Spring RedisTemplate directly for that, or you might as well use Spring Data Redis repositories. There are some limitations when you integrate with Redis via Spring Data Redis repositories. They require at least Redis Server version 2.8.0 and do not work with transactions. Therefore you need to disable transaction support for RedisTemplate, which is leveraged by Redis repositories. Continue reading “Introduction to Spring Data Redis”

Kotlin Microservice with Spring Boot

You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is more and more often used with Spring Boot for building backend services. Starting with version 5, Spring Framework introduced first-class support for Kotlin. In this article I’m going to show you an example of a microservice built with Kotlin and Spring Boot 2. I’ll describe some interesting features of Spring Boot, which can be treated as a set of good practices when building backend, REST-based microservices. Continue reading “Kotlin Microservice with Spring Boot”

Spring Boot Autoscaler

One of the more important reasons we decide to use tools like Kubernetes, Pivotal Cloud Foundry or HashiCorp’s Nomad is the availability of auto-scaling for our applications. Of course, those tools provide many other useful mechanisms, but we can also implement auto-scaling by ourselves. At first glance it seems difficult, but assuming we use Spring Boot as the framework for building our applications and Jenkins as a CI server, it really does not require a lot of work. Today I’m going to show you how to implement such a solution using the following frameworks/tools:

  • Spring Boot
  • Spring Boot Actuator
  • Spring Cloud Netflix Eureka
  • Jenkins CI

How does it work?

Every Spring Boot application that includes the Spring Boot Actuator library can expose metrics under the endpoint /actuator/metrics. There are many valuable metrics that give you detailed information about the application status. Some of them may be especially important when talking about autoscaling: JVM and CPU metrics, the number of running threads and the number of incoming HTTP requests. A dedicated Jenkins pipeline is responsible for monitoring the application’s metrics by polling the endpoint /actuator/metrics periodically. If any monitored metric is below or above the target range, it starts a new instance or shuts down a running instance of the application using another Actuator endpoint, /actuator/shutdown. Before that, it needs to fetch the current list of running instances of the application in order to get the address of an existing instance selected for shutdown, or the address of the server with the smallest number of running instances for a new instance.

[Image: spring-autoscaler-1 – architecture of the autoscaling solution]

After discussing the architecture of our system we may proceed to the development. Our application needs to meet some requirements: it has to expose metrics and an endpoint for graceful shutdown, it needs to register in Eureka after startup and deregister on shutdown, and finally it should also dynamically allocate its running port randomly from the pool of free ports. Thanks to Spring Boot we may easily implement all these mechanisms in five minutes 🙂

Dynamic port allocation

Since it is possible to run many instances of the application on a single machine, we have to guarantee that there won’t be conflicts in port numbers. Fortunately, Spring Boot provides such a mechanism. We just need to set the port number to 0 inside the application.yml file using the server.port property. Because our application registers itself in Eureka, it also needs to send a unique instanceId, which by default is generated as a concatenation of the fields spring.cloud.client.hostname, spring.application.name and server.port.
Here’s the current configuration of our sample application. I have changed the template of the instanceId field by replacing the port number with a randomly generated number.

spring:
  application:
    name: example-service
server:
  port: ${PORT:0}
eureka:
  instance:
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${random.int[1,999999]}

Enabling Actuator metrics

To enable Spring Boot Actuator we need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

We also have to enable exposure of the Actuator endpoints via HTTP API by setting the property management.endpoints.web.exposure.include to '*'. Now the list of all available metric names is available under the context path /actuator/metrics, while detailed information for each metric is available under the path /actuator/metrics/{metricName}.
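
Just to illustrate what that endpoint returns, here's a minimal Java sketch (not part of the sample project; the base URL and metric name are assumptions) that fetches a single metric over HTTP and reads its current value from the measurements array:

import java.util.List;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class MetricReader {

	// Hypothetical helper, not part of the sample project.
	// Reads the current value of a single Actuator metric exposed over HTTP.
	@SuppressWarnings("unchecked")
	public static double readMetric(String baseUrl, String metricName) {
		RestTemplate restTemplate = new RestTemplate();
		// The response JSON contains a "measurements" array with objects holding "statistic" and "value".
		Map<String, Object> metric = restTemplate.getForObject(baseUrl + "/actuator/metrics/" + metricName, Map.class);
		List<Map<String, Object>> measurements = (List<Map<String, Object>>) metric.get("measurements");
		return ((Number) measurements.get(0).get("value")).doubleValue();
	}

	public static void main(String[] args) {
		// Assumed address of a locally running instance of example-service.
		System.out.println(readMetric("http://localhost:8080", "tomcat.threads.busy"));
	}

}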

Graceful shutdown

Besides metrics, Spring Boot Actuator also provides an endpoint for shutting down an application. However, in contrast to other endpoints, this endpoint is not available by default. We have to set the property management.endpoint.shutdown.enabled to true. After that we will be able to stop our application by sending a POST request to the /actuator/shutdown endpoint.
This method of stopping the application guarantees that the service will unregister itself from the Eureka server before shutdown.
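
In this article the shutdown request is sent by the Jenkins pipeline shown later, but just as an illustration, here's a minimal Java sketch of such a call (the instance address is an assumption):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

public class ShutdownCaller {

	public static void main(String[] args) {
		// Hypothetical instance address – in the autoscaler it is fetched from Eureka.
		String url = "http://localhost:8080/actuator/shutdown";
		HttpHeaders headers = new HttpHeaders();
		headers.setContentType(MediaType.APPLICATION_JSON);
		// The shutdown endpoint expects an empty POST request.
		String response = new RestTemplate().postForObject(url, new HttpEntity<Void>(headers), String.class);
		System.out.println(response);
	}

}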

Enabling Eureka discovery

Eureka is the most popular discovery server used for building microservices-based architectures with Spring Cloud. So, if you already have microservices and want to provide an auto-scaling mechanism for them, Eureka would be a natural choice. It contains the IP address and port number of every registered application instance. To enable Eureka on the client side you just need to include the following dependency in your pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

As I have mentioned before, we also have to guarantee the uniqueness of the instanceId sent to the Eureka server by the client-side application. This has been described in the step “Dynamic port allocation”.
The next step is to create an application with an embedded Eureka server. To achieve that we first need to include the following dependency in pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

The main class should be annotated with @EnableEurekaServer.

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApp {

    public static void main(String[] args) {
        new SpringApplicationBuilder(DiscoveryApp.class).run(args);
    }

}

Client-side applications try to connect with the Eureka server on localhost under port 8761 by default. We only need a single, standalone Eureka node, so we will disable registration and attempts to fetch the list of services from other server instances.

spring:
  application:
    name: discovery-service
server:
  port: ${PORT:8761}
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

The tests of the sample autoscaling system will be performed using Docker containers, so we need to prepare and build an image with the Eureka server. Here’s the Dockerfile with the image definition. It can be built using the command docker build -t piomin/discovery-server:2.0 .

FROM openjdk:8-jre-alpine
ENV APP_FILE discovery-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8761
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Building Jenkins pipeline for autoscaling

The first step is to prepare a Jenkins pipeline responsible for autoscaling. We will create a Jenkins Declarative Pipeline, which runs every minute. Periodic execution may be configured with the triggers directive, which defines the automated ways in which the pipeline should be re-triggered. Our pipeline will communicate with the Eureka server and the metrics endpoints exposed by every microservice using Spring Boot Actuator.
The test service name is EXAMPLE-SERVICE, which is equal to the uppercase value of the spring.application.name property defined inside the application.yml file. The monitored metric is the number of HTTP listener threads running on the Tomcat container. These threads are responsible for processing incoming HTTP requests.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
    }
    stages { ... }
}

Integrating Jenkins pipeline with Eureka

The first stage of our pipeline is responsible for fetching the list of services registered in the service discovery server. Eureka exposes an HTTP API with several endpoints. One of them is GET /eureka/apps/{serviceName}, which returns the list of all instances of the application with the given name. We are saving the number of running instances and the URL of the metrics endpoint of every single instance. These values will be accessed during the next stages of the pipeline.
Here’s the fragment of the pipeline responsible for fetching the list of running instances of the application. The name of the stage is Calculate. We use the HTTP Request Plugin for HTTP connections.

stage('Calculate') {
	steps {
		script {
			def response = httpRequest "http://192.168.99.100:8761/eureka/apps/${env.SERVICE_NAME}"
			def app = printXml(response.content)
			def index = 0
			env["INSTANCE_COUNT"] = app.instance.size()
			app.instance.each {
				if (it.status == 'UP') {
					def address = "http://${it.ipAddr}:${it.port}"
					env["INSTANCE_${index++}"] = address 
				}
			}
		}
	}
}

@NonCPS
def printXml(String text) {
    return new XmlSlurper(false, false).parseText(text)
}

Here’s a sample response from Eureka API for our microservice. The response content type is XML.

[Image: spring-autoscaler-2 – sample XML response from the Eureka API]

Integrating Jenkins pipeline with Spring Boot Actuator metrics

Spring Boot Actuator exposes the metrics endpoint, which allows you to find a metric by name and optionally by tag. In the fragment of the pipeline visible below I’m trying to find an instance with the metric below or above a defined threshold. If there is such an instance, we stop the loop in order to proceed to the next stage, which performs scaling down or up. The IP addresses of the running applications are taken from pipeline environment variables with the prefix INSTANCE_, which have been saved in the previous stage.

stage('Metrics') {
	steps {
		script {
			def count = env.INSTANCE_COUNT
			for(def i=0; i<count; i++) {
				def ip = env["INSTANCE_${i}"] + env.METRICS_ENDPOINT
				if (ip == null)
					break;
				def response = httpRequest ip
				def objRes = printJson(response.content)
				env.SCALE_TYPE = returnScaleType(objRes)
				if (env.SCALE_TYPE != "NONE")
					break
			}
		}
	}
}

@NonCPS
def printJson(String text) {
    return new JsonSlurper().parseText(text)
}

def returnScaleType(objRes) {
	def value = objRes.measurements[0].value
	if (value.toInteger() > 100)
		return "UP"
	else if (value.toInteger() < 20)
		return "DOWN"
	else
		return "NONE"
}

Shutdown application instance

In the last stage of our pipeline we will shut down a running instance or start a new instance, depending on the result saved in the previous stage. Shutdown may be easily performed by calling the Spring Boot Actuator endpoint. In the following fragment of the pipeline we pick the first instance returned by Eureka. Then we send a POST request to its address.
If we need to scale up our application, we call another pipeline responsible for building the fat JAR and launching it on our machine.

stage('Scaling') {
	steps {
		script {
			if (env.SCALE_TYPE == 'DOWN') {
				def ip = env["INSTANCE_0"] + env.SHUTDOWN_ENDPOINT
				httpRequest url:ip, contentType:'APPLICATION_JSON', httpMode:'POST'
			} else if (env.SCALE_TYPE == 'UP') {
				build job:'spring-boot-run-pipeline'
			}
			currentBuild.description = env.SCALE_TYPE
		}
	}
}

Here’s the full definition of the spring-boot-run-pipeline pipeline responsible for starting a new instance of the application. It clones the repository with the application source code, builds binaries using Maven commands, and finally runs the application using the java -jar command, passing the address of the Eureka server as a parameter.

pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Run') {
            steps {
                dir('example-service') {
                    sh 'nohup java -jar -DEUREKA_URL=http://192.168.99.100:8761/eureka target/example-service-1.0-SNAPSHOT.jar 1>/dev/null 2>logs/runlog &'
                }
            }
        }
    }
}

Remote extension

The algorithm discussed in the previous sections will work fine only for microservices launched on a single machine. If we would like to extend it to many machines, we have to modify our architecture as shown below. Each machine has a Jenkins agent running and communicating with the Jenkins master. If we would like to start a new microservice instance on a selected machine, we have to run the pipeline using the agent running on that machine. This agent is responsible only for building the application from source code and launching it on the target machine. The shutdown of an instance is still performed just by calling the HTTP endpoint.

[Image: spring-autoscaler-3 – architecture extended with remote Jenkins agents]

You can find more information about running Jenkins agents and connecting them with the Jenkins master via the JNLP protocol in my article Jenkins nodes on Docker containers. Assuming we have successfully launched some agents on the target machines, we need to parametrize our pipelines in order to be able to select the agent (and therefore the target machine) dynamically.
When we are scaling up our application we have to pass the agent label to the downstream pipeline.

build job:'spring-boot-run-pipeline', parameters:[string(name: 'agent', value:"slave-1")]

The called pipeline will be run by the agent labelled with the given parameter.

pipeline {
    agent {
        label "${params.agent}"
    }
    stages { ... }
}

If we have more than one agent connected to the master node, we can map their addresses to the labels. Thanks to that, you would be able to map the IP address of a microservice instance fetched from Eureka to the target machine with a Jenkins agent.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
        AGENT_192.168.99.102 = "slave-1"
        AGENT_192.168.99.103 = "slave-2"
    }
    stages { ... }
}

Summary

In this article I have demonstrated how to use Spring Boot Actuator metrics in order to scale your Spring Boot application up or down. Using basic mechanisms provided by Spring Boot together with Spring Cloud Netflix Eureka and Jenkins, you can implement auto-scaling for your applications without any third-party tools. The case described in this article assumes using Jenkins agents on the remote machines to launch new application instances there, but you may as well use a tool like Ansible for that. If you decide to run Ansible playbooks from Jenkins, you will not have to launch Jenkins agents on the remote machines. The source code with the sample applications is available on GitHub: https://github.com/piomin/sample-spring-boot-autoscaler.git.

GraphQL – The Future of Microservices?

Often, GraphQL is presented as a revolutionary way of designing web APIs in comparison to REST. However, if you take a closer look at that technology, you will see that there are not so many differences between them. GraphQL is a relatively new solution that was open-sourced by Facebook in 2015. Today, REST is still the most popular paradigm used for exposing APIs and inter-service communication between microservices. Is GraphQL going to overtake REST in the future? Let’s take a look at how to create microservices communicating through a GraphQL API using Spring Boot and the Apollo client.

Let’s begin with the architecture of our sample system. We have three microservices that communicate with each other using URLs taken from Eureka service discovery.

[Image: graphql-arch – architecture of the sample system]

1. Enabling Spring Boot support for GraphQL

We can easily enable support for GraphQL in a server-side Spring Boot application just by including some starters. After including graphql-spring-boot-starter, the GraphQL servlet is automatically accessible under the path /graphql. We can override that default path by setting the property graphql.servlet.mapping in the application.yml file. We should also enable GraphiQL – an in-browser IDE for writing, validating, and testing GraphQL queries – and the GraphQL Java Tools library, which contains useful components for creating queries and mutations. Thanks to that library, any file on the classpath with the .graphqls extension will be used to provide the schema definition.

<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphql-spring-boot-starter</artifactId>
	<version>5.0.2</version>
</dependency>
<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphiql-spring-boot-starter</artifactId>
	<version>5.0.2</version>
</dependency>
<dependency>
	<groupId>com.graphql-java</groupId>
	<artifactId>graphql-java-tools</artifactId>
	<version>5.2.3</version>
</dependency>

2. Building GraphQL schema definition

Every schema definition contains data type declarations, relationships between them, and a set of operations, including queries for searching objects and mutations for creating, updating or deleting data. Usually we start by creating a type declaration, which is responsible for the domain object definition. You can mark a field as required using the ! character or as an array using [...]. Every field definition has to contain a type declaration or a reference to other types available in the specification.

type Employee {
  id: ID!
  organizationId: Int!
  departmentId: Int!
  name: String!
  age: Int!
  position: String!
  salary: Int!
}

Here’s the Java class equivalent to the GraphQL definition above. The GraphQL Int type can also be mapped to Java Long. The ID scalar type represents a unique identifier – in this case it is also mapped to Java Long.

public class Employee {

	private Long id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	private int salary;
	
	// constructor
	
	// getters
	// setters
	
}

The next part of the schema definition contains the queries and mutations declaration. Most of the queries return a list of objects, which is marked with [Employee]. Inside the EmployeeQueries type we have declared all find methods, while the EmployeeMutations type contains methods for adding, updating and removing employees. If you pass a whole object to such a method, you need to declare it as an input type.

schema {
  query: EmployeeQueries
  mutation: EmployeeMutations
}

type EmployeeQueries {
  employees: [Employee]
  employee(id: ID!): Employee!
  employeesByOrganization(organizationId: Int!): [Employee]
  employeesByDepartment(departmentId: Int!): [Employee]
}

type EmployeeMutations {
  newEmployee(employee: EmployeeInput!): Employee
  deleteEmployee(id: ID!) : Boolean
  updateEmployee(id: ID!, employee: EmployeeInput!): Employee
}

input EmployeeInput {
  organizationId: Int
  departmentId: Int
  name: String
  age: Int
  position: String
  salary: Int
}

3. Queries and mutation implementation

Thanks to GraphQL Java Tools and the Spring Boot GraphQL auto-configuration, we don’t need to do much to implement queries and mutations in our application. The EmployeeQueries bean has to implement the GraphQLQueryResolver interface. Based on that, Spring is able to automatically detect and call the right method in response to one of the GraphQL queries declared inside the schema. Here’s the class containing the implementation of the queries.

@Component
public class EmployeeQueries implements GraphQLQueryResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeQueries.class);
	
	@Autowired
	EmployeeRepository repository;
	
	public List<Employee> employees() {
		LOGGER.info("Employees find");
		return repository.findAll();
	}
	
	public List<Employee> employeesByOrganization(Long organizationId) {
		LOGGER.info("Employees find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}

	public List<Employee> employeesByDepartment(Long departmentId) {
		LOGGER.info("Employees find: departmentId={}", departmentId);
		return repository.findByDepartment(departmentId);
	}
	
	public Employee employee(Long id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id);
	}
	
}
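
The EmployeeRepository used above is a plain data access bean. As a point of reference, here's a minimal in-memory sketch of what such a repository could look like (hypothetical – the sample project ships its own implementation, and the Employee accessors are assumed to follow the getters/setters shown earlier):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
import org.springframework.stereotype.Repository;

@Repository
public class EmployeeRepository {

	// Simple in-memory store – a hypothetical sketch, not the sample's actual implementation.
	private final List<Employee> employees = new ArrayList<>();
	private final AtomicLong nextId = new AtomicLong();

	public Employee add(Employee employee) {
		employee.setId(nextId.incrementAndGet());
		employees.add(employee);
		return employee;
	}

	public Employee findById(Long id) {
		return employees.stream().filter(e -> e.getId().equals(id)).findFirst().orElse(null);
	}

	public List<Employee> findAll() {
		return employees;
	}

	public List<Employee> findByOrganization(Long organizationId) {
		return employees.stream().filter(e -> e.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
	}

	public List<Employee> findByDepartment(Long departmentId) {
		return employees.stream().filter(e -> e.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
	}

}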

If you would like to call, for example, the method employee(Long id), you should build the following query. You can easily test it in your application using the GraphiQL tool, available under the path /graphiql.

[Screenshot: graphql-1 – an employee(id) query executed in GraphiQL]
The bean responsible for implementing mutation methods needs to implement GraphQLMutationResolver. Despite the declaration of EmployeeInput, we still use the same domain object as the one returned by the queries – Employee.

@Component
public class EmployeeMutations implements GraphQLMutationResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeMutations.class);
	
	@Autowired
	EmployeeRepository repository;
	
	public Employee newEmployee(Employee employee) {
		LOGGER.info("Employee add: employee={}", employee);
		return repository.add(employee);
	}
	
	public boolean deleteEmployee(Long id) {
		LOGGER.info("Employee delete: id={}", id);
		return repository.delete(id);
	}
	
	public Employee updateEmployee(Long id, Employee employee) {
		LOGGER.info("Employee update: id={}, employee={}", id, employee);
		return repository.update(id, employee);
	}
	
}

We can also use GraphiQL to test mutations. Here’s the command that adds a new employee and receives a response with the employee’s id and name.

[Screenshot: graphql-2 – a newEmployee mutation executed in GraphiQL]

4. Generating client-side classes

Ok, we have successfully created the server-side application. We have already tested some queries using GraphiQL. But our main goal is to create some other microservices that communicate with the employee-service application through the GraphQL API. This is where most tutorials about Spring Boot and GraphQL end.
To be able to communicate with our first application through the GraphQL API we have two choices. We can take a standard REST client and implement the GraphQL API calls by ourselves with plain HTTP requests, or use one of the existing Java clients. Surprisingly, there are not many GraphQL Java client implementations available. The most serious choice is the Apollo GraphQL Client for Android. Of course, it is not designed only for Android devices, and you can successfully use it in your Java microservice application.
Before using the client we need to generate classes from the schema and .graphql files. The recommended way to do it is through the Apollo Gradle Plugin. There are also some Maven plugins, but none of them provide the same level of automation as the Gradle plugin – for example, it automatically downloads the node.js runtime required for generating client-side classes. So, the first step is to add the Apollo plugin and runtime to the project dependencies.

buildscript {
  repositories {
    jcenter()
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
  }
  dependencies {
    classpath 'com.apollographql.apollo:apollo-gradle-plugin:1.0.1-SNAPSHOT'
  }
}

apply plugin: 'com.apollographql.android'

dependencies {
  compile 'com.apollographql.apollo:apollo-runtime:1.0.1-SNAPSHOT'
}

The GraphQL Gradle plugin tries to find files with the .graphql extension and schema.json inside the src/main/graphql directory. The GraphQL JSON schema can be obtained from your Spring Boot application by calling the resource /graphql/schema.json. The .graphql files contain the query definitions. The query employeesByOrganization will be called by organization-service, while employeesByDepartment will be called by both department-service and organization-service. Those two applications need slightly different sets of data in the response. The department-service application requires more detailed information about every employee than organization-service. GraphQL is an excellent solution in that case, because we can define the required set of data in the response on the client side. Here’s the query definition of employeesByOrganization called by organization-service.

query EmployeesByOrganization($organizationId: Int!) {
  employeesByOrganization(organizationId: $organizationId) {
    id
    name
  }
}

The organization-service application also calls the employeesByDepartment query.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
  }
}

The query employeesByDepartment is also called by department-service, which requires not only id and name fields, but also position and salary.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
    position
    salary
  }
}

All the generated classes are available under build/generated/source/apollo directory.

5. Building Apollo client with discovery

After generating all required classes and including them in the calling microservices, we may proceed to the client implementation. The Apollo client has two important characteristics that will affect our development:

  • It provides only asynchronous methods based on callbacks
  • It does not integrate with service discovery based on Spring Cloud Netflix Eureka

Here’s the implementation of the employee-service client inside department-service. I used EurekaClient directly (1). It gets all running instances registered as EMPLOYEE-SERVICE. Then it selects one instance from the list of available instances randomly (2). The port number of that instance is passed to ApolloClient (3). Before calling the asynchronous enqueue method provided by ApolloClient, we create a lock (4), which waits a maximum of 5 seconds to be released (8). The enqueue method returns the response in the callback method onResponse (5). We map the response body from the GraphQL Employee object to the returned object (6) and then release the lock (7).

@Component
public class EmployeeClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeClient.class);
	private static final int TIMEOUT = 5000;
	private static final String SERVICE_NAME = "EMPLOYEE-SERVICE"; 
	private static final String SERVER_URL = "http://localhost:%d/graphql";
	
	Random r = new Random();
	
	@Autowired
	private EurekaClient discoveryClient; // (1)
	
	public List<Employee> findByDepartment(Long departmentId) throws InterruptedException {
		List<Employee> employees = new ArrayList<>();
		Application app = discoveryClient.getApplication(SERVICE_NAME); // (2)
		InstanceInfo ii = app.getInstances().get(r.nextInt(app.size()));
		ApolloClient client = ApolloClient.builder().serverUrl(String.format(SERVER_URL, ii.getPort())).build(); // (3)
		CountDownLatch lock = new CountDownLatch(1); // (4)
		client.query(EmployeesByDepartmentQuery.builder().build()).enqueue(new Callback<EmployeesByDepartmentQuery.Data>() {

			@Override
			public void onFailure(ApolloException ex) {
				LOGGER.info("Err: {}", ex);
				lock.countDown();
			}

			@Override
			public void onResponse(Response<EmployeesByDepartmentQuery.Data> res) { // (5)
				LOGGER.info("Res: {}", res);
				employees.addAll(res.data().employees().stream().map(emp -> new Employee(Long.valueOf(emp.id()), emp.name(), emp.position(), emp.salary())).collect(Collectors.toList())); // (6)
				lock.countDown(); // (7)
			}

		});
		lock.await(TIMEOUT, TimeUnit.MILLISECONDS); // (8)
		return employees;
	}
	
}

Finally, EmployeeClient is injected into the query resolver class – DepartmentQueries, and used inside query departmentsByOrganizationWithEmployees.

@Component
public class DepartmentQueries implements GraphQLQueryResolver {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentQueries.class);
	
	@Autowired
	EmployeeClient employeeClient;
	@Autowired
	DepartmentRepository repository;

	public List<Department> departmentsByOrganizationWithEmployees(Long organizationId) {
		LOGGER.info("Departments find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> {
			try {
				d.setEmployees(employeeClient.findByDepartment(d.getId()));
			} catch (InterruptedException e) {
				LOGGER.error("Error calling employee-service", e);
			}
		});
		return departments;
	}
	
	// other queries
	
}

Before calling the target query we should take a look at the schema created for department-service. Every Department object can contain a list of assigned employees, so we also define the Employee type referenced by the Department type.

schema {
  query: DepartmentQueries
  mutation: DepartmentMutations
}

type DepartmentQueries {
  departments: [Department]
  department(id: ID!): Department!
  departmentsByOrganization(organizationId: Int!): [Department]
  departmentsByOrganizationWithEmployees(organizationId: Int!): [Department]
}

type DepartmentMutations {
  newDepartment(department: DepartmentInput!): Department
  deleteDepartment(id: ID!) : Boolean
  updateDepartment(id: ID!, department: DepartmentInput!): Department
}

input DepartmentInput {
  organizationId: Int!
  name: String!
}

type Department {
  id: ID!
  organizationId: Int!
  name: String!
  employees: [Employee]
}

type Employee {
  id: ID!
  name: String!
  position: String!
  salary: Int!
}
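
On the Java side, the Department domain object referenced by this schema could look as follows – a sketch consistent with the type definitions above (the sample project may model it slightly differently):

import java.util.ArrayList;
import java.util.List;

public class Department {

	private Long id;
	private Long organizationId;
	private String name;
	// Employees fetched from employee-service through the Apollo client.
	private List<Employee> employees = new ArrayList<>();

	// constructor

	// getters
	// setters

}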

Now we can call our test query with the list of required fields using GraphiQL. The department-service application is by default available under port 8091, so we may call it using the address http://localhost:8091/graphiql.

[Screenshot: graphql-3 – the departmentsByOrganizationWithEmployees query executed in GraphiQL]

Conclusion

GraphQL seems to be an interesting alternative to standard REST APIs. However, we should not consider it a replacement for REST. There are some use cases where GraphQL may be the better choice, and some use cases where REST is the better choice. If your clients do not need the full set of fields returned by the server side, and moreover you have many clients with different requirements for a single endpoint, GraphQL is a good choice. When it comes to microservices, there are no Java-based solutions that allow you to use GraphQL together with service discovery, load balancing or an API gateway out of the box. In this article I have shown an example of using the Apollo GraphQL client together with Spring Cloud Netflix Eureka for inter-service communication. The sample applications’ source code is available on GitHub: https://github.com/piomin/sample-graphql-microservices.git.

Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker

Here’s the next article in my “Quick Guide to…” series. This time we will discuss and run examples of Spring Boot microservices on Kubernetes. The structure of this article is quite similar to Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud, as they describe the same aspects of application development. I’m going to focus on showing you the differences and similarities in development between Spring Cloud and Kubernetes. The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development
  • Providing service discovery for all microservices using Spring Cloud Kubernetes project
  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets
  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files
  • Using Spring Cloud Kubernetes together with Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components like Eureka, Spring Cloud Config or Zuul provided by Spring Cloud may be replaced by built-in Kubernetes objects like services, config maps, secrets or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can still take advantage of some interesting features provided throughout the whole Spring Cloud project.

The one really interesting project that helps us in development is Spring Cloud Kubernetes (https://github.com/spring-cloud-incubator/spring-cloud-kubernetes). Although it is still in the incubation stage, it is definitely worth dedicating some time to it. It integrates Spring Cloud with Kubernetes. I’ll show you how to use its discovery client implementation, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let’s take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the already mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through REST APIs. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for an API gateway.

[Image: micro-kube-1 – architecture of the sample microservices on Kubernetes]

Let’s proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released as part of the Spring Cloud Release Train. So, we need to explicitly define its version. Because we use Spring Boot 2.0, we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of sample applications presented in this article is available on GitHub in repository https://github.com/piomin/sample-spring-microservices-kubernetes.git.

Pre-requirements

In order to be able to deploy and test our sample microservices, we need to prepare a development environment. We can do that in the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. Detailed instructions for Minishift may be found here: Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube – just replace the word ‘minishift’ with ‘minikube’. In fact, it does not matter if you choose Kubernetes or OpenShift – the next part of this tutorial is applicable to both of them
  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve the list of addresses of the pods running for a single service. If you use Kubernetes you should just execute the following command:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift you should first enable admin-user addon, then login as a cluster admin, and grant required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
  • All our sample microservices use MongoDB as a backend store. So, you should first run an instance of this database on your node. With Minishift it is quite simple, as you can use a predefined template just by selecting the Mongo service from the Catalog list. With Kubernetes the task is a bit more difficult. You have to prepare the deployment configuration files by yourself and apply them to the cluster. All the configuration files are available under the kubernetes directory inside the sample Git repository. To apply the following YAML definition to the cluster you should execute the command kubectl apply -f kubernetes\mongo-deployment.yaml. After that, the Mongo database will be available under the name mongodb inside the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject configuration with Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes you can use a Config Map. It holds key-value pairs of configuration data that can be consumed in pods. It is used for storing and sharing non-sensitive, unencrypted configuration information. To use sensitive information in your clusters, you must use Secrets. The usage of both of these Kubernetes objects can be perfectly demonstrated using the example of MongoDB connection settings. Inside a Spring Boot application, we can easily inject them using environment variables. Here’s the fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While the username and password are sensitive fields, the database name is not. So we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=

To apply the configuration to Kubernetes cluster we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that we should inject the configuration properties into the application’s pods. When defining the container configuration inside the Deployment YAML file, we have to include environment variables that reference the config map and secrets, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building service discovery with Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice, you would just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster and is able to load balance incoming requests between all running pods. The following service definition groups all pods labelled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee

A Service can be used for accessing the application from outside the Kubernetes cluster or for inter-service communication inside the cluster. However, communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First, we need to include the following dependency in the project’s pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for the application – the same as we have always done with Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of pods defined for a microservice to be load balanced, or the Zipkin servers available for sending traces and spans.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
	// ...
}
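
To see the discovery client in action you can inject the standard Spring Cloud DiscoveryClient and list the instances registered for a given Kubernetes service. Here's a minimal sketch (the /discovery endpoint is an illustrative assumption, it is not part of the sample project):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DiscoveryController {

	@Autowired
	private DiscoveryClient discoveryClient;

	// Returns the addresses of all pods backing the given Kubernetes service.
	@GetMapping("/discovery/{serviceId}")
	public List<String> instances(@PathVariable("serviceId") String serviceId) {
		return discoveryClient.getInstances(serviceId).stream()
				.map(instance -> instance.getHost() + ":" + instance.getPort())
				.collect(Collectors.toList());
	}

}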

The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name for the application. For the application employee-service it is employee.

spring:
  application:
    name: employee

3. Building microservice using Docker and deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB and generating API documentation using Swagger2.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB we should create an interface that extends the standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
	
	List<Employee> findByDepartmentId(Long departmentId);
	List<Employee> findByOrganizationId(Long organizationId);
	
}

The entity class should be annotated with Mongo's @Document and the primary key field with @Id.

@Document(collection = "employee")
public class Employee {

	@Id
	private String id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	
	// ...
	
}

The repository bean has been injected into the controller class. Here's the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

In order to run our microservices on Kubernetes we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in its root directory. Here's the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let’s build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd ../department-service
$ docker build -t piomin/department:1.0 .
$ cd ../organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with applications on Kubernetes. To do that just execute the kubectl apply command on the YAML configuration files. The sample deployment file for employee-service has been demonstrated in step 1. All required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml

4. Communication between microservices with Spring Cloud Kubernetes Ribbon

All the microservices are deployed on Kubernetes. Now, it's worth discussing some aspects related to inter-service communication. The employee-service application, in contrast to the other microservices, does not invoke any other microservices. Let's take a look at the other microservices that call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).
First we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use Spring's @LoadBalanced RestTemplate (a sketch of that approach follows the Feign example below).

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Here's the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing. Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
	
	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
	// ...
	
}

Here's the implementation of the Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
	
}

Finally, we have to inject the Feign client bean into the REST controller. Now, we may call the methods defined inside EmployeeClient, which is equivalent to calling REST endpoints.

@RestController
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Autowired
	DepartmentRepository repository;
	@Autowired
	EmployeeClient employeeClient;
	
	// ...
	
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganizationId(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}
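
As mentioned earlier, a @LoadBalanced RestTemplate can be used instead of OpenFeign. Here is a minimal sketch of that approach, calling employee-service by its Kubernetes service name; the class names and the local Employee model are illustrative assumptions.

import java.util.Arrays;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
class RestTemplateConfig {

	// Ribbon intercepts calls made through this RestTemplate and resolves the
	// "employee" host name against the endpoints of the Kubernetes service
	@LoadBalanced
	@Bean
	RestTemplate restTemplate() {
		return new RestTemplate();
	}

}

@Service
class EmployeeRestClient {

	@Autowired
	RestTemplate restTemplate;

	// Equivalent of EmployeeClient.findByDepartment(...) shown above
	List<Employee> findByDepartment(String departmentId) {
		Employee[] employees = restTemplate.getForObject("http://employee/department/{departmentId}",
				Employee[].class, departmentId);
		return Arrays.asList(employees);
	}

}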

5. Building API gateway using Kubernetes Ingress

An Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture the ingress plays the role of an API gateway. To create it we should first prepare a YAML descriptor file. The descriptor should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration visible above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

For testing this solution locally we have to insert a mapping between the IP address and the hostname set in the ingress definition into the hosts file, as shown below. After that we can access services through the ingress using the defined hostname, for example: http://microservices.info/employee.

192.168.99.100 microservices.info

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.
micro-kube-2

6. Enabling API specification on gateway using Swagger2

Ok, what if we would like to expose a single Swagger documentation for all microservices deployed on Kubernetes? Well, here things are getting complicated… We could run a container with Swagger UI and map all the paths exposed by the ingress manually, but it is rather not a good solution…
In that case we can use Spring Cloud Kubernetes Ribbon one more time – this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API.
Here's the list of dependencies used in my gateway-service project.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed on the cluster. We would like to display documentation only for our three microservices. That's why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get route addresses from Kubernetes discovery, and configure them as Swagger resources as shown below.

@Configuration
public class GatewayApi {

	@Autowired
	ZuulProperties properties;

	@Primary
	@Bean
	public SwaggerResourcesProvider swaggerResourcesProvider() {
		return () -> {
			List<SwaggerResource> resources = new ArrayList<>();
			properties.getRoutes().values().stream()
					.forEach(route -> resources.add(createResource(route.getId(), "2.0")));
			return resources;
		};
	}

	private SwaggerResource createResource(String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(location);
		swaggerResource.setLocation("/" + location + "/v2/api-docs");
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}
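
The gateway application itself would typically just enable Zuul, the discovery client and Swagger. Here is a minimal sketch of such a main class; the class name is an assumption.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

import springfox.documentation.swagger2.annotations.EnableSwagger2;

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
@EnableSwagger2
public class GatewayApplication {

	public static void main(String[] args) {
		SpringApplication.run(GatewayApplication.class, args);
	}

}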

The gateway-service application should be deployed on the cluster the same as the other applications. You can check the list of running services by executing the command kubectl get svc. The Swagger documentation is available at http://192.168.99.100:31237/swagger-ui.html.
micro-kube-3

Conclusion

I'm actually rooting for the Spring Cloud Kubernetes project, which is still at the incubation stage. Kubernetes' popularity as a platform has been growing rapidly during the last months, but it still has some weaknesses. One of them is inter-service communication. Kubernetes doesn't give us many mechanisms out-of-the-box that allow configuring more advanced rules. This is a reason for creating service mesh frameworks for Kubernetes like Istio or Linkerd. While these projects are still relatively new solutions, Spring Cloud is a stable, opinionated framework. Why not use it to provide service discovery, inter-service communication or load balancing? Thanks to Spring Cloud Kubernetes it is possible.

Intro to Blockchain with Ethereum, Web3j and Spring Boot: Smart Contracts

I have already provided a quick introduction to building Spring Boot applications with Ethereum and web3j in one of my latest articles, Introduction to Blockchain with Java using Ethereum, web3j and Spring Boot. That article has attracted much interest from you, so I decided to describe some more advanced aspects related to Ethereum and web3j. Today I'm going to show how you can implement Ethereum smart contracts in your application. First, let's define what exactly a smart contract is.

A smart contract is just a program that is executed on the EVM (Ethereum Virtual Machine). Each contract contains a collection of code (functions) and data. It has an address in the Ethereum blockchain, can interact with other contracts, make decisions, store data, and send ether to others. Ethereum smart contracts are usually written in a language named Solidity, which is a statically typed high-level language. Every contract needs to be compiled. After that you can generate source code for your application based on the compiled binaries. The Web3j library provides tools dedicated for that. Before we proceed to the source code let's discuss the architecture of our sample system.

It consists of two independent applications: contract-service and transaction-service. Most of the business logic is performed by the contract-service application. It provides methods for creating smart wallets, deploying smart contracts on Ethereum and calling contract functions. The transaction-service application is responsible only for performing a transaction between a third party and the owner of the contract. It gets the owner's account by calling an endpoint exposed by contract-service. The contract-service application observes transactions performed on the Ethereum node. If a transaction is related to the contract owner's account, the application calls the function responsible for transferring funds to the contract receiver's account on all contracts signed by this owner. Here's the diagram that illustrates the process described above.

blockchain-contract

1. Building a smart contract with Solidity

The most popular tool for creating smart contracts on Ethereum is Solidity. Solidity is a contract-oriented, high-level language for implementing smart contracts. It was influenced by C++, Python and JavaScript and is designed to target the Ethereum Virtual Machine (EVM). It is statically typed, and supports inheritance, libraries and complex user-defined types among other features. For more information about the language you should refer to the Solidity documentation available at http://solidity.readthedocs.io/.

Our main goal in this article is just to build a simple contract, compile it and generate the required source code. That's why I don't want to go into the exact implementation details of contracts written in Solidity. Here's the implementation of a contract responsible for calculating a fee for an incoming transaction. On the basis of this calculation it deposits funds on the transaction owner's account and withdraws funds from the sender's account. This contract is signed between two users. Every one of them has its own smart wallet secured by their credentials. The understanding of this simple contract is very important, so let's analyze it line after line.

Each contract is described by a percentage of the transaction which goes to the receiver's account (1) and the receiver's account address (2). The first two lines of the contract declare variables for storing these parameters: fee of Solidity type uint, and receiver of type address. Both these values are initialized inside the contract's constructor (5). The parameter fee indicates the percentage fee of the transaction that is withdrawn from the sender's account and deposited on the receiver's account. The line mapping (address => uint) public balances maps addresses of all balances to unsigned integers (3). We have also defined the event Sent, which is emitted after every transaction within the contract (4). The function getReceiverBalance returns the receiver's account balance (6). Finally, there is a function sendTrx(...) that can be called by an external client (7). It is responsible for performing the withdrawal and deposit operations based on the contract's percentage fee and the transaction amount. It requires a little more attention. First, it needs the payable modifier to be able to transfer funds between Ethereum accounts. After that, the transaction amount can be read from the msg.value parameter. Then, we call the function send on the receiver address variable with the given amount in Wei, and save this value on the contract's balance. Additionally, we may emit an event that can be received by the client application.

pragma solidity ^0.4.21;

contract TransactionFee {

    // (1)
    uint public fee;
    // (2)
    address public receiver;
    // (3)
    mapping (address => uint) public balances;
    // (4)
    event Sent(address from, address to, uint amount, bool sent);

    // (5)
    constructor(address _receiver, uint _fee) public {
        receiver = _receiver;
        fee = _fee;
    }

    // (6)
    function getReceiverBalance() public view returns(uint) {
        return receiver.balance;
    }

    // (7)
    function sendTrx() public payable {
        uint value = msg.value * fee / 100;
        bool sent = receiver.send(value);
        balances[receiver] += (value);
        emit Sent(msg.sender, receiver, value, sent);
    }

}

Once we have created the contract, we have to compile it and generate source code that can be used inside our application to deploy the contract and call its functions. For just a quick check you can use the Solidity compiler available online at https://remix.ethereum.org.

2. Compiling contract and generating source code

Solidity provides up-to-date Docker builds for its compiler. Released versions are tagged with stable, while unstable changes from the development branch are tagged with nightly. However, that Docker image contains only the compiler executable, so we have to mount a persistent volume with the input Solidity contract file. Assuming that it is available under the directory /home/docker on our Docker machine, we can compile it using the following command. This command creates two files: a binary .bin file, which is the smart contract code in a format the EVM can interpret, and an application binary interface .abi file, which defines the smart contract methods.

$ docker run --rm -v /home/docker:/build ethereum/solc:stable /build/TransactionFee.sol --bin --abi --optimize -o /build

The compilation output files are available under /build inside the container, and are persisted inside the /home/docker directory. The container is removed after compilation, because it is no longer needed. We can generate source code from the compiled contract using the executable file provided together with the Web3j library. It is available under the directory ${WEB3J_HOME}/bin. When generating source code using Web3j we should pass the location of the .bin and .abi files, then set the target package name and directory.

$ web3j solidity generate /build/transactionfee.bin /build/transactionfee.abi -p pl.piomin.services.contract.model -o src/main/java/

The Web3j executable generates a Java source file named after the Solidity contract inside the given package. Here are the most important fragments of the generated source file.

public class Transactionfee extends Contract {
    private static final String BINARY = "608060405234801561..."
    public static final String FUNC_GETRECEIVERBALANCE = "getReceiverBalance";
    public static final String FUNC_BALANCES = "balances";
    public static final String FUNC_SENDTRX = "sendTrx";
    public static final String FUNC_FEE = "fee";
    public static final String FUNC_RECEIVER = "receiver";

    // ...

    protected Transactionfee(String contractAddress, Web3j web3j, TransactionManager transactionManager, BigInteger gasPrice, BigInteger gasLimit) {
        super(BINARY, contractAddress, web3j, transactionManager, gasPrice, gasLimit);
    }

    public RemoteCall getReceiverBalance() {
        final Function function = new Function(FUNC_GETRECEIVERBALANCE,
                Arrays.asList(),
                Arrays.asList(new TypeReference() {}));
        return executeRemoteCallSingleValueReturn(function, BigInteger.class);
    }

    public RemoteCall balances(String param0) {
        final Function function = new Function(FUNC_BALANCES,
                Arrays.asList(new org.web3j.abi.datatypes.Address(param0)),
                Arrays.asList(new TypeReference() {}));
        return executeRemoteCallSingleValueReturn(function, BigInteger.class);
    }

    public RemoteCall sendTrx(BigInteger weiValue) {
        final Function function = new Function(
                FUNC_SENDTRX,
                Arrays.asList(),
                Collections.emptyList());
        return executeRemoteCallTransaction(function, weiValue);
    }

    public RemoteCall fee() {
        final Function function = new Function(FUNC_FEE,
                Arrays.asList(),
                Arrays.asList(new TypeReference() {}));
        return executeRemoteCallSingleValueReturn(function, BigInteger.class);
    }

    public RemoteCall receiver() {
        final Function function = new Function(FUNC_RECEIVER,
                Arrays.asList(),
                Arrays.<TypeReference<?>>asList(new TypeReference<Address>() {}));
        return executeRemoteCallSingleValueReturn(function, String.class);
    }

    public static RemoteCall deploy(Web3j web3j, Credentials credentials, BigInteger gasPrice, BigInteger gasLimit, String _receiver, BigInteger _fee) {
        String encodedConstructor = FunctionEncoder.encodeConstructor(Arrays.asList(new org.web3j.abi.datatypes.Address(_receiver),
                new org.web3j.abi.datatypes.generated.Uint256(_fee)));
        return deployRemoteCall(Transactionfee.class, web3j, credentials, gasPrice, gasLimit, BINARY, encodedConstructor);
    }

    public static RemoteCall deploy(Web3j web3j, TransactionManager transactionManager, BigInteger gasPrice, BigInteger gasLimit, String _receiver, BigInteger _fee) {
        String encodedConstructor = FunctionEncoder.encodeConstructor(Arrays.asList(new org.web3j.abi.datatypes.Address(_receiver),
                new org.web3j.abi.datatypes.generated.Uint256(_fee)));
        return deployRemoteCall(Transactionfee.class, web3j, transactionManager, gasPrice, gasLimit, BINARY, encodedConstructor);
    }

    // ...

    public Observable sentEventObservable(DefaultBlockParameter startBlock, DefaultBlockParameter endBlock) {
        EthFilter filter = new EthFilter(startBlock, endBlock, getContractAddress());
        filter.addSingleTopic(EventEncoder.encode(SENT_EVENT));
        return sentEventObservable(filter);
    }

    public static Transactionfee load(String contractAddress, Web3j web3j, Credentials credentials, BigInteger gasPrice, BigInteger gasLimit) {
        return new Transactionfee(contractAddress, web3j, credentials, gasPrice, gasLimit);
    }

    public static Transactionfee load(String contractAddress, Web3j web3j, TransactionManager transactionManager, BigInteger gasPrice, BigInteger gasLimit) {
        return new Transactionfee(contractAddress, web3j, transactionManager, gasPrice, gasLimit);
    }

    public static class SentEventResponse {
        public Log log;
        public String from;
        public String to;
        public BigInteger amount;
        public Boolean sent;
    }
}

3. Deploying contract

Once we have successfully generated the Java object representing the contract inside our application, we may proceed to the application development. We will begin with contract-service. First, we will create a smart wallet with credentials and sufficient funds for signing contracts as an owner. The following fragment of code is responsible for that, and is invoked just after the application boots. You can also see here the implementation of the HTTP GET method responsible for returning the owner's account address.

@PostConstruct
public void init() throws IOException, CipherException, NoSuchAlgorithmException, NoSuchProviderException, InvalidAlgorithmParameterException {
	String file = WalletUtils.generateLightNewWalletFile("piot123", null);
	credentials = WalletUtils.loadCredentials("piot123", file);
	LOGGER.info("Credentials created: file={}, address={}", file, credentials.getAddress());
	EthCoinbase coinbase = web3j.ethCoinbase().send();
	EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(coinbase.getAddress(), DefaultBlockParameterName.LATEST).send();
	Transaction transaction = Transaction.createEtherTransaction(coinbase.getAddress(), transactionCount.getTransactionCount(), BigInteger.valueOf(20_000_000_000L), BigInteger.valueOf(21_000), credentials.getAddress(),BigInteger.valueOf(25_000_000_000_000_000L));
	web3j.ethSendTransaction(transaction).send();
	EthGetBalance balance = web3j.ethGetBalance(credentials.getAddress(), DefaultBlockParameterName.LATEST).send();
	LOGGER.info("Balance: {}", balance.getBalance().longValue());
}

@GetMapping("/owner")
public String getOwnerAccount() {
	return credentials.getAddress();
}

The contract-service application exposes some endpoints that can be called by an external client or by the second application in our sample system – transaction-service. The following implementation of the POST /contract method performs two actions. First, it creates a new smart wallet with credentials. Then it uses those credentials to sign a smart contract with the address defined in the previous step. To sign a new contract you have to call the method deploy from the class generated from the Solidity definition – Transactionfee. It is responsible for deploying a new instance of the contract on the Ethereum node.

private List<String> contracts = new ArrayList<>();

@PostMapping
public Contract createContract(@RequestBody Contract newContract) throws Exception {
	String file = WalletUtils.generateLightNewWalletFile("piot123", null);
	Credentials receiverCredentials = WalletUtils.loadCredentials("piot123", file);
	LOGGER.info("Credentials created: file={}, address={}", file, credentials.getAddress());
	Transactionfee2 contract = Transactionfee2.deploy(web3j, credentials, GAS_PRICE, GAS_LIMIT, receiverCredentials.getAddress(), BigInteger.valueOf(newContract.getFee())).send();
	newContract.setReceiver(receiverCredentials.getAddress());
	newContract.setAddress(contract.getContractAddress());
	contracts.add(contract.getContractAddress());
	LOGGER.info("New contract deployed: address={}", contract.getContractAddress());
	Optional<TransactionReceipt> tr = contract.getTransactionReceipt();
	if (tr.isPresent()) {
		LOGGER.info("Transaction receipt: from={}, to={}, gas={}", tr.get().getFrom(), tr.get().getTo(), tr.get().getGasUsed().intValue());
	}
	return newContract;
}
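
The Contract object used above is a simple DTO. Here is a minimal sketch of it, with field names inferred from the JSON responses shown in the test scenario below; the exact types are assumptions.

public class Contract {

	private int fee;          // percentage of each transaction transferred to the receiver
	private String receiver;  // receiver's account address
	private String address;   // address of the deployed contract on the blockchain

	public int getFee() {
		return fee;
	}

	public void setFee(int fee) {
		this.fee = fee;
	}

	public String getReceiver() {
		return receiver;
	}

	public void setReceiver(String receiver) {
		this.receiver = receiver;
	}

	public String getAddress() {
		return address;
	}

	public void setAddress(String address) {
		this.address = address;
	}

}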

Every contract deployed on Ethereum has its own unique address. The unique address of every created contract is stored by the application. Then the application is able to load all existing contracts using those addresses. The following method is responsible for executing the method sendTrx on all stored contracts.

public void processContracts(long transactionAmount) {
	contracts.forEach(it -> {
		Transactionfee contract = Transactionfee.load(it, web3j, credentials, GAS_PRICE, GAS_LIMIT);
		try {
			TransactionReceipt tr = contract.sendTrx(BigInteger.valueOf(transactionAmount)).send();
			LOGGER.info("Transaction receipt: from={}, to={}, gas={}", tr.getFrom(), tr.getTo(), tr.getGasUsed().intValue());
			LOGGER.info("Get receiver: {}", contract.getReceiverBalance().send().longValue());
			EthFilter filter = new EthFilter(DefaultBlockParameterName.EARLIEST, DefaultBlockParameterName.LATEST, contract.getContractAddress());
			web3j.ethLogObservable(filter).subscribe(log -> {
				LOGGER.info("Log: {}", log.getData());
			});
		} catch (Exception e) {
			LOGGER.error("Error during contract execution", e);
		}
	});
}

The contract-service application listens for incoming transactions on the Ethereum node that have been sent by transaction-service. If the target account of a transaction is equal to the contract owner's account, the transaction is processed.

@Autowired
Web3j web3j;
@Autowired
ContractService service;

@PostConstruct
public void listen() {
	web3j.transactionObservable().subscribe(tx -> {
		if (tx.getTo() != null && tx.getTo().equals(service.getOwnerAccount())) {
			LOGGER.info("New tx: id={}, block={}, from={}, to={}, value={}", tx.getHash(), tx.getBlockHash(), tx.getFrom(), tx.getTo(), tx.getValue().intValue());
			service.processContracts(tx.getValue().longValue());
		} else {
			LOGGER.info("Not matched: id={}, to={}", tx.getHash(), tx.getTo());
		}
	});
}

Here's the source code from transaction-service responsible for transferring funds from a third-party account to the contract owner's account.

@Value("${contract-service.url}")
String url;
@Autowired
Web3j web3j;
@Autowired
RestTemplate template;
Credentials credentials;

@PostMapping
public String performTransaction(@RequestBody TransactionRequest request) throws Exception {
	EthAccounts accounts = web3j.ethAccounts().send();
	String owner = template.getForObject(url, String.class);
	EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(accounts.getAccounts().get(request.getFromId()), DefaultBlockParameterName.LATEST).send();
	Transaction transaction = Transaction.createEtherTransaction(accounts.getAccounts().get(request.getFromId()), transactionCount.getTransactionCount(), GAS_PRICE, GAS_LIMIT, owner, BigInteger.valueOf(request.getAmount()));
	EthSendTransaction response = web3j.ethSendTransaction(transaction).send();
	if (response.getError() != null) {
		LOGGER.error("Transaction error: {}", response.getError().getMessage());
		return "ERR";
	}
	LOGGER.info("Transaction: {}", response.getResult());
	EthGetTransactionReceipt receipt = web3j.ethGetTransactionReceipt(response.getTransactionHash()).send();
	if (receipt.getTransactionReceipt().isPresent()) {
		TransactionReceipt r = receipt.getTransactionReceipt().get();
		LOGGER.info("Tx receipt: from={}, to={}, gas={}, cumulativeGas={}", r.getFrom(), r.getTo(), r.getGasUsed().intValue(), r.getCumulativeGasUsed().intValue());
	}
	EthGetBalance balance = web3j.ethGetBalance(accounts.getAccounts().get(request.getFromId()), DefaultBlockParameterName.LATEST).send();
	LOGGER.info("Balance: address={}, amount={}", accounts.getAccounts().get(request.getFromId()), balance.getBalance().longValue());
	balance = web3j.ethGetBalance(owner, DefaultBlockParameterName.LATEST).send();
	LOGGER.info("Balance: address={}, amount={}", owner, balance.getBalance().longValue());
	return response.getTransactionHash();
}
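
The TransactionRequest object is also a plain DTO. Here is a minimal sketch, with field names taken from the JSON used in the test scenario below; the exact types are assumptions.

public class TransactionRequest {

	private long amount; // transaction amount in Wei
	private int fromId;  // index of the source account (eth.accounts[fromId])

	public long getAmount() {
		return amount;
	}

	public void setAmount(long amount) {
		this.amount = amount;
	}

	public int getFromId() {
		return fromId;
	}

	public void setFromId(int fromId) {
		this.fromId = fromId;
	}

}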

4. Test scenario

To run the test scenario we need to have the following launched:

  • An Ethereum node running in development mode in a Docker container
  • The Ethereum Geth console client in a Docker container
  • An instance of the contract-service application, by default available on port 8090
  • An instance of the transaction-service application, by default available on port 8091

Instructions on how to run the Ethereum node and the Geth client using Docker containers are available in my previous article about blockchain: Introduction to Blockchain with Java using Ethereum, web3j and Spring Boot.

Before starting the sample applications we should create at least one test account on the Ethereum node. To achieve it we have to execute the personal.newAccount Geth command as shown below.

blockchain-contract-1

After startup, the transaction-service application transfers some funds from the coinbase account to all other existing accounts.

blockchain-contract-2
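
A minimal sketch of what such startup logic inside transaction-service might look like, based on the web3j calls already shown above; the transferred amount, gas values and class name are illustrative assumptions.

import java.io.IOException;
import java.math.BigInteger;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.core.methods.request.Transaction;
import org.web3j.protocol.core.methods.response.EthAccounts;
import org.web3j.protocol.core.methods.response.EthCoinbase;
import org.web3j.protocol.core.methods.response.EthGetTransactionCount;

@Component
public class AccountFunding {

	private static final BigInteger GAS_PRICE = BigInteger.valueOf(20_000_000_000L);
	private static final BigInteger GAS_LIMIT = BigInteger.valueOf(21_000);

	@Autowired
	Web3j web3j;

	// Transfers an arbitrary amount of ether from the coinbase account
	// to every other account available on the node
	@PostConstruct
	public void init() throws IOException {
		EthCoinbase coinbase = web3j.ethCoinbase().send();
		EthAccounts accounts = web3j.ethAccounts().send();
		for (String account : accounts.getAccounts()) {
			if (account.equals(coinbase.getAddress()))
				continue;
			EthGetTransactionCount txCount = web3j
					.ethGetTransactionCount(coinbase.getAddress(), DefaultBlockParameterName.LATEST).send();
			Transaction tx = Transaction.createEtherTransaction(coinbase.getAddress(),
					txCount.getTransactionCount(), GAS_PRICE, GAS_LIMIT, account,
					BigInteger.valueOf(100_000_000_000_000_000L)); // 0.1 ether, an arbitrary amount
			web3j.ethSendTransaction(tx).send();
		}
	}

}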

The next step is to create some contracts using the owner account created automatically by contract-service on startup. You should call the POST /contract method with the fee parameter, which specifies the percentage of the transaction amount transferred from the contract owner's account to the contract receiver's account. Using the following commands I have deployed two contracts with 10% and 5%. It means that 10% and 5% of each transaction sent to the owner's account by a third-party user is transferred to the accounts generated by the POST method. The address of the account created by the POST method is returned in the response in the receiver field.

curl -X POST -H "Content-Type: application/json" -d '{"fee":10}' http://localhost:8090/contract
{"fee": 10,"receiver": "0x864ef9931c2690efcc6a773760237c4b09f40e65","address": "0xa6205a746ae0858fa22d6451b794cc977faa507c"}
curl -X POST -H "Content-Type: application/json" -d '{"fee":5}' http://localhost:8090/contract
{"fee": 5,"receiver": "0x098898594d7acd1481324af779e431ab87a3155d","address": "0x9c64d6b0fc01ee055e114a528fb5ad853843cde3"}

If the contracts have been successfully deployed, the last thing to do is to send a transaction by calling the POST /transaction endpoint exposed by transaction-service. The owner account is automatically retrieved from contract-service. You have to set the transaction amount and the source account index (meaning eth.accounts[index]).

curl -X POST -H "Content-Type: application/json" -d '{"amount":1000000,"fromId":1}' http://localhost:8091/transaction

Ok, that's finally it. Now the transaction is received by contract-service, which executes the function sendTrx(...) on all defined contracts. As a result, 10% and 5% of that transaction amount go to the contract receivers.

blockchain-contract-3

The source code of the sample applications is available in the repository sample-spring-blockchain-contract (https://github.com/piomin/sample-spring-blockchain-contract.git). Enjoy! 🙂

 

Spring REST Docs versus SpringFox Swagger for API documentation

Recently, I have come across some articles and mentions about Spring REST Docs, where it has been presented as a better alternative to traditional Swagger docs. Until now, I was always using Swagger for building API documentation, so I decided to try Spring REST Docs. You may even read on the main page of that Spring project (https://spring.io/projects/spring-restdocs) some references to Swagger, for example: "This approach frees you from the limitations of the documentation produced by tools like Swagger". Are you interested in building API documentation using Spring REST Docs? Let's take a closer look at that project!

The first difference in comparison to Swagger is a test-driven approach to generating API documentation. Thanks to that, Spring REST Docs ensures that the generated documentation always accurately matches the actual behavior of the API. When using the SpringFox Swagger library you just need to enable it for the project and provide some configuration to make it work according to your expectations. I have already described the usage of Swagger 2 for automatically building API documentation for Spring Boot based applications in my two previous articles:

The articles mentioned above describe in detail how to use SpringFox Swagger in your Spring Boot application to automatically generate API documentation based on the source code. Here I'll give you only a short introduction to that technology, to easily point out the differences between the usage of Swagger2 and Spring REST Docs.

1. Using Swagger2 with Spring Boot

To enable SpringFox library for your application you need to include the following dependencies to pom.xml.

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>

Then you should annotate the main or configuration class with @EnableSwagger2. You can also customize the behaviour of the SpringFox library by declaring a Docket bean.

@Bean
public Docket swaggerEmployeeApi() {
	return new Docket(DocumentationType.SWAGGER_2)
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.employee.controller"))
			.paths(PathSelectors.any())
		.build()
		.apiInfo(new ApiInfoBuilder().version("1.0").title("Employee API").description("Documentation Employee API v1.0").build());
}
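
For completeness, here is a minimal sketch of such a configuration class combining @EnableSwagger2 with a Docket bean; the class name and selectors are illustrative.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

	// Exposes Swagger documentation for every handler in the application
	@Bean
	public Docket api() {
		return new Docket(DocumentationType.SWAGGER_2)
				.select()
					.apis(RequestHandlerSelectors.any())
					.paths(PathSelectors.any())
				.build();
	}

}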

Now, after running the application the documentation is available under context path /v2/api-docs. You can also display it in your web browser using Swagger UI available at site /swagger-ui.html.

spring-cloud-3
It looks easy, doesn't it? Let's see how to do this with Spring REST Docs.

2. Using Asciidoctor with Spring Boot

There are some other differences between Spring REST Docs and SpringFox Swagger. By default, Spring REST Docs uses Asciidoctor. Asciidoctor processes plain text and produces HTML, styled and laid out to suit your needs. If you prefer, Spring REST Docs can also be configured to use Markdown. This really distinguishes it from Swagger, which uses its own notation called the OpenAPI Specification.
Spring REST Docs makes use of snippets produced by tests written with Spring MVC's test framework, Spring WebFlux's WebTestClient or REST Assured 3. I'll show you an example based on Spring MVC.
I suggest you begin by creating a base Asciidoc file. It should be placed in the src/main/asciidoc directory in your application source code. I don't know if you are familiar with the Asciidoctor notation, but it is really intuitive. The sample visible below shows two important things. First we'll display the version of the project taken from pom.xml. Then we'll include the snippets generated during the JUnit tests by declaring a macro called operation containing the document name and a list of snippets. We can choose between such snippets as curl-request, http-request, http-response, httpie-request, links, request-body, request-fields, response-body, response-fields or path-parameters. The document name is determined by the name of the test method in our JUnit test class.

= RESTful Employee API Specification
{project-version}
:doctype: book

== Add a new person

A `POST` request is used to add a new person

operation::add-person[snippets='http-request,request-fields,http-response']

== Find a person by id

A `GET` request is used to find a new person by id

operation::find-person-by-id[snippets='http-request,path-parameters,http-response,response-fields']

The source code fragment with Asciidoc notation is just a template. We would like to generate an HTML file, which prettily displays all our automatically generated stuff. To achieve it we should enable the asciidoctor-maven-plugin in the project's pom.xml. In order to display the Maven project version we need to pass it to the Asciidoc plugin configuration attributes. We also need to add the spring-restdocs-asciidoctor dependency to that plugin.

<plugin>
	<groupId>org.asciidoctor</groupId>
	<artifactId>asciidoctor-maven-plugin</artifactId>
	<version>1.5.6</version>
	<executions>
		<execution>
			<id>generate-docs</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>process-asciidoc</goal>
			</goals>
			<configuration>
				<backend>html</backend>
				<doctype>book</doctype>
				<attributes>
					<project-version>${project.version}</project-version>
				</attributes>
			</configuration>
		</execution>
	</executions>
	<dependencies>
		<dependency>
			<groupId>org.springframework.restdocs</groupId>
			<artifactId>spring-restdocs-asciidoctor</artifactId>
			<version>2.0.0.RELEASE</version>
		</dependency>
	</dependencies>
</plugin>

Ok, the documentation is automatically generated during Maven build from our api.adoc file located inside src/main/asciidoc directory. But we still need to develop JUnit API tests that automatically generate required snippets. Let’s do that in the next step.

3. Generating snippets for Spring MVC

First, we should enable Spring REST Docs for our project. To achieve it we have to include the following dependency.

<dependency>
	<groupId>org.springframework.restdocs</groupId>
	<artifactId>spring-restdocs-mockmvc</artifactId>
	<scope>test</scope>
</dependency>

Now, all we need to do is to implement the JUnit tests. Spring Boot provides an @AutoConfigureRestDocs annotation that allows you to leverage Spring REST Docs in your tests.
In fact, we need to prepare a standard Spring MVC test using the MockMvc bean. I also mocked some methods implemented by EmployeeRepository. Then, I used some static methods provided by Spring REST Docs with support for generating documentation of request and response payloads. The first of those methods is document("{method-name}/",...), which is responsible for generating snippets under the directory target/generated-snippets/{method-name}, where the method name is the name of the test method formatted using kebab-case. I have described all the JSON fields in the requests using the requestFields(...) and responseFields(...) methods.

@RunWith(SpringRunner.class)
@WebMvcTest(EmployeeController.class)
@AutoConfigureRestDocs
public class EmployeeControllerTest {

	@MockBean
	EmployeeRepository repository;
	@Autowired
	MockMvc mockMvc;
	
	private ObjectMapper mapper = new ObjectMapper();

	@Before
	public void setUp() {
		Employee e = new Employee(1L, 1L, "John Smith", 33, "Developer");
		e.setId(1L);
		when(repository.add(Mockito.any(Employee.class))).thenReturn(e);
		when(repository.findById(1L)).thenReturn(e);
	}

	@Test
	public void addPerson() throws JsonProcessingException, Exception {
		Employee employee = new Employee(1L, 1L, "John Smith", 33, "Developer");
		mockMvc.perform(post("/").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(employee)))
			.andExpect(status().isOk())
			.andDo(document("{method-name}/", requestFields(
				fieldWithPath("id").description("Employee id").ignored(),
				fieldWithPath("organizationId").description("Employee's organization id"),
				fieldWithPath("departmentId").description("Employee's department id"),
				fieldWithPath("name").description("Employee's name"),
				fieldWithPath("age").description("Employee's age"),
				fieldWithPath("position").description("Employee's position inside organization")
			)));
	}
	
	@Test
	public void findPersonById() throws JsonProcessingException, Exception {
		this.mockMvc.perform(get("/{id}", 1).accept(MediaType.APPLICATION_JSON))
			.andExpect(status().isOk())
			.andDo(document("{method-name}/", responseFields(
				fieldWithPath("id").description("Employee id"),
				fieldWithPath("organizationId").description("Employee's organization id"),
				fieldWithPath("departmentId").description("Employee's department id"),
				fieldWithPath("name").description("Employee's name"),
				fieldWithPath("age").description("Employee's age"),
				fieldWithPath("position").description("Employee's position inside organization")
			), pathParameters(parameterWithName("id").description("Employee id"))));
	}

}

If you would like to customize some settings of Spring REST Docs you should provide a @TestConfiguration class inside the JUnit test class. In the following code fragment you may see an example of such customization. I overrode the default snippets output directory from index to a test method-specific name, and forced the generation of sample requests and responses using the prettyPrint option (a single parameter per separate line).

@TestConfiguration
static class CustomizationConfiguration implements RestDocsMockMvcConfigurationCustomizer {

	@Override
	public void customize(MockMvcRestDocumentationConfigurer configurer) {
		configurer.operationPreprocessors()
			.withRequestDefaults(prettyPrint())
			.withResponseDefaults(prettyPrint());
	}
	
	@Bean
	public RestDocumentationResultHandler restDocumentation() {
		return MockMvcRestDocumentation.document("{method-name}");
	}
}

Now, if you execute mvn clean install on your project you should see the following structure inside your output directory.
rest-api-docs-3

4. Viewing and publishing API docs

Once we have successfully built our project, the documentation has been generated. We can display HTML file available at target/generated-docs/api.html. It provides the full documentation of our API.

rest-api-docs-1
And the next part…

rest-api-docs-2
You may also want to publish it inside your application's fat JAR file. If you configure the maven-resources-plugin following the example visible below, it will be available under the /static/docs directory inside the JAR.

<plugin>
	<artifactId>maven-resources-plugin</artifactId>
	<executions>
		<execution>
			<id>copy-resources</id>
			<phase>prepare-package</phase>
			<goals>
				<goal>copy-resources</goal>
			</goals>
			<configuration>
				<outputDirectory>
					${project.build.outputDirectory}/static/docs
				</outputDirectory>
				<resources>
					<resource>
						<directory>
							${project.build.directory}/generated-docs
						</directory>
					</resource>
				</resources>
			</configuration>
		</execution>
	</executions>
</plugin>

Conclusion

That's all I wanted to show in this article. The sample service generating documentation using Spring REST Docs is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-new/tree/rest-api-docs/employee-service. I'm not sure that Swagger and Spring REST Docs should be treated as competing solutions. I use Swagger for simple testing of an API on a running application or for exposing a specification that can be used for automated generation of client code. Spring REST Docs is rather used for generating documentation that can be published somewhere, and "is accurate, concise, and well-structured. This documentation then allows your users to get the information they need with a minimum of fuss". I think there is no obstacle to using Spring REST Docs and SpringFox Swagger together in your project in order to provide the most valuable documentation of the API exposed by the application.

Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract

Consumer Driven Contract (CDC) testing is one of the methods that allow you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. Assuming that every microservice is developed by a different team or sometimes even a different vendor, it is important to automate the whole testing process. As usual, we can use a Jenkins server for running contract tests within our Continuous Integration (CI) process.

The sample scenario has been visualized in the picture below. We have one application (person-service) that exposes an API leveraged by three different applications. Each application is implemented by a different development team. Consequently, every application is stored in a separate Git repository and has a dedicated pipeline in Jenkins for building, testing and deploying.

contracts-3 (1)

The source code of the sample applications is available on GitHub in the repository sample-spring-cloud-contract-ci (https://github.com/piomin/sample-spring-cloud-contract-ci.git). I placed all the sample microservices in a single Git repository only to simplify the demo. We will still treat them as separate microservices, developed and built independently.

In this article I used Spring Cloud Contract for the CDC implementation. It is the first-choice solution for JVM applications written in Spring Boot. Contracts can be defined using Groovy or YAML notation. After building on the producer side, Spring Cloud Contract generates a special JAR file with the stubs suffix, which contains all defined contracts and JSON mappings. Such a JAR file can be built on Jenkins and then published on Artifactory. Contract consumers also use the same Artifactory server, so they can use the latest version of the stubs file. Because every application expects a different response from person-service, we have to define three different contracts between person-service and a target consumer.

contracts-1

Let's analyze the sample scenario. Assuming we have performed some changes in the API exposed by person-service and we have modified the contracts on the producer side, we would like to publish them on a shared server. First, we need to verify the contracts against the producer (1), and in case of success publish the artifact with stubs to Artifactory (2). All the pipelines defined for applications that use this contract are able to trigger the build on a new version of the JAR file with stubs (3). Then, the newest version of the contract is verified against the consumer (4). If contract testing fails, the pipeline is able to notify the responsible team about this failure.

contracts-2

1. Pre-requirements

Before implementing and running any sample we need to prepare our environment. We need to launch Jenkins and Artifactory servers on the local machine. The most suitable way for this is through Docker containers. Here are the commands required to run these containers.

$ docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
$ docker run --name jenkins -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

I don't know if you are familiar with tools like Artifactory and Jenkins, but after starting them we need to configure some things. First you need to initialize the Maven repositories for Artifactory. You will be prompted for that just after the first launch. It also automatically adds one remote repository: JCenter Bintray (https://bintray.com/bintray/jcenter), which is enough for our build. Jenkins also comes with a default set of plugins, which you can install just after the first launch (Install suggested plugins). For this demo, you will also have to install the plugin for integration with Artifactory (https://wiki.jenkins.io/display/JENKINS/Artifactory+Plugin). If you need more details about the Jenkins and Artifactory configuration you can refer to my older article How to setup Continuous Delivery environment.

2. Building contracts

We are beginning the contract definition from the producer-side application. The producer exposes only one method, GET /persons/{id}, that returns a Person object. Here are the fields contained by the Person class.

public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	@JsonFormat(pattern = "yyyy-MM-dd")
	private Date birthDate;
	private Gender gender;
	private Contact contact;
	private Address address;
	private String accountNo;

	// ...
}
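
The producer also needs a controller exposing this endpoint. Here is a minimal sketch, assuming the PersonRepository interface that is mocked in the base test class later in this article.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PersonController {

	@Autowired
	PersonRepository repository;

	// The single endpoint that all three consumer contracts are written against
	@GetMapping("/persons/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		return repository.findById(id);
	}

}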

The following picture illustrates which fields of the Person object are used by the consumers. As you see, some of the fields are shared between consumers, while some others are required only by a single consuming application.

contracts-4

Now we can take a look at the contract definition between person-service and bank-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			gender: $(regex('(MALE|FEMALE)')),
			contact: ([
				email: $(regex(email())),
				phoneNo: $(regex('[0-9]{9}$'))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}

For comparison, here's the definition of the contract between person-service and letter-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
	request {
		method 'GET'
		urlPath('/persons/1')
	}
	response {
		status OK()
		body([
			id: 1,
			firstName: 'Piotr',
			lastName: 'Minkowski',
			address: ([
				city: $(regex(alphaNumeric())),
				country: $(regex(alphaNumeric())),
				postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
				houseNo: $(regex(positiveInt())),
				street: $(regex(nonEmpty()))
			])
		])
		headers {
			contentType(applicationJson())
		}
	}
}

3. Implementing tests on the producer side

Ok, we have three different contracts assigned to the single endpoint exposed by person-service. We need to publish them in such a way that they are easily available for consumers. In that case Spring Cloud Contract comes with a handy solution. We may define contracts with different responses for the same request, and then choose the appropriate definition on the consumer side. All those contract definitions will be published within the same JAR file. Because we have three consumers, we define three different contracts placed in the directories bank-consumer, contact-consumer and letter-consumer.

contracts-5

All the contracts will use a single base test class. To achieve it we need to provide a fully qualified name of that class for Spring Cloud Contract Verifier plugin in pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<extensions>true</extensions>
	<configuration>
		<baseClassForTests>pl.piomin.services.person.BasePersonContractTest</baseClassForTests>
	</configuration>
</plugin>

Here's the full definition of the base class for our contract tests. We will mock the repository bean with an answer matching the rules created inside the contract files.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.DEFINED_PORT)
public abstract class BasePersonContractTest {

	@Autowired
	WebApplicationContext context;
	@MockBean
	PersonRepository repository;
	
	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(this.context);
		PersonBuilder builder = new PersonBuilder()
			.withId(1)
			.withFirstName("Piotr")
			.withLastName("Minkowski")
			.withBirthDate(new Date())
			.withAccountNo("1234567890")
			.withGender(Gender.MALE)
			.withPhoneNo("500070935")
			.withCity("Warsaw")
			.withCountry("Poland")
			.withHouseNo(200)
			.withStreet("Al. Jerozolimskie")
			.withEmail("piotr.minkowski@gmail.com")
			.withPostalCode("02-660");
		when(repository.findById(1)).thenReturn(builder.build());
	}
	
}

The Spring Cloud Contract Maven plugin visible above is responsible for generating stubs from the contract definitions. It is executed during the Maven build after running the mvn clean install command. The build is performed on the Jenkins CI server. The Jenkins pipeline is responsible for checking out the remote Git repository, building binaries from the source code, running automated tests and finally publishing the JAR file containing stubs on a remote artifact repository – Artifactory. Here's the Jenkins pipeline created for the contract producer side (person-service).

node {
  withMaven(maven:'M3') {
    stage('Checkout') {
      git url: 'https://github.com/piomin/sample-spring-cloud-contract-ci.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Publish') {
      def server = Artifactory.server 'artifactory'
      def rtMaven = Artifactory.newMavenBuild()
      rtMaven.tool = 'M3'
      rtMaven.resolver server: server, releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot'
      rtMaven.deployer server: server, releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local'
      rtMaven.deployer.artifactDeploymentPatterns.addInclude("*stubs*")
      def buildInfo = rtMaven.run pom: 'person-service/pom.xml', goals: 'clean install'
      rtMaven.deployer.deployArtifacts buildInfo
      server.publishBuildInfo buildInfo
    }
  }
}

We also need to include dependency spring-cloud-starter-contract-verifier to the producer app to enable Spring Cloud Contract Verifier.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

4. Implementing tests on the consumer side

To enable Spring Cloud Contract on the consumer side we need to include artifact spring-cloud-starter-contract-stub-runner to the project dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>

Then, the only thing left is to build a JUnit test, which verifies our contract by calling it through the OpenFeign client. The configuration of that test is provided inside the @AutoConfigureStubRunner annotation. We select the latest version of the person-service stubs artifact by setting + in the version section of the ids parameter. Because we have multiple contracts defined inside person-service, we need to choose the right one for the current service by setting the consumerName parameter. All the contract definitions are downloaded from the Artifactory server, so we set the stubsMode parameter to REMOTE. The address of the Artifactory server has to be set using the repositoryRoot property.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {

	@Autowired
	private PersonClient personClient;
	
	@Test
	public void verifyPerson() {
		Person p = personClient.findPersonById(1);
		Assert.assertNotNull(p);
		Assert.assertEquals(1, p.getId().intValue());
		Assert.assertNotNull(p.getFirstName());
		Assert.assertNotNull(p.getLastName());
		Assert.assertNotNull(p.getAddress());
		Assert.assertNotNull(p.getAddress().getCity());
		Assert.assertNotNull(p.getAddress().getCountry());
		Assert.assertNotNull(p.getAddress().getPostalCode());
		Assert.assertNotNull(p.getAddress().getStreet());
		Assert.assertNotEquals(0, p.getAddress().getHouseNo());
	}
	
}

Here’s Feign client implementation responsible for calling endpoint exposed by person-service

@FeignClient("person-service")
public interface PersonClient {

	@GetMapping("/persons/{id}")
	Person findPersonById(@PathVariable("id") Integer id);
	
}

5. Setup of Continuous Integration process

Ok, we have already defined all the contracts required for our exercise. We have also built a pipeline responsible for building and publishing stubs with contracts on the producer side (person-service). It always publishes the newest version of stubs generated from the source code. Now, our goal is to launch the pipelines defined for the three consumer applications each time new stubs are published to the Artifactory server by the producer pipeline.
The best solution for that would be to trigger a Jenkins build whenever an artifact is deployed. To achieve it we use a Jenkins plugin called URLTrigger, which can be configured to watch for changes under a given URL, in this case a REST API endpoint exposed by Artifactory for the selected repository path.
After installing the URLTrigger plugin we have to enable it for all consumer pipelines. You can configure it to watch for changes in the JSON response returned by the Artifactory File List REST API, which is accessed via the following URI: http://192.168.99.100:8081/artifactory/api/storage/[PATH_TO_FOLDER_OR_REPO]/. The file maven-metadata.xml changes every time you deploy a new version of the application to Artifactory, so we can monitor the change of the response content between the last two polls. The last field that has to be filled in is Schedule. If you set it to * * * * *, the plugin polls for a change every minute.
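
For example, a poll of the person-service folder in that repository could look more or less like this; the path is illustrative and depends on your groupId and artifactId.

$ curl http://192.168.99.100:8081/artifactory/api/storage/libs-snapshot-local/pl/piomin/services/person-service/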

contracts-6

Our three pipelines for consumer applications are ready. The first run finished successfully.

contracts-7

If you have already built the person-service application and published the stubs to Artifactory, you will see the following structure in the libs-snapshot-local repository. I have deployed three different versions of the API exposed by person-service. Each time I publish a new version of the contract, all the dependent pipelines are triggered to verify it.

contracts-8

The JAR file with contracts is published with the stubs classifier.

contracts-9

Spring Cloud Contract Stub Runner tries to find the latest version of contracts.

2018-07-04 11:46:53.273  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is [+] - will try to resolve the latest version
2018-07-04 11:46:54.752  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is [1.3-SNAPSHOT]
2018-07-04 11:46:54.823  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact [pl.piomin.services:person-service:jar:stubs:1.3-SNAPSHOT] to /var/jenkins_home/.m2/repository/pl/piomin/services/person-service/1.3-SNAPSHOT/person-service-1.3-SNAPSHOT-stubs.jar

6. Testing change in contract

Ok, we have already prepared the contracts and configured our CI environment. Now, let’s make a change in the API exposed by person-service. We will just rename one field: accountNo to accountNumber.

contracts-12

This change requires a change in the contract definition created on the producer side. If you modify the field name there, person-service will build successfully and a new version of the contract will be published to Artifactory. Because all the other pipelines listen for changes in the latest version of the JAR files with stubs, their builds will be started automatically. Microservices letter-service and contact-service do not use the accountNo field, so their pipelines will not fail. Only the bank-service pipeline reports a contract error, as shown in the picture below.

contracts-10

Now that you have been notified about the failed verification of the newest contract version between person-service and bank-service, you can make the required change on the consumer side.

contracts-11

Introduction to Blockchain with Java using Ethereum, web3j and Spring Boot

Blockchain has been one of the IT buzzwords of the last few months. The term is related to cryptocurrencies and was coined together with Bitcoin. A blockchain is a decentralized, immutable data structure divided into blocks, which are linked and secured using cryptographic algorithms. Every single block in this structure typically contains a cryptographic hash of the previous block, a timestamp, and transaction data. A blockchain is managed by a peer-to-peer network, and during inter-node communication every new block is validated before being added. That’s a short portion of blockchain theory. In a nutshell, it is a technology which allows us to manage transactions between two parties in a decentralized way. Now, the question is how we can implement it in our system.
Here comes Ethereum. It is a decentralized platform created by Vitalik Buterin that provides a scripting language for developing applications. It is based on ideas from Bitcoin and is driven by a new cryptocurrency called Ether. Today, Ether is the second largest cryptocurrency after Bitcoin. The heart of the Ethereum technology is the EVM (Ethereum Virtual Machine), which can be treated as something similar to the JVM, but running on a network of fully decentralized nodes. To implement Ethereum-based transactions in the Java world we use the web3j library. It is a lightweight, reactive, type-safe Java and Android library for integrating with nodes on Ethereum blockchains. More details can be found on its website: https://web3j.io.

1. Running Ethereum locally

Although there are many articles on the Web about blockchain and Ethereum, it is not easy to find one describing how to run a ready-for-use instance of Ethereum on the local machine. It is worth mentioning that there are two popular Ethereum clients we can use: Geth and Parity. It turns out we can easily run a Geth node locally using a Docker container. By default it connects the node to the Ethereum main network. Alternatively, you can connect it to a test network or the Rinkeby network. But the best option for getting started is just to run it in development mode by passing the --dev parameter to the Docker run command.
Here’s the command that starts the Docker container in development mode and exposes the Ethereum RPC API on port 8545.

$ docker run -d --name ethereum -p 8545:8545 -p 30303:30303 ethereum/client-go --rpc --rpcaddr "0.0.0.0" --rpcapi="db,eth,net,web3,personal" --rpccorsdomain "*" --dev

One really nice thing about running the container in development mode is that you get plenty of Ether on your default test account, so you don’t have to mine any Ether before starting your tests. Great! Now, let’s create some other test accounts and check out a few things. To achieve it we need to run Geth’s interactive JavaScript console inside the Docker container.

$ docker exec -it ethereum geth attach ipc:/tmp/geth.ipc

2. Managing Ethereum node using JavaScript console

After running the JavaScript console you can easily display the default account (coinbase), the list of all available accounts and their balances. Here’s a screenshot illustrating the results for my Ethereum node.
blockchain-1
Now, we have to create some test accounts. We can do it by calling the personal.newAccount(password) function. After creating the required accounts, you can perform some test transactions using the JavaScript console and transfer some funds from the base account to the newly created accounts. Here are the commands used for creating accounts and executing transactions.
blockchain-2
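
For reference, a minimal sketch of those console commands could look as follows; the password and the transferred amount are just illustrative values.

> personal.newAccount("passw0rd")
> eth.accounts
> eth.getBalance(eth.coinbase)
> eth.sendTransaction({from: eth.coinbase, to: eth.accounts[1], value: web3.toWei(1, "ether")})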

3. System architecture

The architecture of our sample system is very simple. I don’t want to complicate anything, just show you how to send a transaction to the Geth node and receive notifications. While transaction-service sends new transactions to the Ethereum node, bonus-service observes the node and listens for incoming transactions. It then sends a bonus to the sender’s account once per 10 transactions received from that account. Here’s the diagram that illustrates the architecture of our sample system.
blockchain-arch

4. Enable Web3j for Spring Boot app

I think we now have clarity on what exactly we want to do, so let’s proceed to the implementation. First, we should include all the dependencies required to use the web3j library inside a Spring Boot application. Fortunately, there is a starter that can be included.

<dependency>
	<groupId>org.web3j</groupId>
	<artifactId>web3j-spring-boot-starter</artifactId>
	<version>1.6.0</version>
</dependency>

Because we are running the Ethereum Geth client in a Docker container, we need to change the auto-configured client address for web3j.

spring:
  application:
    name: transaction-service
server:
  port: ${PORT:8090}
web3j:
  client-address: http://192.168.99.100:8545

5. Building applications

Once we have included the web3j starter in the project dependencies, all we need to do is autowire the Web3j bean. Web3j is responsible for sending the transaction to the Geth client node. It receives a response with the transaction hash if the transaction has been accepted by the node, or an error object if it has been rejected. While creating the transaction object it is important to set the gas limit to at least 21000. If you send a lower value, you will probably receive the error Error: intrinsic gas too low.

@Service
public class BlockchainService {

    private static final Logger LOGGER = LoggerFactory.getLogger(BlockchainService.class);

    @Autowired
    Web3j web3j;

    public BlockchainTransaction process(BlockchainTransaction trx) throws IOException {
        EthAccounts accounts = web3j.ethAccounts().send();
        EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(accounts.getAccounts().get(trx.getFromId()), DefaultBlockParameterName.LATEST).send();
        Transaction transaction = Transaction.createEtherTransaction(accounts.getAccounts().get(trx.getFromId()), transactionCount.getTransactionCount(), BigInteger.valueOf(trx.getValue()), BigInteger.valueOf(21_000), accounts.getAccounts().get(trx.getToId()),BigInteger.valueOf(trx.getValue()));
        EthSendTransaction response = web3j.ethSendTransaction(transaction).send();
        if (response.getError() != null) {
            trx.setAccepted(false);
            return trx;
        }
        trx.setAccepted(true);
        String txHash = response.getTransactionHash();
        LOGGER.info("Tx hash: {}", txHash);
        trx.setId(txHash);
        EthGetTransactionReceipt receipt = web3j.ethGetTransactionReceipt(txHash).send();
        if (receipt.getTransactionReceipt().isPresent()) {
            LOGGER.info("Tx receipt: {}", receipt.getTransactionReceipt().get().getCumulativeGasUsed().intValue());
        }
        return trx;
    }

}

The @Service bean visible above is invoked by the controller. The implementation of the POST method takes a BlockchainTransaction object as a parameter. You can pass the sender id, receiver id, and transaction amount there. The sender and receiver ids are equivalent to the index in the query eth.accounts[index].

@RestController
public class BlockchainController {

    @Autowired
    BlockchainService service;

    @PostMapping("/transaction")
    public BlockchainTransaction execute(@RequestBody BlockchainTransaction transaction) throws NoSuchAlgorithmException, NoSuchProviderException, InvalidAlgorithmParameterException, CipherException, IOException {
        return service.process(transaction);
    }

}

You can send a test transaction by calling the POST method with the following command.

$ curl --header "Content-Type: application/json" --request POST --data '{"fromId":2,"toId":1,"value":3}' http://localhost:8090/transaction

Before sending any transactions you should also unlock the sender’s account.
blockchain-3
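
The unlock itself is a single console call; here is a hedged example with an illustrative password and a 10-minute unlock duration.

> personal.unlockAccount(eth.accounts[2], "passw0rd", 600)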

The bonus-service application listens for transactions processed by the Ethereum node. It subscribes to notifications from the web3j library by calling the web3j.transactionObservable().subscribe(...) method. It returns the amount of the received transaction to the sender’s account once per 10 transactions sent from that address. Here’s the implementation of the observable method inside bonus-service.

@Autowired
Web3j web3j;

@PostConstruct
public void listen() {
	Subscription subscription = web3j.transactionObservable().subscribe(tx -> {
		LOGGER.info("New tx: id={}, block={}, from={}, to={}, value={}", tx.getHash(), tx.getBlockHash(), tx.getFrom(), tx.getTo(), tx.getValue().intValue());
		try {
			EthCoinbase coinbase = web3j.ethCoinbase().send();
			EthGetTransactionCount transactionCount = web3j.ethGetTransactionCount(tx.getFrom(), DefaultBlockParameterName.LATEST).send();
			LOGGER.info("Tx count: {}", transactionCount.getTransactionCount().intValue());
			if (transactionCount.getTransactionCount().intValue() % 10 == 0) {
				EthGetTransactionCount tc = web3j.ethGetTransactionCount(coinbase.getAddress(), DefaultBlockParameterName.LATEST).send();
				Transaction transaction = Transaction.createEtherTransaction(coinbase.getAddress(), tc.getTransactionCount(), tx.getValue(), BigInteger.valueOf(21_000), tx.getFrom(), tx.getValue());
				web3j.ethSendTransaction(transaction).send();
			}
		} catch (IOException e) {
			LOGGER.error("Error getting transactions", e);
		}
	});
	LOGGER.info("Subscribed");
}

Conclusion

Blockchain and cryptocurrencies are not easy topics to start with. Ethereum simplifies the development of applications that use blockchain by providing a complete scripting language. Using the web3j library together with Spring Boot and the Docker image of the Ethereum Geth client lets you quickly start local development of a solution based on blockchain technology. If you would like to try it locally, just clone my repository available on GitHub: https://github.com/piomin/sample-spring-blockchain.git

Building and testing message-driven microservices using Spring Cloud Stream

Spring Boot and Spring Cloud give you a great opportunity to build microservices fast using different styles of communication. You can create synchronous REST microservices based on Spring Cloud Netflix libraries, as shown in one of my previous articles, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. You can create asynchronous, reactive microservices deployed on Netty with the Spring WebFlux project and combine it successfully with some Spring Cloud libraries, as shown in my article Reactive Microservices with Spring WebFlux and Spring Cloud. And finally, you may implement message-driven microservices based on the publish/subscribe model using Spring Cloud Stream and a message broker like Apache Kafka or RabbitMQ. The last of the listed approaches is the main subject of this article. I’m going to show you how to effectively build, scale, run and test messaging microservices based on a RabbitMQ broker.

Architecture

To demonstrate Spring Cloud Stream features we will design a sample system which uses the publish/subscribe model for inter-service communication. We have three microservices: order-service, product-service and account-service. Application order-service exposes an HTTP endpoint that is responsible for processing orders sent to our system. All the incoming orders are processed asynchronously: order-service prepares and sends a message to a RabbitMQ exchange and then responds to the calling client that the request has been accepted for processing. Applications account-service and product-service listen for the order messages incoming to that exchange. Microservice account-service is responsible for checking if there are sufficient funds on the customer’s account to realize the order and then withdrawing cash from that account. Microservice product-service checks if there is a sufficient amount of products in the store and changes the number of available products after processing the order. Both account-service and product-service send an asynchronous response with the status of the operation through a RabbitMQ exchange (this time it is one-to-one communication using a direct exchange). After receiving the response messages, order-service sets the appropriate status of the order and exposes it to the external client through the REST endpoint GET /order/{id}.

If the description of our sample system seems a little unclear, here’s the architecture diagram for clarification.

stream-1

Enabling Spring Cloud Stream

The recommended way to include Spring Cloud Stream in a project is with a dependency management system. Spring Cloud Stream has its own release train, managed independently of the whole Spring Cloud framework. However, if we have declared spring-cloud-dependencies in the Elmhurst.RELEASE version inside the dependencyManagement section, we don’t have to declare anything else in pom.xml. If you prefer to use only the Spring Cloud Stream project, you should define the following section.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-stream-dependencies</artifactId>
      <version>Elmhurst.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

The next step is to add the spring-cloud-stream artifact to the project dependencies. I also recommend including at least the spring-cloud-sleuth library, so that messages are sent with the same traceId as the source request incoming to order-service.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-sleuth</artifactId>
</dependency>

Spring Cloud Stream programming model

To enable connectivity to a message broker for your application, annotate the main class with @EnableBinding. The @EnableBinding annotation takes one or more interfaces as parameters. You may choose between three interfaces provided by Spring Cloud Stream:

  • Sink: This is used for marking a service that receives messages from the inbound channel.
  • Source: This is used for sending messages to the outbound channel.
  • Processor: This can be used in case you need both an inbound channel and an outbound channel, as it extends the Source and Sink interfaces. Because order-service sends messages, as well as receives them, its main class has been annotated with @EnableBinding(Processor.class).

Here’s the main class of order-service that enables Spring Cloud Stream binding.

@SpringBootApplication
@EnableBinding(Processor.class)
public class OrderApplication {
  public static void main(String[] args) {
    new SpringApplicationBuilder(OrderApplication.class).web(true).run(args);
  }
}

Adding message broker

In Spring Cloud Stream nomenclature, the implementation responsible for integrating with a specific message broker is called a binder. By default, Spring Cloud Stream provides binder implementations for Kafka and RabbitMQ. It is able to automatically detect and use a binder found on the classpath. Any middleware-specific settings can be overridden through external configuration properties in the form supported by Spring Boot, such as application arguments, environment variables, or just the application.yml file. To include support for RabbitMQ, which is used in this article as the message broker, you should add the following dependency to the project.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>

Now, our applications need to connect to one shared instance of the RabbitMQ broker. That’s why I run a Docker image with RabbitMQ exposed on the default 5672 port. It also launches a web dashboard available at http://192.168.99.100:15672.

$ docker run -d --name rabbit -p 15672:15672 -p 5672:5672 rabbitmq:management

We need to override the default address of RabbitMQ for every Spring Boot application by setting the property spring.rabbitmq.host to the Docker machine IP 192.168.99.100.

spring:
  rabbitmq:
    host: 192.168.99.100
    port: 5672

Implementing message-driven microservices

Spring Cloud Stream is built on top of the Spring Integration project. Spring Integration extends the Spring programming model to support the well-known Enterprise Integration Patterns (EIP). EIP defines a number of components that are typically used for orchestration in distributed systems. You have probably heard about patterns such as message channels, routers, aggregators, or endpoints. Let’s proceed to the implementation.
We begin with order-service, which is responsible for accepting orders, publishing them on a shared topic and then collecting asynchronous responses from the downstream services. Here’s the @Service, which builds a message and publishes it to the remote topic using the Source bean.

@Service
public class OrderSender {
  @Autowired
  private Source source;

  public boolean send(Order order) {
    return this.source.output().send(MessageBuilder.withPayload(order).build());
  }
}

That @Service is called by the controller, which exposes HTTP endpoints for submitting new orders and getting an order with its status by id.

@RestController
public class OrderController {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderController.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	OrderRepository repository;
	@Autowired
	OrderSender sender;	

	@PostMapping
	public Order process(@RequestBody Order order) throws JsonProcessingException {
		Order o = repository.add(order);
		LOGGER.info("Order saved: {}", mapper.writeValueAsString(order));
		boolean isSent = sender.send(o);
		LOGGER.info("Order sent: {}", mapper.writeValueAsString(Collections.singletonMap("isSent", isSent)));
		return o;
	}

	@GetMapping("/{id}")
	public Order findById(@PathVariable("id") Long id) {
		return repository.findById(id);
	}

}

Now, let’s take a closer look at the consumer side. The message sent by the OrderSender bean from order-service is received by account-service and product-service. To receive the message from the topic exchange, we just have to annotate the method that takes the Order object as a parameter with @StreamListener. We also have to define the target channel for the listener, which in this case is Processor.INPUT.

@SpringBootApplication
@EnableBinding(Processor.class)
public class OrderApplication {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderApplication.class);

	// ObjectMapper used below for logging the received order as JSON
	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	OrderService service;

	public static void main(String[] args) {
		new SpringApplicationBuilder(OrderApplication.class).web(true).run(args);
	}

	@StreamListener(Processor.INPUT)
	public void receiveOrder(Order order) throws JsonProcessingException {
		LOGGER.info("Order received: {}", mapper.writeValueAsString(order));
		service.process(order);
	}

}

The received order is then processed by the AccountService bean. The order may be accepted or rejected by account-service depending on whether there are sufficient funds on the customer’s account to realize it. The response with the acceptance status is sent back to order-service via the output channel, using the OrderSender bean.

@Service
public class AccountService {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountService.class);

	private ObjectMapper mapper = new ObjectMapper();

	@Autowired
	AccountRepository accountRepository;
	@Autowired
	OrderSender orderSender;

	public void process(final Order order) throws JsonProcessingException {
		LOGGER.info("Order processed: {}", mapper.writeValueAsString(order));
		List<Account> accounts = accountRepository.findByCustomer(order.getCustomerId());
		Account account = accounts.get(0);
		LOGGER.info("Account found: {}", mapper.writeValueAsString(account));
		if (order.getPrice() <= account.getBalance()) {
			order.setStatus(OrderStatus.ACCEPTED);
			account.setBalance(account.getBalance() - order.getPrice());
		} else {
			order.setStatus(OrderStatus.REJECTED);
		}
		orderSender.send(order);
		LOGGER.info("Order response sent: {}", mapper.writeValueAsString(order));
	}

}

The last step is configuration, provided inside the application.yml file. We have to properly define the destinations for the channels. While order-service assigns the orders-out destination to its output channel and the orders-in destination to its input channel, account-service and product-service do the opposite. It is logical, because a message sent by order-service via its output destination is received by the consuming services via their input destinations, but it is still the same destination on the shared broker exchange. Here are the configuration settings of order-service.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-out
        input:
          destination: orders-in
      rabbit:
        bindings:
          input:
            consumer:
              exchangeType: direct

Here’s the configuration provided for account-service and product-service.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-in
        input:
          destination: orders-out
      rabbit:
        bindings:
          output:
            producer:
              exchangeType: direct
              routingKeyExpression: '"#"'

Finally, you can run our sample microservices. For now, we just need to run a single instance of each microservice. You can easily generate some test requests by running the JUnit test class OrderControllerTest provided in my source code repository inside the order-service module. This case is simple. In the next section we will study a more advanced sample with multiple running instances of the consuming services.

Scaling up

To scale up our Spring Cloud Stream applications we just need to launch additional instances of each microservice. They will still listen for the incoming messages on the same topic exchange as the currently running instances. After adding one more instance of account-service and product-service we may send a test order. The result of that test won’t be satisfactory for us… Why? A single order is received by all the running instances of every microservice. This is exactly how topic exchanges work: a message sent to the topic is received by all consumers listening on that topic. Fortunately, Spring Cloud Stream is able to solve that problem by providing a mechanism called a consumer group. It guarantees that only one of the instances handles a given message, because they are placed in a competing consumer relationship. The transition to the consumer group mechanism when running multiple instances of the service is visualized in the following figure.

stream-2

Configuring the consumer group mechanism is not very difficult. We just have to set the group parameter with the name of the group for a given destination. Here’s the current binding configuration for account-service. The orders-in destination is a queue created for direct communication with order-service, so only orders-out is grouped, using the spring.cloud.stream.bindings.input.group property.

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders-in
        input:
          destination: orders-out
          group: account

The consumer group mechanism is a concept taken from Apache Kafka, and implemented in Spring Cloud Stream also for the RabbitMQ broker, which does not natively support it. So, I think it is pretty interesting how it is configured on RabbitMQ. If you run two instances of a service without setting a group name on the destination, two bindings are created for a single exchange (one binding per instance), as shown in the picture below. Because two applications are listening on that exchange, there are four bindings assigned to that exchange in total.

stream-3

If you set a group name for the selected destination, Spring Cloud Stream will create a single binding for all running instances of a given service. The name of the binding will be suffixed with the group name.

B08597_11_06

Because we have included spring-cloud-starter-sleuth in the project dependencies, the same traceId header is sent with all the asynchronous requests exchanged during the realization of a single request incoming to the order-service POST endpoint. Thanks to that, we can easily correlate all the logs with this header using the Elastic Stack (Kibana).

B08597_11_05

Automated Testing

You can easily test your microservice without connecting to a message broker. To achieve it you need to include spring-cloud-stream-test-support in your project dependencies. It contains the TestSupportBinder bean that lets you interact with the bound channels and inspect any messages sent and received by the application.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-test-support</artifactId>
  <scope>test</scope>
</dependency>

In the test class we need to inject the MessageCollector bean, which is responsible for receiving messages retained by the TestSupportBinder. Here’s my test class from account-service. Using the Processor bean I send a test order to the input channel. Then MessageCollector receives the message that is sent back to order-service via the output channel. Test method testAccepted creates an order that should be accepted by account-service, while testRejected sets an order price that is too high, which results in rejecting the order.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class OrderReceiverTest {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderReceiverTest.class);

	@Autowired
	private Processor processor;
	@Autowired
	private MessageCollector messageCollector;

	@Test
	@SuppressWarnings("unchecked")
	public void testAccepted() {
		Order o = new Order();
		o.setId(1L);
		o.setAccountId(1L);
		o.setCustomerId(1L);
		o.setPrice(500);
		o.setProductIds(Collections.singletonList(2L));
		processor.input().send(MessageBuilder.withPayload(o).build());
		Message<Order> received = (Message<Order>) messageCollector.forChannel(processor.output()).poll();
		LOGGER.info("Order response received: {}", received.getPayload());
		assertNotNull(received.getPayload());
		assertEquals(OrderStatus.ACCEPTED, received.getPayload().getStatus());
	}

	@Test
	@SuppressWarnings("unchecked")
	public void testRejected() {
		Order o = new Order();
		o.setId(1L);
		o.setAccountId(1L);
		o.setCustomerId(1L);
		o.setPrice(100000);
		o.setProductIds(Collections.singletonList(2L));
		processor.input().send(MessageBuilder.withPayload(o).build());
		Message<Order> received = (Message<Order>) messageCollector.forChannel(processor.output()).poll();
		LOGGER.info("Order response received: {}", received.getPayload());
		assertNotNull(received.getPayload());
		assertEquals(OrderStatus.REJECTED, received.getPayload().getStatus());
	}

}

Conclusion

Message-driven microservices are a good choice whenever you don’t need a synchronous response from your API. In this article I have shown a sample use case of the publish/subscribe model for inter-service communication between microservices. The source code is, as usual, available on GitHub (https://github.com/piomin/sample-message-driven-microservices.git). For more interesting examples of using the Spring Cloud Stream library, also with Apache Kafka, you can refer to Chapter 11 of my book Mastering Spring Cloud (https://www.packtpub.com/application-development/mastering-spring-cloud).

Managing Spring Boot apps locally with Trampoline

Today I came across an interesting solution for managing Spring Boot applications locally: Trampoline. It is a rather simple product that provides a web console allowing you to start, stop and monitor your applications. However, it can sometimes be useful, especially if you run many different applications locally during microservices development. In this article I’m going to show the main features provided by Trampoline.

How it works

Trampoline is itself a Spring Boot application, so you can easily start it using your IDE or with the java -jar command after building the project with mvn clean install. By default the web console is available on port 8080, but you can easily override it with the server.port parameter. It allows you to:

  • Start your application – realized by running the Maven Spring Boot plugin command mvn spring-boot:run, which builds the binary from the source code and runs the Java application
  • Shut down your application – realized by calling the Spring Boot Actuator /shutdown endpoint, which performs a graceful shutdown of your application (see the example call below)
  • Monitor your application – it displays some basic information retrieved from Spring Boot Actuator endpoints, like traces, logs, metrics and Git commit data.
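
For reference, the graceful shutdown that Trampoline triggers boils down to a POST request to the actuator endpoint; assuming the default /actuator base path and an example port, the manual equivalent would be something like this.

$ curl -X POST http://localhost:8090/actuator/shutdown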

Setup

First, you need to clone the Trampoline repository from GitHub. It is available here: https://github.com/ErnestOrt/Trampoline.git. The application is located inside the trampoline directory. You can run its main class Application or just run the Maven command mvn spring-boot:run. And that’s all. Trampoline is available at http://localhost:8080.

Configuring applications

We will use one of my previous samples of microservices built with Spring Boot 2.0. It is available on my GitHub account in the sample-spring-microservices-new repository: https://github.com/piomin/sample-spring-microservices-new.git. Before deploying these microservices on Trampoline we need to make some minor changes. First, all the microservices have to expose Spring Boot Actuator endpoints. Be sure that the /shutdown endpoint is enabled. All changes should be made in the Spring Boot YAML configuration files, which are stored on config-service.

management:
  endpoint.shutdown.enabled: true
  endpoints.web.exposure.include: '*'

If you would like to provide information about the last commit, you should include the Maven plugin git-commit-id-plugin, which is executed during the application build. Of course, you also need to add the spring-boot-maven-plugin, which is used for building and running a Spring Boot application from Maven. All the required changes are available in the trampoline branch (https://github.com/piomin/sample-spring-microservices-new/tree/trampoline).

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
		</plugin>
		<plugin>
			<groupId>pl.project13.maven</groupId>
			<artifactId>git-commit-id-plugin</artifactId>
		</plugin>
	</plugins>
</build>

Adding microservices

Further configuration is provided using the Trampoline web console. First, go to the SETTINGS section. You need to register every single instance of your microservices. You can register:

  • External, already running application by providing its IP address and HTTP port
  • Git repository with your microservice, which then will be cloned into your machine
  • Git repository with your microservice existing on the local machine just by providing its location

I have cloned the repository with the microservices myself, so I’m selecting the third option. Inside the Register Microservice form we have to set the microservice name, port, actuator endpoint context path, default build tool and Maven pom.xml file location.

trampoline-1

It is important to remember to set the Maven home location in the Maven Settings panel. After registering all the sample microservices (config-service, discovery-service, gateway-service, and three Spring Cloud applications) we may add them to one group. It is a very useful feature, because then we can deploy them all with one click.

trampoline-2

Here’s the full list of services registered in Trampoline.

trampoline-3

Managing microservices

Now, we can navigate to the INSTANCES section. We can launch a single microservice instance or a group of microservices. If you would like to launch a single instance, just select it from the list on the Launch Instance panel and click the Launch button. It immediately opens a new command window, builds your application from the source code and launches it under the selected port.

trampoline-4

The list of running microservices is available below. You can see there each application’s HTTP port and status. You may also display traces, logs or metrics by clicking one of the icons available in every row.

trampoline-5

Here’s the information about the last commit for discovery-service.

trampoline-6

If you decide to restart an application, Trampoline sends a request to the /shutdown endpoint, rebuilds your application with the newest version of the code and runs it again. Alternatively, you may use Spring Boot Devtools (by including the dependency org.springframework.boot:spring-boot-devtools), which forces your application to be restarted after a source code modification. Because Trampoline continuously monitors the status of all registered applications by calling their actuator endpoints, you will still see the full list of running microservices.
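
For completeness, here’s what the Devtools dependency mentioned above could look like in pom.xml; marking it optional follows the common recommendation so that it is not propagated to other modules.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-devtools</artifactId>
	<optional>true</optional>
</dependency>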

Chaos Monkey for Spring Boot Microservices

How many of you have never encountered a crash or a failure of your systems in a production environment? Certainly each one of you, sooner or later, has experienced it. If we are not able to avoid failures, the solution seems to be keeping our system in a state of permanent failure. This concept underpins the tool invented by Netflix to test the resilience of its IT infrastructure – Chaos Monkey. A few days ago I came across a solution, based on the idea behind Netflix’s tool, designed for testing Spring Boot applications. This library has been implemented by Codecentric. Until now, I recognized them only as the authors of another interesting solution dedicated to the Spring Boot ecosystem – Spring Boot Admin. I have already described that library in one of my previous articles, Monitoring Microservices With Spring Boot Admin (https://piotrminkowski.wordpress.com/2017/06/26/monitoring-microservices-with-spring-boot-admin).
Today I’m going to show you how to include Codecentric’s Chaos Monkey in your Spring Boot application and then apply chaos engineering to a sample system consisting of a few microservices. The Chaos Monkey library can be used together with Spring Boot 2.0, and its current release version is 1.0.1. However, I’ll implement the sample using version 2.0.0-SNAPSHOT, because it has some interesting new features not available in earlier versions of this library. In order to be able to download the SNAPSHOT version of Codecentric’s Chaos Monkey library, you have to remember to add the Maven repository https://oss.sonatype.org/content/repositories/snapshots to the repositories section of your pom.xml, as shown below.
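
Here’s a minimal sketch of that repositories entry; the repository id is arbitrary.

<repositories>
	<repository>
		<id>sonatype-snapshots</id>
		<url>https://oss.sonatype.org/content/repositories/snapshots</url>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</repository>
</repositories>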

1. Enable Chaos Monkey for an application

There are two steps required to enable Chaos Monkey for a Spring Boot application. First, let’s add the chaos-monkey-spring-boot library to the project’s dependencies.

<dependency>
	<groupId>de.codecentric</groupId>
	<artifactId>chaos-monkey-spring-boot</artifactId>
	<version>2.0.0-SNAPSHOT</version>
</dependency>

Then, we should activate the chaos-monkey profile on application startup.

$ java -jar target/order-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey

2. Sample system architecture

Our sample system consists of three microservices, each started in two instances, and a service discovery server. The microservices register themselves with the discovery server and communicate with each other through an HTTP API. The Chaos Monkey library is included in every single instance of all running microservices, but not in the discovery server. Here’s the diagram that illustrates the architecture of our sample system.

chaos

The source code of the sample applications is available on GitHub in the sample-spring-chaosmonkey repository (https://github.com/piomin/sample-spring-chaosmonkey.git). After cloning this repository and building it using the mvn clean install command, you should first run discovery-service. Then run two instances of every microservice on different ports by setting the -Dserver.port property to an appropriate number. Here’s the set of commands I run.

$ java -jar target/discovery-service-1.0-SNAPSHOT.jar
$ java -jar target/order-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey
$ java -jar -Dserver.port=9091 target/order-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey
$ java -jar target/product-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey
$ java -jar -Dserver.port=9092 target/product-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey
$ java -jar target/customer-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey
$ java -jar -Dserver.port=9093 target/customer-service-1.0-SNAPSHOT.jar --spring.profiles.active=chaos-monkey

3. Process configuration

In version 2.0.0-SNAPSHOT of the chaos-monkey-spring-boot library, Chaos Monkey is enabled by default for applications that include it. You may disable it using the chaos.monkey.enabled property. However, the only assault enabled by default is latency. This type of assault adds a random delay to the requests processed by the application, in the range determined by the properties chaos.monkey.assaults.latencyRangeStart and chaos.monkey.assaults.latencyRangeEnd. The number of attacked requests depends on the property chaos.monkey.assaults.level, where 1 means every request and 10 means every 10th request. We can also enable the exception and appKiller assaults for our application. For simplicity, I set the same configuration for all the microservices. Let’s take a look at the settings provided in the application.yml file.

chaos:
  monkey:
    assaults:
      level: 8
      latencyRangeStart: 1000
      latencyRangeEnd: 10000
      exceptionsActive: true
      killApplicationActive: true
    watcher:
      repository: true
      restController: true

In theory, the configuration visible above should enable all three available types of assaults. However, if you enable latency and exceptions, killApplication will never happen. Also, if you enable both latency and exceptions, all the requests sent to the application will be attacked, no matter which level is set with the chaos.monkey.assaults.level property. It is important to remember to activate the restController watcher, which is disabled by default.

4. Enable Spring Boot Actuator endpoints

Codecentric implemented a new feature in version 2.0 of their Chaos Monkey library – an endpoint for Spring Boot Actuator. To enable it for our applications, we have to activate it following the actuator convention – by setting the property management.endpoint.chaosmonkey.enabled to true. Additionally, starting from version 2.0 of Spring Boot, we have to expose that HTTP endpoint to make it available after application startup.

management:
  endpoint:
    chaosmonkey:
      enabled: true
  endpoints:
    web:
      exposure:
        include: health,info,chaosmonkey

The chaos-monkey-spring-boot library provides several endpoints allowing you to check and modify the configuration. You can use the GET /chaosmonkey method to fetch the whole configuration of the library. You may also disable Chaos Monkey after starting the application by calling the POST /chaosmonkey/disable method. The full list of available endpoints can be found here: https://codecentric.github.io/chaos-monkey-spring-boot/2.0.0-SNAPSHOT/#endpoints.
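
Assuming the default /actuator base path and an application running on port 8090 (both are just example values here), those calls could look like this.

$ curl http://localhost:8090/actuator/chaosmonkey
$ curl -X POST http://localhost:8090/actuator/chaosmonkey/disable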

5. Running applications

All the sample microservices store their data in MySQL. So, the first step is to run a MySQL database locally using its Docker image. The Docker command visible below also creates a database and a user with a password.

$ docker run -d --name mysql -e MYSQL_DATABASE=chaos -e MYSQL_USER=chaos -e MYSQL_PASSWORD=chaos123 -e MYSQL_ROOT_PASSWORD=123456 -p 33306:3306 mysql

After running all the sample applications, with every microservice multiplied into two instances listening on different ports, our environment looks like the figure below.

chaos-4

You will see the following information in your logs during application boot.

chaos-5

We may check the Chaos Monkey configuration settings for every running instance of the application by calling the following actuator endpoint.

chaos-3

6. Testing the system

For testing purposes, I used the popular performance testing library Gatling. It creates 20 simultaneous users, each of which calls the POST /orders and GET /orders/{id} methods exposed by order-service via the API gateway 500 times.

class ApiGatlingSimulationTest extends Simulation {

  val scn = scenario("AddAndFindOrders").repeat(500, "n") {
        exec(
          http("AddOrder-API")
            .post("http://localhost:8090/order-service/orders")
            .header("Content-Type", "application/json")
            .body(StringBody("""{"productId":""" + Random.nextInt(20) + ""","customerId":""" + Random.nextInt(20) + ""","productsCount":1,"price":1000,"status":"NEW"}"""))
            .check(status.is(200),  jsonPath("$.id").saveAs("orderId"))
        ).pause(Duration.apply(5, TimeUnit.MILLISECONDS))
        .
        exec(
          http("GetOrder-API")
            .get("http://localhost:8090/order-service/orders/${orderId}")
            .check(status.is(200))
        )
  }

  setUp(scn.inject(atOnceUsers(20))).maxDuration(FiniteDuration.apply(10, "minutes"))

}

The POST endpoint is implemented inside OrderController in the add(...) method. It calls the find methods exposed by customer-service and product-service using OpenFeign clients. If the customer has sufficient funds and there are still products in stock, it accepts the order and updates the customer and the product using PUT methods. Here’s the implementation of the two methods tested by the Gatling performance test.

@RestController
@RequestMapping("/orders")
public class OrderController {

	@Autowired
	OrderRepository repository;
	@Autowired
	CustomerClient customerClient;
	@Autowired
	ProductClient productClient;

	@PostMapping
	public Order add(@RequestBody Order order) {
		Product product = productClient.findById(order.getProductId());
		Customer customer = customerClient.findById(order.getCustomerId());
		int totalPrice = order.getProductsCount() * product.getPrice();
		if (customer != null && customer.getAvailableFunds() >= totalPrice && product.getCount() >= order.getProductsCount()) {
			order.setPrice(totalPrice);
			order.setStatus(OrderStatus.ACCEPTED);
			product.setCount(product.getCount() - order.getProductsCount());
			productClient.update(product);
			customer.setAvailableFunds(customer.getAvailableFunds() - totalPrice);
			customerClient.update(customer);
		} else {
			order.setStatus(OrderStatus.REJECTED);
		}
		return repository.save(order);
	}

	@GetMapping("/{id}")
	public Order findById(@PathVariable("id") Integer id) {
		Optional<Order> order = repository.findById(id);
		if (order.isPresent()) {
			Order o = order.get();
			Product product = productClient.findById(o.getProductId());
			o.setProductName(product.getName());
			Customer customer = customerClient.findById(o.getCustomerId());
			o.setCustomerName(customer.getName());
			return o;
		} else {
			return null;
		}
	}

	// ...

}

Chaos Monkey adds a random latency between 1000 and 10000 milliseconds (as shown in step 3). It is important to change the default timeouts for the Feign and Ribbon clients before starting a test. I decided to set readTimeout to 5000 milliseconds. It will cause some delayed requests to time out, while others will succeed (around 50/50). Here’s the timeout configuration for the Feign client.

feign:
  client:
    config:
      default:
        connectTimeout: 5000
        readTimeout: 5000
  hystrix:
    enabled: false

Here’s the Ribbon client timeout configuration for the API gateway. We have also changed the Hystrix settings to disable the circuit breaker for Zuul.

ribbon:
  ConnectTimeout: 5000
  ReadTimeout: 5000

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 15000
      fallback:
        enabled: false
      circuitBreaker:
        enabled: false

To launch the Gatling performance test, go to the performance-test directory and run the gradle loadTest command. Here’s a result generated for the latency assault settings described above. Of course, we can change this result by manipulating the Chaos Monkey latency values or the Ribbon and Feign timeout values.

chaos-5

Here’s the Gatling graph with average response times. The results do not look good. However, we should remember that a single POST method on order-service calls two methods exposed by product-service and two methods exposed by customer-service.

chaos-6

Here’s the next Gatling result graph – this time it illustrates the timeline of error and success responses. All HTML reports generated by Gatling during the performance test are available under the performance-test/build/gatling-results directory.

chaos-7

Secure Discovery with Spring Cloud Netflix Eureka

Building a standard discovery mechanism based on Spring Cloud Netflix Eureka is rather an easy thing to do. The same solution built over secure SSL communication between the discovery client and server is a slightly more advanced challenge. I haven’t found any complete example of such an application on the web. Let’s try to implement it, beginning with the server-side application.

1. Generate certificates

If you have been developing Java applications for some years, you have probably heard about keytool. This tool is available in your ${JAVA_HOME}\bin directory and is designed for managing keys and certificates. We begin by generating a keystore for the server-side Spring Boot application. Here’s the appropriate keytool command, which generates a certificate stored inside a JKS keystore file named eureka.jks.

secure-discovery-2
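
If the screenshot above is hard to read, the command looks roughly like this; the alias and passwords match the configuration shown below, while the distinguished name and validity period are illustrative.

$ keytool -genkey -alias eureka -storetype JKS -keyalg RSA -keysize 2048 -keystore eureka.jks -validity 3650 -storepass 123456 -keypass 123456 -dname "CN=localhost"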

2. Setting up a secure discovery server

Since the Eureka server is embedded in a Spring Boot application, we need to secure it using standard Spring Boot properties. I placed the generated keystore file eureka.jks on the application’s classpath. Now, the only thing that has to be done is to prepare some configuration settings inside application.yml that point to the keystore file location, its type, and the access password.

server:
  port: 8761
  ssl:
    enabled: true
    key-store: classpath:eureka.jks
    key-store-password: 123456
    trust-store: classpath:eureka.jks
    trust-store-password: 123456
    key-alias: eureka

3. Setting up two-way SSL authentication

We will complicate our example a little. A standard SSL configuration assumes that only the client verifies the server’s certificate. We will also force client certificate authentication on the server side. It can be achieved by setting the property server.ssl.client-auth to need.

server:
  ssl:
    client-auth: need

That’s not all, because we also have to add the client’s certificate to the list of trusted certificates on the server side. So, first let’s generate the client’s keystore using the same keytool command as for the server’s keystore.

secure-deiscovery-1

Now, we need to export the certificates from the generated keystores, for both the client and the server side.

secure-discovery-3

Finally, we import the client’s certificate into the server’s keystore and the server’s certificate into the client’s keystore.

secure-discovery-4
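
For reference, the export and import steps boil down to commands along these lines; aliases, file names and passwords are illustrative and should match your keystores.

$ keytool -export -alias client -keystore client.jks -file client.cer -storepass 123456
$ keytool -export -alias eureka -keystore eureka.jks -file eureka.cer -storepass 123456
$ keytool -import -alias client -file client.cer -keystore eureka.jks -storepass 123456 -noprompt
$ keytool -import -alias eureka -file eureka.cer -keystore client.jks -storepass 123456 -noprompt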

4. Running secure Eureka server

The sample applications are available on GitHub in the sample-secure-eureka-discovery repository (https://github.com/piomin/sample-secure-eureka-discovery.git). After running the discovery-service application, Eureka is available at https://localhost:8761. If you try to visit its web dashboard, you get the following error in your web browser, which means the Eureka server is secured.

hqdefault

Well, the Eureka dashboard is sometimes a useful tool, so let’s import the client’s keystore into our web browser to be able to access it. We have to convert the client’s keystore from JKS to PKCS12 format. Here’s the command that performs that operation.

$ keytool -importkeystore -srckeystore client.jks -destkeystore client.p12 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass 123456 -deststorepass 123456 -srcalias client -destalias client -srckeypass 123456 -destkeypass 123456 -noprompt

5. Client’s application configuration

When implementing a secure connection on the client side, we generally need to do the same as in the previous step – provide a keystore. However, it is not a very simple thing to do, because Spring Cloud does not provide any configuration property that allows you to pass the location of an SSL keystore to a discovery client. It’s worth mentioning that the Eureka client leverages a Jersey client to communicate with the server-side application. It may be a little surprising that it is not Spring’s RestTemplate, but we should remember that Spring Cloud Eureka is built on top of the Netflix OSS Eureka client, which does not use Spring libraries.
HTTP basic authentication is automatically added to your Eureka client if you include security credentials in the connection URL, for example http://piotrm:12345@localhost:8761/eureka. For more advanced configuration, like passing an SSL keystore to the HTTP client, we need to provide a @Bean of type DiscoveryClientOptionalArgs.
The following fragment of code shows how to enable an SSL connection for the discovery client. First, we set the location of the keystore and truststore files using the javax.net.ssl.* Java system properties. Then, we provide a custom implementation of the Jersey client based on the Java SSL settings and set it on the DiscoveryClientOptionalArgs bean.

@Bean
public DiscoveryClient.DiscoveryClientOptionalArgs discoveryClientOptionalArgs() throws NoSuchAlgorithmException {
	DiscoveryClient.DiscoveryClientOptionalArgs args = new DiscoveryClient.DiscoveryClientOptionalArgs();
	System.setProperty("javax.net.ssl.keyStore", "src/main/resources/client.jks");
	System.setProperty("javax.net.ssl.keyStorePassword", "123456");
	System.setProperty("javax.net.ssl.trustStore", "src/main/resources/client.jks");
	System.setProperty("javax.net.ssl.trustStorePassword", "123456");
	EurekaJerseyClientBuilder builder = new EurekaJerseyClientBuilder();
	builder.withClientName("account-client");
	builder.withSystemSSLConfiguration();
	builder.withMaxTotalConnections(10);
	builder.withMaxConnectionsPerHost(10);
	args.setEurekaJerseyClient(builder.build());
	return args;
}

6. Enabling HTTPS on the client side

The configuration provided in the previous step applies only to the communication between the discovery client and the Eureka server. What if we would also like to secure the HTTP endpoints exposed by the client-side application? The first step is pretty much the same as for the discovery server: we need to generate a keystore and set it using Spring Boot properties inside application.yml.

server:
  port: ${PORT:8090}
  ssl:
    enabled: true
    key-store: classpath:client.jks
    key-store-password: 123456
    key-alias: client

During registration we need to “inform” the Eureka server that our application’s endpoints are secured. To achieve it, we should set the property eureka.instance.securePortEnabled to true, and also disable the non-secure port, which is enabled by default, with the nonSecurePortEnabled property.

eureka:
  instance:
    nonSecurePortEnabled: false
    securePortEnabled: true
    securePort: ${server.port}
    statusPageUrl: https://localhost:${server.port}/info
    healthCheckUrl: https://localhost:${server.port}/health
    homePageUrl: https://localhost:${server.port}
  client:
    securePortEnabled: true
    serviceUrl:
      defaultZone: https://localhost:8761/eureka/

7. Running client’s application

Finally, we can run the client-side application. After launching, the application should be visible in the Eureka dashboard.

secure-discovery-5

All the client application’s endpoints are registered in Eureka under the HTTPS protocol. I have also overridden the default implementation of the actuator /info endpoint, as shown in the code fragment below.

@Component
public class SecureInfoContributor implements InfoContributor {

	@Override
	public void contribute(Builder builder) {
		builder.withDetail("hello", "I'm secure app!");
	}

}

Now, we can try to visit the /info endpoint one more time. You should see the same information as below.

secure-discovery-6

Alternatively, if you configure the client side with a certificate that is not trusted by the server side, you will see the following exception while starting your client application.

secure-discovery-7

Conclusion

Securing the connection between microservices and the Eureka server is only the first step of securing the whole system. We also need to think about securing the connection between microservices and the config server, and between all microservices during inter-service communication with a @LoadBalanced RestTemplate or an OpenFeign client. You can find examples of such implementations and many more in my book “Mastering Spring Cloud” (https://www.packtpub.com/application-development/mastering-spring-cloud).

Exporting metrics to InfluxDB and Prometheus using Spring Boot Actuator

Spring Boot Actuator is one of the most heavily modified projects after the release of Spring Boot 2. It has been through major improvements aimed at simplifying customization, and it includes some new features like support for other web technologies, for example the new reactive module, Spring WebFlux. It also adds out-of-the-box support for exporting metrics to InfluxDB – an open source time series database designed to handle high volumes of timestamped data. It is really a great simplification in comparison to the version used with Spring Boot 1.5. You can see for yourself how much by reading one of my previous articles, Custom metrics visualization with Grafana and InfluxDB. I described there how to export metrics generated by Spring Boot Actuator to InfluxDB using the @ExportMetricsWriter bean. The sample Spring Boot application for that article is available in the GitHub repository sample-spring-graphite (https://github.com/piomin/sample-spring-graphite.git) in the master branch. For the current article, I have created the spring2 branch (https://github.com/piomin/sample-spring-graphite/tree/spring2), which shows how to implement the same feature as before using version 2.0 of Spring Boot and Spring Boot Actuator.

Additionally, I’m going to show you how to export the same metrics to another popular monitoring system for efficiently storing time series data – Prometheus. There is one major difference between the metric export models of InfluxDB and Prometheus. The first of them is a push based system, while the second is a pull based system. So, our sample application needs to actively send data to the InfluxDB monitoring system, while with Prometheus it only has to expose an endpoint that will be scraped for data periodically. Let’s begin with InfluxDB.

1. Running InfluxDB

In the previous article I didn’t write much about this database and its configuration. So now I will say a few words about it. The first step is typical for my examples – we will run a Docker container with InfluxDB. Here’s the simplest command that runs InfluxDB on your local machine and exposes the HTTP API over port 8086.

$ docker run -d --name influx -p 8086:8086 influxdb

Once we have started that container, you will probably want to log in there and execute some commands. Nothing simpler, just run the following command. After login you should see the version of InfluxDB running on the target Docker container.

$ docker exec -it influx influx
Connected to http://localhost:8086 version 1.5.2
InfluxDB shell version: 1.5.2

The first step is to create a database. As you can probably guess, it can be achieved using the command create database. Then switch to the newly created database.

> create database springboot
> use springboot

Does this syntax look familiar to you? Yes, InfluxDB provides a query language very similar to SQL. It is called InfluxQL, and allows you to define SELECT statements, GROUP BY or INTO clauses, and many more. However, before executing such queries we should have some data stored inside the database, right? In the next steps we will generate some test metrics.
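
Before that, one small side note: the same InfluxQL statements can also be executed from Java code with the influxdb-java client (artifact org.influxdb:influxdb-java). The snippet below is only an optional, illustrative sketch — the rest of the article relies on Micrometer, so you don’t need it to follow along.

public class InfluxQlExample {

	public static void main(String[] args) {
		// adjust the address if your Docker host is not local (e.g. http://192.168.99.100:8086)
		InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086");
		// run an InfluxQL statement against the springboot database created above
		QueryResult result = influxDB.query(new Query("SHOW MEASUREMENTS", "springboot"));
		result.getResults().forEach(r -> System.out.println(r.getSeries()));
		influxDB.close();
	}

}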

2. Integrating Spring Boot application with InfluxDB

If you include the artifact micrometer-registry-influx in the project’s dependencies, export to InfluxDB will be enabled automatically. Of course, we also need to include the starter spring-boot-starter-actuator.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>io.micrometer</groupId>
	<artifactId>micrometer-registry-influx</artifactId>
</dependency>

The only thing you have to do is to override the default address of InfluxDB, because we are running the InfluxDB Docker container on a VM. By default, Spring Boot tries to connect to a database named mydb. However, I have already created the database springboot, so I should also override this default value. In version 2 of Spring Boot all the configuration properties related to Spring Boot Actuator have been moved to the management.* section.

management:
  metrics:
    export:
      influx:
        db: springboot
        uri: http://192.168.99.100:8086
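
Apart from the metrics generated automatically by Spring Boot Actuator, you can register your own meters through the injected MeterRegistry; they are exported to InfluxDB together with the built-in ones. The bean and metric names below are purely illustrative — a minimal sketch rather than code from the sample repository.

@Service
public class OrderMetrics {

	private final Counter ordersCounter;

	public OrderMetrics(MeterRegistry registry) {
		// register a custom counter under the name orders.created
		this.ordersCounter = Counter.builder("orders.created")
				.description("Number of created orders")
				.register(registry);
	}

	public void orderCreated() {
		ordersCounter.increment();
	}

}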

You may be a little surprised, after starting a Spring Boot application with the actuator included on the classpath, that it exposes only two HTTP endpoints by default: /actuator/info and /actuator/health. That’s because in the newest version of Spring Boot all actuator endpoints other than /health and /info are not exposed over HTTP by default, for security reasons. To expose all the actuator endpoints, you have to set the property management.endpoints.web.exposure.include to '*'.
In the newest version of Spring Boot the monitoring of HTTP metrics has been improved significantly. We can enable collecting metrics for all Spring MVC endpoints by setting the property management.metrics.web.server.auto-time-requests to true. Alternatively, when it is set to false, you can enable metrics for a specific REST controller by annotating it with @Timed. You can also annotate a single method inside a controller to generate metrics only for a specific endpoint.
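
For example, assuming auto-time-requests is set to false, a single endpoint could be instrumented like this (the metric name and percentiles are illustrative, not taken from the sample repository):

@Timed(value = "persons.find-by-id", percentiles = {0.5, 0.95})
@GetMapping("/persons/{id}")
public Person findById(@PathVariable("id") Integer id) {
	return repository.findById(id).get();
}
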
After the application boots you may check out the full list of generated metrics by calling the endpoint GET /actuator/metrics. By default, metrics for Spring MVC controllers are generated under the name http.server.requests. This name can be customized by setting the management.metrics.web.server.requests-metric-name property. If you run the sample application available inside my GitHub repository, it is by default available under port 2222. Now, you can check out the list of statistics generated for a single metric by calling the endpoint GET /actuator/metrics/{requiredMetricName}, as shown in the following picture.

actuator-6

3. Building Spring Boot application

The sample Spring Boot application used for generating metrics consists of a single controller that implements basic CRUD operations for manipulating a Person entity, a repository bean and an entity class. The application connects to a MySQL database using a Spring Data JPA repository that provides the CRUD implementation. Here’s the controller class.

@RestController
@Timed
public class PersonController {

	protected Logger logger = Logger.getLogger(PersonController.class.getName());

	@Autowired
	PersonRepository repository;

	@GetMapping("/persons/pesel/{pesel}")
	public List<Person> findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Person.findByPesel(%s)", pesel));
		return repository.findByPesel(pesel);
	}

	@GetMapping("/persons/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Person.findById(%d)", id));
		return repository.findById(id).get();
	}

	@GetMapping("/persons")
	public List<Person> findAll() {
		logger.info("Person.findAll()");
		return (List<Person>) repository.findAll();
	}

	@PostMapping("/persons")
	public Person add(@RequestBody Person person) {
		logger.info(String.format("Person.add(%s)", person));
		return repository.save(person);
	}

	@PutMapping("/persons")
	public Person update(@RequestBody Person person) {
		logger.info(String.format("Person.update(%s)", person));
		return repository.save(person);
	}

	@DeleteMapping("/persons/{id}")
	public void remove(@PathVariable("id") Integer id) {
		logger.info(String.format("Person.remove(%d)", id));
		repository.deleteById(id);
	}

}
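
The repository and entity classes are not listed in the article. Here’s a minimal sketch of what they might look like — the field names are inferred from the controller above and from the test fragment used later, so treat it as an illustration rather than the exact source code.

@Entity
public class Person {

	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private Integer id;
	private String firstName;
	private String lastName;
	private String pesel;
	private int age;

	// getters and setters omitted

}

public interface PersonRepository extends CrudRepository<Person, Integer> {

	List<Person> findByPesel(String pesel);

}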

Before running the application we have to set up a MySQL database. The most convenient way to achieve it is through the MySQL Docker image. Here’s the command that runs a container with the database grafana, defines a user and password, and exposes MySQL 5 on port 33306.

docker run -d --name mysql -e MYSQL_DATABASE=grafana -e MYSQL_USER=grafana -e MYSQL_PASSWORD=grafana -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -p 33306:3306 mysql:5

Then we need to set some database configuration properties on the application side. All the required tables will be created on the application’s boot thanks to setting the property spring.jpa.properties.hibernate.hbm2ddl.auto to update.

spring:
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/grafana?useSSL=false
    username: grafana
    password: grafana
    driverClassName: com.mysql.jdbc.Driver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
        hbm2ddl.auto: update

4. Generating metrics

After starting the application and the required Docker containers, the only thing that needs to be done is to generate some test statistics. I created a JUnit test class that generates some test data and calls the endpoints exposed by the application in a loop. Here’s a fragment of that test method.

int ix = new Random().nextInt(100000);
Person p = new Person();
p.setFirstName("Jan" + ix);
p.setLastName("Testowy" + ix);
p.setPesel(new DecimalFormat("0000000").format(ix) + new DecimalFormat("000").format(ix%100));
p.setAge(ix%100);
p = template.postForObject("http://localhost:2222/persons", p, Person.class);
LOGGER.info("New person: {}", p);

p = template.getForObject("http://localhost:2222/persons/{id}", Person.class, p.getId());
p.setAge(ix%100);
template.put("http://localhost:2222/persons", p);
LOGGER.info("Person updated: {} with age={}", p, ix%100);

template.delete("http://localhost:2222/persons/{id}", p.getId());

Now, let’s move back to step 1. As you probably remember, I have shown you how to run the influx client in the InfluxDB Docker container. After a few minutes of work the test should have called the exposed endpoints many times. We can check out the values of the metric http_server_requests stored in InfluxDB. The following query returns the list of measurements collected during the last 3 minutes.

actuator-1

As you see, all the metrics generated by Spring Boot Actuator are tagged with the following information: method, uri, status and exception. Thanks to those tags we may easily group metrics per single endpoint, including the failure and success percentage. Let’s see how to configure and view it in Grafana.

5. Metrics visualization using Grafana

Once we have successfully exported metrics to InfluxDB, it is time to visualize them using Grafana. First, let’s run a Docker container with Grafana.

$ docker run -d --name grafana -p 3000:3000 grafana/grafana

Grafana provides a user-friendly interface for creating InfluxDB queries. We define a graph that visualizes the request processing time per calling endpoint and the total number of requests received by the application. If we filter the statistics stored in the measurement http_server_requests by method type and uri, we will collect all metrics generated for a single endpoint.

actuator-4

A similar definition should be created for the other endpoints. We will illustrate them all on a single graph.

actuator-5

Here’s the final result.

actuator-2

Here’s the graph that visualizes the total number of requests sent to the application.

actuator-3

6. Running Prometheus

The most suitable way to run Prometheus locally is obviously through a Docker container. The API is exposed under port 9090. We should also pass the initial configuration file and the name of a Docker network. Why? You will find all the answers in the next part of this step’s description.

docker run -d --name prometheus -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml --network springboot prom/prometheus

In contrast to InfluxDB, Prometheus pulls metrics from an application. Therefore, we need to enable the actuator endpoint that exposes metrics for Prometheus, which is disabled by default. To enable it, set the property management.endpoint.prometheus.enabled to true, as shown in the configuration fragment below.

management:
  endpoint:
    prometheus:
      enabled: true

Then we should set the address of the actuator endpoint exposed by the application in the Prometheus configuration file. The scrape_configs section is responsible for specifying a set of targets and parameters describing how to connect to them. By default, Prometheus tries to collect data from the defined target endpoint once a minute.

scrape_configs:
  - job_name: 'springboot'
    metrics_path: '/actuator/prometheus'
    static_configs:
    - targets: ['person-service:2222']

Similarly to the InfluxDB integration, we need to include the following artifact in the project’s dependencies.

<dependency>
	<groupId>io.micrometer</groupId>
	<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

In my case, Docker is running on a VM and is available under the IP 192.168.99.100. If I would like Prometheus, which is launched as a Docker container, to be able to connect to my application, I should also launch the application as a Docker container. The most convenient way to link two independent containers is through a Docker network. If both containers are assigned to the same network, they are able to connect to each other using the container name as the target address. The Dockerfile is available in the root directory of the sample application’s source code. The second command visible below (docker build) is not required, because the image piomin/person-service is available on my Docker Hub repository.

$ docker network create springboot
$ docker build -t piomin/person-service .
$ docker run -d --name person-service -p 2222:2222 --network springboot piomin/person-service

7. Integrating Prometheus with Grafana

Prometheus exposes a web console under the address 192.168.99.100:9090, where you can specify a query and display a graph with metrics. However, we can integrate it with Grafana to take advantage of the nicer visualization offered by this tool. First, you should create a Prometheus data source.

actuator-9

Then we should define the queries for collecting metrics from the Prometheus API. Spring Boot Actuator exposes three different metrics related to HTTP traffic: http_server_requests_seconds_count, http_server_requests_seconds_sum and http_server_requests_seconds_max. For example, we may calculate the per-second average rate of increase of the time series http_server_requests_seconds_sum, which stores the total number of seconds spent on processing requests, by using the rate() function. The values can be filtered by method and uri using an expression inside {}. The following picture illustrates the configuration of the rate() function per endpoint.

actuator-8

Here’s the graph.

actuator-7

Summary

The improvement in metrics generation between versions 1.5 and 2.0 of Spring Boot is significant. Exporting data to such popular monitoring systems as InfluxDB or Prometheus is now much easier than before, and does not require any additional development. The metrics related to HTTP traffic are more detailed and may easily be associated with a specific endpoint, thanks to tags indicating the uri, type and status of the HTTP request. I think that the modifications in Spring Boot Actuator, in relation to the previous version of Spring Boot, could be one of the main motivations to migrate your applications to the newest version.

Microservices traffic management using Istio on Kubernetes

I have already described a simple example of route configuration between two microservices deployed on Kubernetes in one of my previous articles: Service Mesh with Istio on Kubernetes in 5 steps. You can refer to that article if you are interested in basic information about Istio and its deployment on Kubernetes via Minikube. Today we will create some more advanced traffic management rules based on the same sample applications as used in the previous article about Istio.

The source code of the sample applications is available on GitHub in the repository sample-istio-services (https://github.com/piomin/sample-istio-services.git). There are two sample applications, callme-service and caller-service, deployed in two different versions, 1.0 and 2.0. Version 1.0 is available in the branch v1 (https://github.com/piomin/sample-istio-services/tree/v1), while version 2.0 is in the branch v2 (https://github.com/piomin/sample-istio-services/tree/v2). Using these sample applications in different versions, I’m going to show you different strategies of traffic management depending on an HTTP header set in the incoming requests.

We may force caller-service to route all the requests to a specific version of callme-service by setting the header x-version to v1 or v2. We can also leave this header unset, which results in splitting traffic between all existing versions of the service. If the request comes to version v1 of caller-service, the traffic is split 50-50 between the two instances of callme-service. If the request is received by a v2 instance of caller-service, 75% of the traffic is forwarded to version v2 of callme-service, and only 25% to v1. The scenario described above is illustrated on the following diagram.

istio-advanced-1

Before we proceed to the example, I should say a few words about traffic management with Istio. If you have read my previous article about Istio, you probably know that each rule is assigned to a destination. Rules control the process of request routing within a service mesh. One very important piece of information about them, especially for the purposes of the example illustrated on the diagram above, is that multiple rules can be applied to the same destination. The priority of every rule is determined by the precedence field of the rule. There is one principle related to the value of this field: the higher the value of this integer field, the greater the priority of the rule. As you may probably guess, if there is more than one rule with the same precedence value, the order of rule evaluation is undefined. In addition to a destination, we may also define a source of the request in order to restrict a rule only to a specific caller. If there are multiple deployments of a calling service, we can even filter them out by setting the source’s labels field. Of course, we can also specify the attributes of an HTTP request, such as uri, scheme or headers, that are used for matching a request with the defined rule.

Ok, now let’s take a look at the rule with the highest priority. Its name is callme-service-v1 (1). It applies to callme-service (2), and has the highest priority in comparison to other rules (3). It applies only to requests sent by caller-service (4) that contain the HTTP header x-version with value v1 (5). This route rule applies only to version v1 of callme-service (6).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1 # (1)
spec:
  destination:
    name: callme-service # (2)
  precedence: 4 # (3)
  match:
    source:
      name: caller-service # (4)
    request:
      headers:
        x-version:
          exact: "v1" # (5)
  route:
  - labels:
      version: v1 # (6)

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-7

The next rule, callme-service-v2 (1), has a lower priority (2). However, it does not conflict with the first rule, because it applies only to the requests containing the x-version header with value v2 (3). It forwards all such requests to version v2 of callme-service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2 # (1)
spec:
  destination:
    name: callme-service
  precedence: 3 # (2)
  match:
    source:
      name: caller-service
    request:
      headers:
        x-version:
          exact: "v2" # (3)
  route:
  - labels:
      version: v2 # (4)

As before, here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-6

The rule callme-service-v1-default (1) visible in the code fragment below has a lower priority (2) than the two previously described rules. In practice this means that it is applied only if the conditions defined in the two previous rules were not fulfilled. Such a situation occurs if you do not pass the header x-version inside the HTTP request, or if it has a value different than v1 or v2. The rule visible below applies only to the instance of the calling service labeled with version v1 (3). Finally, the traffic to callme-service is load balanced in proportions 50-50 between the two versions of that service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 2 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v1 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-4

The last rule is pretty similar to the previously described callme-service-v1-default. Its name is callme-service-v2-default (1), and it applies only to version v2 of caller-service (3). It has the lowest priority (2), and splits traffic between the two versions of callme-service in proportions 75-25 in favor of version v2 (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 1 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v2 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 25
  - labels:
      version: v2
    weight: 75

The same as before, I have also included the diagram illustrating the behaviour of this rule.

istio-advanced-5

All the rules may be placed inside a single file. In that case they should be separated with the line ---. This file is available in the code repository inside the callme-service module as multi-rule.yaml. To deploy all the defined rules on Kubernetes just execute the following command.

$ kubectl apply -f multi-rule.yaml

After a successful deployment you may check out the list of available rules by running the command istioctl get routerule.

istio-advanced-2

Before we start any tests, we obviously need to have the sample applications deployed on Kubernetes. These applications are really simple and pretty similar to the applications used for tests in my previous article about Istio. The controller visible below implements the method GET /callme/ping, which prints the version of the application taken from pom.xml and the value of the x-version HTTP header received in the request.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping(@RequestHeader(name = "x-version", required = false) String version) {
		LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + " with version " + version;
	}

}

Here’s the controller class that implements the method GET /caller/ping. It prints the version of caller-service taken from pom.xml and calls the method GET /callme/ping exposed by callme-service. It needs to include the x-version header in the request sent to the downstream service.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping(@RequestHeader(name = "x-version", required = false) String version) {
		LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
		HttpHeaders headers = new HttpHeaders();
		if (version != null)
			headers.set("x-version", version);<span id="mce_SELREST_start" style="overflow:hidden;line-height:0;"></span>
		HttpEntity entity = new HttpEntity(headers);
		ResponseEntity response = restTemplate.exchange("http://callme-service:8091/callme/ping", HttpMethod.GET, entity, String.class);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response.getBody() + " with header " + version;
	}

}
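
The RestTemplate injected into the controller above also has to be declared as a bean somewhere in the application. A minimal declaration, placed for example in the main application class, may look as follows; note that no @LoadBalanced annotation is needed here, because routing and load balancing are handled by the Kubernetes Service and Istio rather than by the client.

@Bean
public RestTemplate restTemplate() {
	return new RestTemplate();
}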

Now we may proceed to building the applications and deploying them on Kubernetes. Here are the further steps.

1. Building the application

First, switch to branch v1 and build the whole project sample-istio-services by executing mvn clean install command.

2. Building Docker image

The Dockerfiles are placed in the root directory of every application. Build their Docker images by executing the following commands.

$ docker build -t piomin/callme-service:1.0 .
$ docker build -t piomin/caller-service:1.0 .

Alternatively, you may omit this step, because images piomin/callme-service and piomin/caller-service are available on my Docker Hub account.

3. Injecting Istio components into the Kubernetes deployment file

The Kubernetes YAML deployment file is available in the root directory of every application as deployment.yaml. The result of the following command should be saved as a separate file, for example deployment-with-istio.yaml.

$ istioctl kube-inject -f deployment.yaml

4. Deployment on Kubernetes

Finally, you can execute the well-known kubectl command in order to deploy the Docker container with our sample application.

$ kubectl apply -f deployment-with-istio.yaml

Then switch to the branch v2, and repeat the steps described above for version 2.0 of the sample applications. The final deployment result is visible in the picture below.

istio-advanced-3

One very useful thing when running Istio on Kubernetes is the out-of-the-box integration with tools like Zipkin, Grafana or Prometheus. Istio automatically sends some metrics that are collected by Prometheus, for example the total number of requests in the metric istio_request_count. YAML deployment files for these plugins are available inside the directory ${ISTIO_HOME}/install/kubernetes/addons. Before installing Prometheus using the kubectl command I suggest changing the service type from the default ClusterIP to NodePort by adding the line type: NodePort.

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090

Then we should run the command kubectl apply -f prometheus.yaml in order to deploy Prometheus on Kubernetes. The deployment is available inside the istio-system namespace. To check the external port of the service run the following command. For me, it is available under the address http://192.168.99.100:32293.

istio-advanced-14

In the following diagram visualized using Prometheus I filtered out only the requests sent to callme-service. The green color points to requests received by version v2 of the service, while the red color points to requests processed by version v1 of the service. As you can see in this diagram, in the beginning I sent the requests to caller-service with the HTTP header x-version set to value v2, then I didn’t set this header and the traffic was split between the two deployed versions of the service. Finally I set it to v1. I defined an expression rate(istio_request_count{callme-service.default.svc.cluster.local}[1m]), which returns the per-second rate of requests received by callme-service.

istio-advanced-13

Testing

Before sending some test requests to caller-service we need to obtain its address on Kubernetes. After executing the following command you can see that it is available under the address http://192.168.99.100:32237/caller/ping.

istio-services-16

We have four possible scenarios. In the first, when we set the header x-version to v1, the request will always be routed to callme-service-v1.

istio-advanced-10

If the header x-version is not included in the request, the traffic will be split between callme-service-v1 …

istio-advanced-11

… and callme-service-v2.

istio-advanced-12

Finally, if we set the header x-version to v2, the request will always be routed to callme-service-v2.

istio-advanced-14

Conclusion

Using Istio you can easily create and apply both simple and more advanced traffic management rules to applications deployed on Kubernetes. You can also monitor metrics and traces through the integration of Istio with Zipkin, Prometheus and Grafana.

Mastering Spring Cloud

Let me share with you the result of my last couple months of work – the book published on 26th April by Packt. The book Mastering Spring Cloud is strictly linked to the topics frequently published on this blog – it describes how to build microservices using the Spring Cloud framework. I tried to write this book in the well-known style of this blog, where I focus on giving you practical samples of working code without unnecessary small-talk and scribbles 🙂 If you like my style of writing, and in addition you are interested in the Spring Cloud framework and microservices, this book is just for you 🙂

The book consists of fifteen chapters, where I guide you from the basic to the most advanced examples illustrating use cases for almost all projects being a part of Spring Cloud. While creating blog posts I do not always have time to go into all the details related to Spring Cloud. I’m trying to describe a lot of different, interesting trends and solutions in the area of Java development. The book describes many details related to the most important projects of Spring Cloud, like service discovery, distributed configuration, inter-service communication, security, logging, testing or continuous delivery. It is available on the http://www.packtpub.com site: https://www.packtpub.com/application-development/mastering-spring-cloud. A detailed description of all the topics raised in the book is available on that site.

Personally, I particularly recommend reading the following more advanced subjects described in the book:

  • Peer-to-peer replication between multiple instances of Eureka servers, and using zoning mechanism in inter-service communication
  • Automatically reloading configuration after changes with Spring Cloud Config push notifications mechanism based on Spring Cloud Bus
  • Advanced configuration of inter-service communication with Ribbon client-side load balancer and Feign client
  • Enabling SSL secure communication between microservices and basic elements of microservices-based architecture like service discovery or configuration server
  • Building messaging microservices based on the publish/subscribe communication model, including consumer grouping, partitioning and scaling with Spring Cloud Stream and message brokers (Apache Kafka, RabbitMQ)
  • Setting up continuous delivery for Spring Cloud microservices with Jenkins and Docker
  • Using Docker for running Spring Cloud microservices on Kubernetes platform simulated locally by Minikube
  • Deploying Spring Cloud microservices on cloud platforms like Pivotal Web Services (Pivotal Cloud Foundry hosted cloud solution) and Heroku

Those examples and many others are available together with this book. Finally, a short description taken from the packtpub.com site:

Developing, deploying, and operating cloud applications should be as easy as local applications. This should be the governing principle behind any cloud platform, library, or tool. Spring Cloud–an open-source library–makes it easy to develop JVM applications for the cloud. In this book, you will be introduced to Spring Cloud and will master its features from the application developer’s point of view.

Reactive Microservices with Spring WebFlux and Spring Cloud

I have already described Spring’s reactive support about one year ago in the article Reactive microservices with Spring 5. At that time the Spring WebFlux project was under active development, and now, after the official release of Spring 5, it is worth taking a look at its current version. Moreover, we will try to put our reactive microservices inside the Spring Cloud ecosystem, which contains such elements as service discovery with Eureka, load balancing with Spring Cloud Commons @LoadBalanced, and an API gateway using Spring Cloud Gateway (also based on WebFlux and Netty). We will also check out Spring’s reactive support for NoSQL databases using the example of the Spring Data Reactive Mongo project.

Here’s the figure that illustrates the architecture of our sample system consisting of two microservices, a discovery server, a gateway and MongoDB databases. The source code is as usual available on GitHub in the sample-spring-cloud-webflux repository.

reactive-1

Let’s describe the further steps on the way to create the system illustrated above.

Step 1. Building reactive application using Spring WebFlux

To enable the Spring WebFlux library for the project we should include the starter spring-boot-starter-webflux in the dependencies. It includes some dependent libraries like Reactor or the Netty server.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

The REST controller looks pretty similar to a controller defined for synchronous web services. The only difference is in the type of returned objects. Instead of a single object we return an instance of class Mono, and instead of a list we return an instance of class Flux. Thanks to Spring Data Reactive Mongo we don’t have to do anything more than call the needed method on the repository bean.

@RestController
public class AccountController {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountController.class);

	@Autowired
	private AccountRepository repository;

	@GetMapping("/customer/{customer}")
	public Flux<Account> findByCustomer(@PathVariable("customer") String customerId) {
		LOGGER.info("findByCustomer: customerId={}", customerId);
		return repository.findByCustomerId(customerId);
	}

	@GetMapping
	public Flux<Account> findAll() {
		LOGGER.info("findAll");
		return repository.findAll();
	}

	@GetMapping("/{id}")
	public Mono<Account> findById(@PathVariable("id") String id) {
		LOGGER.info("findById: id={}", id);
		return repository.findById(id);
	}

	@PostMapping
	public Mono<Account> create(@RequestBody Account account) {
		LOGGER.info("create: {}", account);
		return repository.save(account);
	}

}

Step 2. Integrating the application with a database using Spring Data Reactive Mongo

The implementation of integration between application and database is also very simple. First, we need to include starter spring-boot-starter-data-mongodb-reactive to the project dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
</dependency>

The support for reactive Mongo repositories is automatically enabled after including the starter. The next step is to declare an entity class with mapping annotations. The following class is also returned as the response by AccountController.

@Document
public class Account {

	@Id
	private String id;
	private String number;
	private String customerId;
	private int amount;

	...

}

Finally, we may create a repository interface that extends ReactiveCrudRepository. It follows the patterns implemented by Spring Data JPA and provides some basic methods for CRUD operations. It also allows you to define query methods whose names are automatically mapped to queries. The only difference in comparison with standard Spring Data JPA repositories is in the method signatures. The objects are wrapped in Mono and Flux.

public interface AccountRepository extends ReactiveCrudRepository<Account, String> {

	Flux<Account> findByCustomerId(String customerId);

}

In this example I used a Docker container for running MongoDB locally. Because I run Docker on Windows using Docker Toolbox, the default address of the Docker machine is 192.168.99.100. Here’s the configuration of the data source in the application.yml file.

spring:
  data:
    mongodb:
      uri: mongodb://192.168.99.100/test

Step 3. Enabling service discovery using Eureka

Integration with Spring Cloud Eureka is pretty much the same as for synchronous REST microservices. To enable the discovery client we should first include the starter spring-cloud-starter-netflix-eureka-client in the project dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

Then we have to enable it using @EnableDiscoveryClient annotation.

@SpringBootApplication
@EnableDiscoveryClient
public class AccountApplication {

	public static void main(String[] args) {
		SpringApplication.run(AccountApplication.class, args);
	}

}

The microservice will automatically register itself in Eureka. Of course, we may run more than one instance of each service. Here’s the screen illustrating the Eureka Dashboard (http://localhost:8761) after running two instances of account-service and a single instance of customer-service. I would not like to go into the details of running the application with an embedded Eureka server. You may refer to my previous article for details: Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. The Eureka server is available as the discovery-service module.

spring-reactive

Step 4. Inter-service communication between reactive microservices with WebClient

Inter-service communication is realized by the WebClient from the Spring WebFlux project. The same as for RestTemplate, you should annotate it with Spring Cloud Commons @LoadBalanced. It enables integration with service discovery and load balancing using the Netflix OSS Ribbon client. So, the first step is to declare a client builder bean with the @LoadBalanced annotation.

@Bean
@LoadBalanced
public WebClient.Builder loadBalancedWebClientBuilder() {
	return WebClient.builder();
}

Then we may inject WebClient.Builder into the REST controller. Communication with account-service is implemented inside GET /{id}/with-accounts, where first we search for the customer entity using the reactive Spring Data repository. It returns a Mono object, while the WebClient returns a Flux. Now, our main goal is to merge those two publishers and return a single Mono object with the list of accounts taken from the Flux, without blocking the stream. The following fragment of code illustrates how I used WebClient to communicate with the other microservice, and then merged the response and the result from the repository into a single Mono object. This merge can probably be done in a more elegant way, so feel free to create a pull request with your proposal.

@Autowired
private WebClient.Builder webClientBuilder;

@GetMapping("/{id}/with-accounts")
public Mono<Customer> findByIdWithAccounts(@PathVariable("id") String id) {
	LOGGER.info("findByIdWithAccounts: id={}", id);
	Flux<Account> accounts = webClientBuilder.build().get().uri("http://account-service/customer/{customer}", id).retrieve().bodyToFlux(Account.class);
	return accounts
			.collectList()
			.map(a -> new Customer(a))
			.mergeWith(repository.findById(id))
			.collectList()
			.map(CustomerMapper::map);
}
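
One slightly simpler variant combines the two publishers with zipWith. This is only a sketch of a possible refactoring — it assumes the Customer class exposes a setAccounts() method, which you can infer from the JSON returned in the testing step later on.

@GetMapping("/{id}/with-accounts")
public Mono<Customer> findByIdWithAccounts(@PathVariable("id") String id) {
	Flux<Account> accounts = webClientBuilder.build().get()
			.uri("http://account-service/customer/{customer}", id)
			.retrieve().bodyToFlux(Account.class);
	// zip the customer loaded from MongoDB with the collected list of accounts
	return repository.findById(id)
			.zipWith(accounts.collectList(), (customer, accountList) -> {
				customer.setAccounts(accountList);
				return customer;
			});
}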

Step 5. Building API gateway using Spring Cloud Gateway

Spring Cloud Gateway is one of the newest Spring Cloud projects. It is built on top of Spring WebFlux, and thanks to that we may use it as a gateway to our sample system based on reactive microservices. Similar to Spring WebFlux applications, it runs on an embedded Netty server. To enable it for a Spring Boot application just include the following dependency in your project.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>

We should also enable the discovery client in order to allow the gateway to fetch the list of registered microservices. However, there is no need to register the gateway’s application in Eureka. To disable registration you may set the property eureka.client.registerWithEureka to false inside the application.yml file.

@SpringBootApplication
@EnableDiscoveryClient
public class GatewayApplication {

	public static void main(String[] args) {
		SpringApplication.run(GatewayApplication.class, args);
	}

}

By default, Spring Cloud Gateway does not enable integration with service discovery. To enable it we should set the property spring.cloud.gateway.discovery.locator.enabled to true. Now, the last thing that should be done is the configuration of the routes. Spring Cloud Gateway provides two types of components that may be configured inside routes: filters and predicates. Predicates are used for matching HTTP requests with a route, while filters can be used to modify requests and responses before or after sending the downstream request. Here’s the full configuration of the gateway. It enables the service discovery locator, and defines two routes based on the entries in the service registry. We use the Path Route Predicate factory for matching the incoming requests, and the RewritePath GatewayFilter factory for modifying the requested path to adapt it to the format exposed by the downstream services (endpoints are exposed under the path /, while the gateway exposes them under the paths /account and /customer).

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
      - id: account-service
        uri: lb://account-service
        predicates:
        - Path=/account/**
        filters:
        - RewritePath=/account/(?<path>.*), /$\{path}
      - id: customer-service
        uri: lb://customer-service
        predicates:
        - Path=/customer/**
        filters:
        - RewritePath=/customer/(?<path>.*), /$\{path}

Step 6. Testing the sample system

Before making some tests let’s just recap our sample system. We have two microservices, account-service and customer-service, that use MongoDB as a database. The microservice customer-service calls the endpoint GET /customer/{customer} exposed by account-service. The URL of account-service is taken from Eureka. The whole sample system is hidden behind the gateway, which is available under the address localhost:8090.
Now, the first step is to run MongoDB in a Docker container. After executing the following command Mongo is available under the address 192.168.99.100:27017.

$ docker run -d --name mongo -p 27017:27017 mongo

Then we may proceed to running discovery-service. Eureka is available under its default address localhost:8761. You may run it using your IDE or just by executing the command java -jar target/discovery-service-1.0-SNAPSHOT.jar. The same rule applies to our sample microservices. However, account-service needs to run in two instances, so you need to override the default HTTP port when running the second instance using the -Dserver.port VM argument, for example java -jar -Dserver.port=2223 target/account-service-1.0-SNAPSHOT.jar. Finally, after running gateway-service we may add some test data.

$ curl --header "Content-Type: application/json" --request POST --data '{"firstName": "John","lastName": "Scott","age": 30}' http://localhost:8090/customer
{"id": "5aec1debfa656c0b38b952b4","firstName": "John","lastName": "Scott","age": 30,"accounts": null}
$ curl --header "Content-Type: application/json" --request POST --data '{"number": "1234567890","amount": 5000,"customerId": "5aec1debfa656c0b38b952b4"}' http://localhost:8090/account
{"id": "5aec1e86fa656c11d4c655fb","number": "1234567892","customerId": "5aec1debfa656c0b38b952b4","amount": 5000}
$ curl --header "Content-Type: application/json" --request POST --data '{"number": "1234567891","amount": 12000,"customerId": "5aec1debfa656c0b38b952b4"}' http://localhost:8090/account
{"id": "5aec1e91fa656c11d4c655fc","number": "1234567892","customerId": "5aec1debfa656c0b38b952b4","amount": 12000}
$ curl --header "Content-Type: application/json" --request POST --data '{"number": "1234567892","amount": 2000,"customerId": "5aec1debfa656c0b38b952b4"}' http://localhost:8090/account
{"id": "5aec1e99fa656c11d4c655fd","number": "1234567892","customerId": "5aec1debfa656c0b38b952b4","amount": 2000}

To test inter-service communication just call the endpoint GET /customer/{id}/with-accounts on gateway-service. It forwards the request to customer-service, and then customer-service calls the endpoint exposed by account-service using the reactive WebClient. The result is visible below.

reactive-2

Conclusion

Since Spring 5 and Spring Boot 2.0 there is a full range of available ways to build a microservices-based architecture. We can build a standard synchronous system using one-to-one communication with the Spring Cloud Netflix project, messaging microservices based on a message broker and the publish/subscribe communication model with Spring Cloud Stream, and finally asynchronous, reactive microservices with Spring WebFlux. The main goal of this article is to show you how to use Spring WebFlux together with Spring Cloud projects in order to provide mechanisms like service discovery, load balancing or an API gateway for reactive microservices built on top of Spring Boot. Before Spring 5 the lack of support for reactive microservices was one of the drawbacks of the Spring framework, but now with Spring WebFlux it is no longer the case. Not only that, we may leverage Spring’s reactive support for the most popular NoSQL databases like MongoDB or Cassandra, and easily place our reactive microservices inside one system together with synchronous REST microservices.

Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud

There are many articles on my blog about microservices with Spring Boot and Spring Cloud. The main purpose of this article is to provide a brief summary of the most important components provided by these frameworks that help you in creating microservices and, in fact, to explain what Spring Cloud is for in a microservices architecture. The topics covered in this article are:

  • Using Spring Boot 2 in cloud-native development
  • Providing service discovery for all microservices with Spring Cloud Netflix Eureka
  • Distributed configuration with Spring Cloud Config
  • API Gateway pattern using a new project inside Spring Cloud: Spring Cloud Gateway
  • Correlating logs with Spring Cloud Sleuth

Before we proceed to the source code, let’s take a look at the following diagram. It illustrates the architecture of our sample system. We have three independent microservices, which register themselves in service discovery, fetch properties from the configuration service and communicate with each other. The whole system is hidden behind an API gateway.

spring-cloud-1

Currently, the newest version of Spring Cloud is Finchley.M9. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.M9</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Now, let’s consider the further steps to be taken in order to create a working microservices-based system using Spring Cloud. We will begin with the Configuration Server.

The source code of sample applications presented in this article is available on GitHub in repository https://github.com/piomin/sample-spring-microservices-new.git.

Step 1. Building configuration server with Spring Cloud Config

To enable the Spring Cloud Config feature for an application, first include spring-cloud-config-server in your project dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-config-server</artifactId>
</dependency>

Then, to enable running an embedded configuration server during application boot, use the @EnableConfigServer annotation.

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

	public static void main(String[] args) {
		new SpringApplicationBuilder(ConfigApplication.class).run(args);
	}

}

By default Spring Cloud Config Server stores the configuration data inside a Git repository. This is a very good choice in production mode, but for the purpose of this sample the file system backend will be enough. It is really easy to start with the config server, because we can place all the properties on the classpath. Spring Cloud Config by default searches for property sources inside the following locations: classpath:/, classpath:/config, file:./, file:./config.

We place all the property sources inside src/main/resources/config. The YAML filename will be the same as the name of the service. For example, the YAML file for discovery-service will be located here: src/main/resources/config/discovery-service.yml.

And two last important things. If you would like to start the config server with the file system backend you have to activate the Spring Boot profile native. It may be achieved by setting the parameter --spring.profiles.active=native during application boot. I have also changed the default config server port (8888) to 8088 by setting the property server.port in the bootstrap.yml file.
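
If you prefer not to pass that parameter on the command line, the same profile can be activated programmatically when building the application context — just an equivalent alternative to the --spring.profiles.active=native argument shown above.

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

	public static void main(String[] args) {
		// activate the native (file system) backend profile programmatically
		new SpringApplicationBuilder(ConfigApplication.class).profiles("native").run(args);
	}

}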

Step 2. Building service discovery with Spring Cloud Netflix Eureka

So much for the configuration server. Now, all other applications, including discovery-service, need to add the spring-cloud-starter-config dependency in order to enable the config client. For discovery-service we also have to include the dependency spring-cloud-starter-netflix-eureka-server.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

Then you should enable running an embedded discovery server during application boot by placing the @EnableEurekaServer annotation on the main class.

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApplication {

	public static void main(String[] args) {
		new SpringApplicationBuilder(DiscoveryApplication.class).run(args);
	}

}

The application has to fetch its property sources from the configuration server. The minimal configuration required on the client side is the application name and the config server’s connection settings.

spring:
  application:
    name: discovery-service
  cloud:
    config:
      uri: http://localhost:8088

As I have already mentioned, the configuration file discovery-service.yml should be placed inside the config-service module. However, it is required to say a few words about the configuration visible below. We have changed the Eureka running port from the default value (8761) to 8061. For a standalone Eureka instance we have to disable registration and fetching the registry.

server:
  port: 8061

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

Now, when you are starting your application with embedded Eureka server you should see the following logs.

spring-cloud-2

Once you have successfully started the application you may visit the Eureka Dashboard available under the address http://localhost:8061/.

Step 3. Building microservice using Spring Boot and Spring Cloud

Our microservice has to perform some operations during boot. It needs to fetch configuration from config-service, register itself in discovery-service, expose an HTTP API and automatically generate API documentation. To enable all these mechanisms we need to include some dependencies in pom.xml. To enable the config client we should include the starter spring-cloud-starter-config. The discovery client will be enabled for the microservice after including spring-cloud-starter-netflix-eureka-client and annotating the main class with @EnableDiscoveryClient. To force the Spring Boot application to generate API documentation we should include the springfox-swagger2 dependency and add the @EnableSwagger2 annotation.

Here is the full list of dependencies defined for my sample microservice.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.8.0</version>
</dependency>

And here is the main class of application that enables Discovery Client and Swagger2 for the microservice.

@SpringBootApplication
@EnableDiscoveryClient
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}

	@Bean
	public Docket swaggerApi() {
		return new Docket(DocumentationType.SWAGGER_2)
			.select()
				.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.employee.controller"))
				.paths(PathSelectors.any())
			.build()
			.apiInfo(new ApiInfoBuilder().version("1.0").title("Employee API").description("Documentation Employee API v1.0").build());
	}

	...

}

The application has to fetch configuration from a remote server, so we only have to provide a bootstrap.yml file with the service name and the server URL. In fact, this is an example of the Config First Bootstrap approach, when an application first connects to a config server and takes the discovery server address from a remote property source. There is also Discovery First Bootstrap, where the config server address is fetched from a discovery server.

spring:
  application:
    name: employee-service
  cloud:
    config:
      uri: http://localhost:8088

There are not many configuration settings. Here’s the application’s configuration file stored on the remote server. It stores only the HTTP running port and the Eureka URL. However, I have also placed the file employee-service-instance2.yml on the remote config server. It sets a different HTTP port for the application, so you can easily run two instances of the same service locally based on remote properties. Now, you may run the second instance of employee-service on port 9090 after passing the argument spring.profiles.active=instance2 during application startup. With default settings you will start the microservice on port 8090.

server:
  port: 9090

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8061/eureka/

Here’s the code with the implementation of the REST controller class. It provides an implementation for adding a new employee and searching for employees using different filters.

@RestController
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

	@Autowired
	EmployeeRepository repository;

	@PostMapping
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.add(employee);
	}

	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") Long id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id);
	}

	@GetMapping
	public List<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}

	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartment(departmentId);
	}

	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganization(organizationId);
	}

}

Step 4. Communication between microservices with Spring Cloud Open Feign

Our first microservice has been created and started. Now, we will add other microservices that communicate with each other. The following diagram illustrates the communication flow between the three sample microservices: organization-service, department-service and employee-service. The microservice organization-service collects the list of departments with employees (GET /organization/{organizationId}/with-employees) or without employees (GET /organization/{organizationId}) from department-service, and the list of employees, without dividing them into departments, directly from employee-service. The microservice department-service is able to collect the list of employees assigned to a particular department.

spring-cloud-2

In the scenario described above both organization-service and department-service have to locate other microservices and communicate with them. That’s why we need to include an additional dependency for those modules: spring-cloud-starter-openfeign. Spring Cloud Open Feign is a declarative REST client that uses the Ribbon client-side load balancer in order to communicate with other microservices.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

The alternative solution to Open Feign is Spring RestTemplate with @LoadBalanced. However, Feign provides a more elegant way of defining a client, so I prefer it over RestTemplate (a RestTemplate-based sketch is shown below for comparison). After including the required dependency we should also enable Feign clients using the @EnableFeignClients annotation.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableSwagger2
public class OrganizationApplication {

	public static void main(String[] args) {
		SpringApplication.run(OrganizationApplication.class, args);
	}

	...

}

Now, we need to define the client interfaces. Because organization-service communicates with two other microservices we should create two interfaces, one per microservice. Every client interface should be annotated with @FeignClient. One field inside the annotation is required – name. This name should be the same as the name of the target service registered in service discovery. Here’s the interface of the client that calls the endpoint GET /organization/{organizationId} exposed by employee-service.

@FeignClient(name = "employee-service")
public interface EmployeeClient {

	@GetMapping("/organization/{organizationId}")
	List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId);

}
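
For comparison, here’s what the load-balanced RestTemplate alternative mentioned earlier might look like for the same call. It is only a sketch: the bean goes into a configuration class, and the helper method below is illustrative, not part of the sample repository.

@Bean
@LoadBalanced
public RestTemplate restTemplate() {
	return new RestTemplate();
}

public List<Employee> findEmployeesByOrganization(RestTemplate restTemplate, Long organizationId) {
	// the service name employee-service is resolved through Eureka by the Ribbon load balancer
	Employee[] employees = restTemplate.getForObject(
			"http://employee-service/organization/{organizationId}",
			Employee[].class, organizationId);
	return Arrays.asList(employees);
}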

The second client interface, available inside organization-service, calls two endpoints from department-service. The first of them, GET /organization/{organizationId}, returns the organization only with the list of available departments, while the second, GET /organization/{organizationId}/with-employees, returns the same set of data including the list of employees assigned to every department.

@FeignClient(name = "department-service")
public interface DepartmentClient {

	@GetMapping("/organization/{organizationId}")
	public List<Department> findByOrganization(@PathVariable("organizationId") Long organizationId);

	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId);

}
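
For completeness, the department-service side of these two endpoints could look more or less like the sketch below. It is only an illustration based on the descriptions above – the repository and the EmployeeClient used here are assumed to exist inside department-service (that client calls the GET /department/{departmentId} endpoint of employee-service).

@RestController
public class DepartmentController {

	@Autowired
	DepartmentRepository repository; // assumed in-memory repository of the sample project
	@Autowired
	EmployeeClient employeeClient; // assumed Feign client targeting employee-service

	// departments of the given organization, without employees
	@GetMapping("/organization/{organizationId}")
	public List<Department> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		return repository.findByOrganization(organizationId);
	}

	// departments of the given organization, enriched with employees fetched from employee-service
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		List<Department> departments = repository.findByOrganization(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}

}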

Finally, we have to inject the Feign client beans into the REST controller. Now, we may call the methods defined inside DepartmentClient and EmployeeClient, which is equivalent to calling the REST endpoints of the downstream services.

@RestController
public class OrganizationController {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrganizationController.class);

	@Autowired
	OrganizationRepository repository;
	@Autowired
	DepartmentClient departmentClient;
	@Autowired
	EmployeeClient employeeClient;

	...

	@GetMapping("/{id}")
	public Organization findById(@PathVariable("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		return repository.findById(id);
	}

	@GetMapping("/{id}/with-departments")
	public Organization findByIdWithDepartments(@PathVariable("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		Organization organization = repository.findById(id);
		organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
		return organization;
	}

	@GetMapping("/{id}/with-departments-and-employees")
	public Organization findByIdWithDepartmentsAndEmployees(@PathVariable("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		Organization organization = repository.findById(id);
		organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
		return organization;
	}

	@GetMapping("/{id}/with-employees")
	public Organization findByIdWithEmployees(@PathVariable("id") Long id) {
		LOGGER.info("Organization find: id={}", id);
		Organization organization = repository.findById(id);
		organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
		return organization;
	}

}

Step 5. Building API gateway using Spring Cloud Gateway

Spring Cloud Gateway is a relatively new Spring Cloud project. It is built on top of Spring Framework 5, Project Reactor and Spring Boot 2.0. It requires the Netty runtime provided by Spring Boot and Spring WebFlux. It is a really nice alternative to Spring Cloud Netflix Zuul, which until now has been the only Spring Cloud project providing an API gateway for microservices.

The API gateway is implemented inside the gateway-service module. First, we should include the spring-cloud-starter-gateway starter in the project dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>

We also need the discovery client enabled, because gateway-service integrates with Eureka in order to perform routing to the downstream services. The gateway will also expose the API specification of all the endpoints exposed by our sample microservices. That's why we enable Swagger2 on the gateway as well.

@SpringBootApplication
@EnableDiscoveryClient
@EnableSwagger2
public class GatewayApplication {

	public static void main(String[] args) {
		SpringApplication.run(GatewayApplication.class, args);
	}

}

Spring Cloud Gateway provides three basic components used for configuration: routes, predicates and filters. A route is the basic building block of the gateway. It contains a destination URI and a list of defined predicates and filters. A predicate is responsible for matching on anything from the incoming HTTP request, such as headers or parameters. A filter may modify the request and response before and after sending it to the downstream service. All these components may be set using configuration properties. We will create the file gateway-service.yml with the routes defined for our sample microservices and place it on the configuration server.

But first, we should enable integration with the discovery server for the routes by setting the property spring.cloud.gateway.discovery.locator.enabled to true. Then we may proceed to defining the route rules. We use the Path Route Predicate Factory for matching the incoming requests, and the RewritePath GatewayFilter Factory for modifying the requested path to adapt it to the format exposed by the downstream services. The uri parameter specifies the name of the target service registered in the discovery server. Let's take a look at the following routes definition. For example, in order to make organization-service available on the gateway under the path /organization/**, we define the predicate Path=/organization/**, and then strip the prefix /organization from the path, because the target service is exposed under the path /**. The address of the target service is fetched from Eureka based on the uri value lb://organization-service.

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
      - id: employee-service
        uri: lb://employee-service
        predicates:
        - Path=/employee/**
        filters:
        - RewritePath=/employee/(?<path>.*), /$\{path}
      - id: department-service
        uri: lb://department-service
        predicates:
        - Path=/department/**
        filters:
        - RewritePath=/department/(?<path>.*), /$\{path}
      - id: organization-service
        uri: lb://organization-service
        predicates:
        - Path=/organization/**
        filters:
        - RewritePath=/organization/(?<path>.*), /$\{path}
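
The same rules could also be defined programmatically. Here's a sketch, not used by the sample project (which relies on the YAML above), of an equivalent route declared with a RouteLocator bean for organization-service only.

@Bean
public RouteLocator gatewayRoutes(RouteLocatorBuilder builder) {
	return builder.routes()
		// match /organization/**, strip the prefix and forward to the service registered in Eureka
		.route("organization-service", r -> r.path("/organization/**")
			.filters(f -> f.rewritePath("/organization/(?<path>.*)", "/${path}"))
			.uri("lb://organization-service"))
		.build();
}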

Step 6. Enabling API specification on gateway using Swagger2

Every Spring Boot microservice that is annotated with @EnableSwagger2 exposes Swagger API documentation under the path /v2/api-docs. However, we would like to have that documentation located in a single place – on the API gateway. To achieve this we need to provide a bean implementing the SwaggerResourcesProvider interface inside the gateway-service module. That bean is responsible for defining the list of locations of Swagger resources which should be displayed by the application. Ideally, such an implementation would take the required locations from service discovery based on the Spring Cloud Gateway configuration properties.

Unfortunately, SpringFox Swagger still does not provide support for Spring WebFlux. It means that if you include the SpringFox Swagger dependencies in the project, the application will fail to start. I hope the support for WebFlux will be available soon, but for now we have to use Spring Cloud Netflix Zuul as a gateway if we would like to run embedded Swagger2 on it.

I created the module proxy-service, an alternative API gateway based on Netflix Zuul, next to gateway-service based on Spring Cloud Gateway. Here's a bean with the SwaggerResourcesProvider implementation available inside proxy-service. It uses the ZuulProperties bean to dynamically load the routes definition into the bean.

@Configuration
public class ProxyApi {

	@Autowired
	ZuulProperties properties;

	@Primary
	@Bean
	public SwaggerResourcesProvider swaggerResourcesProvider() {
		return () -> {
			List<SwaggerResource> resources = new ArrayList<>();
			properties.getRoutes().values().stream()
					.forEach(route -> resources.add(createResource(route.getServiceId(), route.getId(), "2.0")));
			return resources;
		};
	}

	private SwaggerResource createResource(String name, String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(name);
		swaggerResource.setLocation("/" + location + "/v2/api-docs");
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}

Here’s Swagger UI for our sample microservices system available under address http://localhost:8060/swagger-ui.html.

spring-cloud-3

Step 7. Running applications

Let’s take a look on the architecture of our system visible on the following diagram. We will discuss it from the organization-service point of view. After starting organization-service connects to config-service available under address localhost:8088 (1). Basing on remote configuration settings it is able to register itself in Eureka (2). When the endpoint of organization-service is invoked by external client via gateway (3) available under address localhost:8060, the request is forwarded to instance of organization-service basing on entries from service discovery (4). Then organization-service lookup for address of department-service in Eureka (5), and call its endpoint (6). Finally department-service calls endpont from employee-service. The request as load balanced between two available instance of employee-service by Ribbon (7).

spring-cloud-3

Let’s take a look on the Eureka Dashboard available under address http://localhost:8061. There are four instances of microservices registered there: a single instance of organization-service and department-service, and two instances of employee-service.

spring-cloud-4

Now, let’s call endpoint http://localhost:8060/organization/1/with-departments-and-employees.

spring-cloud-5

Step 8. Correlating logs between independent microservices using Spring Cloud Sleuth

Correlating logs between different microservices using Spring Cloud Sleuth is very easy. In fact, the only thing you have to do is to add the spring-cloud-starter-sleuth starter to the dependencies of every single microservice and the gateway.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

For clarity we will change the default log format a little to: %d{yyyy-MM-dd HH:mm:ss} ${LOG_LEVEL_PATTERN:-%5p} %m%n. Here are the logs generated by our three sample microservices. There are four entries inside the brackets [] generated by Spring Cloud Sleuth. The most important for us is the second entry, which indicates the traceId that is set once per incoming HTTP request on the edge of the system.

spring-cloud-7

spring-cloud-6

spring-cloud-8

Deploying Spring Cloud Microservices on Hashicorp’s Nomad

Nomad is a little less popular HashiCorp cloud product than Consul, Terraform or Vault. It is also not as popular as competing software like Kubernetes and Docker Swarm. However, it has its advantages. While Kubernetes is specifically focused on Docker, Nomad is more general purpose. It supports containerized Docker applications as well as simple applications delivered as executable JAR files. Besides that, Nomad is architecturally much simpler. It is a single binary, both for clients and servers, and does not require any external services for coordination or storage.

In this article I’m going to show you how to install, configure and use Nomad in order to run on it some microservices created in Spring Boot and Spring Cloud frameworks. Let’s move on.

Step 1. Installing and running Nomad

HashiCorp’s Nomad can be easily started on Windows. You just have to download it from the following site https://www.nomadproject.io/downloads.html, and then add nomad.exe file to your PATH. Now you are able to run Nomad commands from your command-line. Let’s begin from starting Nomad agent. For simplicity, we will run it in development mode (-dev). With this option it is acting both as a client and a server.  Here’s command that starts Nomad agent on my local machine.

nomad agent -dev -network-interface="WiFi" -consul-address=192.168.99.100:8500

Sometimes you may be required to pass the selected network interface as a parameter. We also need to integrate the agent node with Consul discovery for the purpose of inter-service communication discussed in the next part of this article. The most suitable way to run Consul on your local machine is through a Docker container. Here's the command that launches a single-node Consul discovery server and exposes it on port 8500. If you run Docker on Windows it is probably available under the address 192.168.99.100.

docker run -d --name consul -p 8500:8500 consul

Step 2. Creating job

Nomad is a tool for managing a cluster of machines and running applications on them. To run an application there we should first create a job. A job is the primary configuration unit that users interact with when using Nomad. It is a specification of tasks that should be run by Nomad. A job consists of multiple groups, and each group may have multiple tasks.

There are some properties that have to be provided, for example datacenters. You should also set the type parameter that indicates the scheduler type. I set the type service, which is designed for scheduling long-lived services that should never go down, like an application exposing an HTTP API.

Let’s take a look on Nomad’s job descriptor file. The most important elements of that configuration has been marked by the sequence numbers:

  1. Property count specifies the number of instances of the task group that should be running. In practice it scales up the number of instances of the service started by the task. Here, it has been set to 2.
  2. Property driver specifies the driver that should be used by Nomad clients to run the task. The driver name corresponds to a technology used for running the application. For example we can set docker, rkt for containerization solutions or java for executing Java applications packaged into a Java JAR file. Here, the property has been set to java.
  3. After settings the driver we should provide some configuration for this driver in the job spec. There are some options available for java driver. But I decided to set the absolute path to the downloaded JAR and some JVM options related to the memory limits.
  4. We may set some requirements for the task including memory, network, CPU, and more. Our task requires at most 300 MB of RAM, and enables dynamic port allocation for the port labeled "http".
  5. Now, it is required to point out a very important thing. When the task is started, it is passed an additional environment variable named NOMAD_HOST_PORT_http, which indicates the host port that the HTTP service is bound to. The suffix http relates to the label set for the port.
  6. The service block inside the task specifies the integration with Consul for service discovery. Nomad automatically registers a task with the provided name when the task is started and de-registers it when the task dies. As you probably remember, the port number is generated automatically by Nomad. However, I passed the label http to force Nomad to register the automatically generated port in Consul.

job "caller-service" {
	datacenters = ["dc1"]
	type = "service"
	group "caller" {
		count = 2 # (1)
		task "api" {
			driver = "java" # (2)
			config { # (3)
				jar_path    = "C:\\Users\\minkowp\\git\\sample-nomad-java-services\\caller-service\\target\\caller-service-1.0.0-SNAPSHOT.jar"
				jvm_options = ["-Xmx256m", "-Xms128m"]
			}
			resources { # (4)
				cpu    = 500
				memory = 300
				network {
					port "http" {} # (5)
				}
			}
			service { # (6)
				name = "caller-service"
				port = "http"
			}
		}
		restart {
			attempts = 1
		}
	}
}

Once we have saved the content visible above as the job.nomad file, we may apply it to the Nomad node by executing the following command.

nomad job run job.nomad

Step 3. Building sample microservices

The source code of the sample applications is available on GitHub in my repository sample-nomad-java-services. There are two simple microservices: callme-service and caller-service. I have already used these samples in previous articles to show inter-service communication mechanisms. Microservice callme-service does nothing more than exposing the endpoint GET /callme/ping that displays the service's name and version.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		return buildProperties.getName() + ":" + buildProperties.getVersion();
	}

}

The implementation of the caller-service endpoint is a little bit more complicated. First we have to connect our service with Consul in order to fetch the list of registered instances of callme-service. Because we use Spring Boot for creating the sample microservices, the most suitable way to enable a Consul client is through the Spring Cloud Consul library.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

We should override the auto-configured connection settings in application.yml. In addition to the host and port properties, we have also set the spring.cloud.consul.discovery.register property to false. We don't want the discovery client to register the application in Consul after startup, because that has already been done by Nomad.

spring:
  application:
    name: caller-service
  cloud:
    consul:
      host: 192.168.99.100
      port: 8500
      discovery:
        register: false

Then we should enable Spring Cloud discovery client and RestTemplate load balancer in the main class of application.

@SpringBootApplication
@EnableDiscoveryClient
public class CallerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallerApplication.class, args);
	}

	@Bean
	@LoadBalanced
	RestTemplate restTemplate() {
		return new RestTemplate();
	}

}

Finally, we can implement method GET /caller/ping that call endpoint exposed by callme-service.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		String response = restTemplate.getForObject("http://callme-service/callme/ping", String.class);
		LOGGER.info("Calling: response={}", response);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response;
	}

}

As you probably remember the port of application is automatically generated by Nomad during task execution. It passes an additional environment variable named NOMAD_HOST_PORT_http to the application. Now, this environment variable should be configured inside application.yml file as the value of server.port property.

server:
  port: ${NOMAD_HOST_PORT_http:8090}

The last step is to build the whole project sample-nomad-java-services with mvn clean install command.

Step 4. Using Nomad web console

During the two previous steps we have created, built and deployed our sample applications on Nomad. Now, we should verify the installation. We can do it using the CLI or by visiting the web console provided by Nomad. The web console is available under the address http://localhost:4646.

On the main page of the web console we may see a summary of existing jobs. If everything goes fine, the Status field is equal to running and the Summary bar is green.

nomad-1

We can display the details of every job in the list. The next screen shows the history of the job, reserved resources and number of running instances (tasks).

nomad-2

If you would like to check out the details related to the single task, you should navigate to Task Group details.

nomad-3

We may also display the details related to the client node.

nomad-4

To display the details of an allocation, select its row in the table. You will be redirected to the following page, where you may check out the IP address of the application instance.

nomad-5

Step 5. Testing a sample system

Assuming you have successfully deployed the applications on Nomad, you should see the following services registered in Consul.

nomad-6

Now, if you call one of the two available instances of caller-service, you should see the following response. The address of the callme-service instance has been successfully fetched from Consul through the Spring Cloud Consul client.

nomad-7

Service Mesh with Istio on Kubernetes in 5 steps

In this article I’m going to show you some basic and more advanced samples that illustrate how to use Istio platform in order to provide communication between microservices deployed on Kubernetes. Following the description on Istio website it is:

An open platform to connect, manage, and secure microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.

Istio provides mechanisms for traffic management like request routing, discovery, load balancing, handling failures and fault injection. Additionally, you may enable istio-auth, which provides RBAC (Role-Based Access Control) and Mutual TLS Authentication. In this article we will discuss only the traffic management mechanisms.

Step 1. Installing Istio on Minikube platform

The most comfortable way to test Istio locally on Kubernetes is through Minikube. I have already described how to configure Minikube on your local machine in this article: Microservices with Kubernetes and Docker. When installing Istio on Minikube we should first enable some Minikube’s plugins during startup.

minikube start --extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/var/lib/localkube/certs/ca.key" --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

Istio is installed in a dedicated namespace called istio-system, but it is able to manage services from all other namespaces. First, you should go to the release page and download the installation file corresponding to your OS. For me it is Windows, and all the next steps will be described with the assumption that we are using exactly this OS. After running Minikube it would be useful to enable Docker on Minikube's VM. Thanks to that you will be able to execute docker commands.

@FOR /f "tokens=* delims=^L" %i IN ('minikube docker-env') DO @call %i

Now, extract Istio files to your local filesystem. File istioctl.exe, which is available under ${ISTIO_HOME}/bin directory should be added to your PATH. Istio contains some installation files for Kubernetes platform in ${ISTIO_HOME}/install/kubernetes. To install Istio’s core components on Minikube just apply the following YAML definition file.

kubectl apply -f install/kubernetes/istio.yaml

Now, you have Istio’s core components deployed on your Minikube instance. These components are:

Envoy – an open-source edge and service proxy designed for cloud-native applications. Istio uses an extended version of the Envoy proxy. If you are interested in more details about Envoy and microservices, read my article Envoy Proxy with Microservices, which describes how to integrate an Envoy gateway with service discovery.

Mixer – it is a platform-independent component responsible for enforcing access control and usage policies across the service mesh.

Pilot – it provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing and resiliency.

The configuration provided inside istio.yaml definition file deploys some pods and services related to the components mentioned above. You can verify the installation using kubectl command or just by visiting Web Dashboard available after executing command minikube dashboard.

istio-2

Step 2. Building sample applications based on Spring Boot

Before we start configuring any traffic rules with Istio, we need to create sample applications that will communicate with each other. These are really simple services. The source code of these applications is available on my GitHub account inside the repository sample-istio-services. There are two services: caller-service and callme-service. Both of them expose the endpoint ping, which prints the application's name and version. Both of these values are taken from the Spring Boot build-info file, which is generated during the application build. Here's the implementation of the endpoint GET /callme/ping.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		return buildProperties.getName() + ":" + buildProperties.getVersion();
	}

}

And here’s implementation of endpoint GET /caller/ping. It calls GET /callme/ping endpoint using Spring RestTemplate. We are assuming that callme-service is available under address callme-service:8091 on Kubernetes. This service is will be exposed inside Minikube node under port 8091.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		String response = restTemplate.getForObject("http://callme-service:8091/callme/ping", String.class);
		LOGGER.info("Calling: response={}", response);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response;
	}

}

The sample applications have to be started in Docker containers. Here's the Dockerfile that is responsible for building the image with the caller-service application.

FROM openjdk:8-jre-alpine
ENV APP_FILE caller-service-1.0.0-SNAPSHOT.jar
ENV APP_HOME /usr/app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

A similar Dockerfile is available for callme-service. Now, the only thing we have to do is to build the Docker images.

docker build -t piomin/callme-service:1.0 .
docker build -t piomin/caller-service:1.0 .

There is also version 2.0.0-SNAPSHOT of callme-service available in the branch v2. Switch to this branch, build the whole application, and then build the Docker image with the 2.0 tag. Why do we need version 2.0? I'll describe it in the next section.

docker build -t piomin/callme-service:2.0 .

Step 3. Deploying sample applications on Minikube

Before we start deploying our applications on Minikube, let's take a look at the sample system architecture visible on the following diagram. We are going to deploy callme-service in two versions: 1.0 and 2.0. The application caller-service is just calling callme-service, so it does not know anything about the different versions of the target service. If we would like to route traffic between the two versions of callme-service in the proportions 20% to 80%, we have to configure the proper Istio route rule. And one more thing: because Istio Ingress is not supported on Minikube, we will just use a Kubernetes Service. If we need to expose it outside the Minikube cluster, we should set its type to NodePort.

istio-1

Let’s proceed to the deployment phase. Here’s deployment definition for callme-service in version 1.0.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: NodePort
  ports:
  - port: 8091
    name: http
  selector:
    app: callme-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8091

Before deploying it on Minikube we have to inject some Istio properties. The command visible below prints a new version of deployment definition enriched with Istio configuration. We may copy it and save as deployment-with-istio.yaml file.

istioctl kube-inject -f deployment.yaml

Now, let’s apply the configuration to Kubernetes.

kubectl apply -f deployment-with-istio.yaml

The same steps should be performed for caller-service, and also for version 2.0 of callme-service. All YAML configuration files are committed together with the applications, and are located in the root directory of every application's module. If you have successfully deployed all the required components, you should see the following elements in your Minikube dashboard.

istio-3

Step 4. Applying Istio routing rules

Istio provides a simple domain-specific language (DSL) that allows you to configure some interesting rules that control how requests are routed within your service mesh. I'm going to show you the following rules:

  • Splitting traffic between different service versions
  • Injecting a delay into the request path
  • Injecting an HTTP error as a response from the service

Here’s sample route rule definition for callme-service. It splits traffic in proportions 20:80 between versions 1.0 and 2.0 of the service. It also adds 3 seconds delay in 10% of the requests, and returns an HTTP 500 error code for 10% of the requests.

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service
spec:
  destination:
    name: callme-service
  route:
  - labels:
      version: v1
    weight: 20
  - labels:
      version: v2
    weight: 80
  httpFault:
    delay:
      percent: 10
      fixedDelay: 3s
    abort:
      percent: 10
      httpStatus: 500

Let’s apply a new route rule to Kubernetes.

kubectl apply -f routerule.yaml

Now, we can easily verify that rule by executing command istioctl get routerule.

istio-6

Step 5. Testing the solution

Before we start testing let’s deploy Zipkin on Minikube. Istio provides deployment definition file zipkin.yaml inside directory ${ISTIO_HOME}/install/kubernetes/addons.

kubectl apply -f zipkin.yaml

Let’s take a look on the list of services deployed on Minikube. API provided by application caller-service is available under port 30873.

istio-4

We may easily test the service from a web browser by calling the URL http://192.168.99.100:30873/caller/ping. It prints the name and version of the service, and also the name and version of callme-service invoked by caller-service. Because 80% of the traffic is routed to version 2.0 of callme-service, you will probably see the following response.

istio-7

However, sometimes version 1.0 of callme-service may be called…

istio-8

… or Istio can simulate HTTP 500 code.

istio-9

You can easily analyze traffic statistics with Zipkin console.

istio-10

Or just take a look on the logs generated by pods.

istio-11

Apache Ignite Cluster together with Spring Boot

I have already introduced Apache Ignite in one of my previous articles, In-memory data grid with Apache Ignite. Apache Ignite can be easily launched locally together with a Spring Boot application. The only thing we have to do is to include the artifact org.apache.ignite:ignite-spring-data in the project dependencies and then declare an Ignite instance @Bean. A sample @Bean declaration is visible below.

@Bean
public Ignite igniteInstance() {
	IgniteConfiguration cfg = new IgniteConfiguration();
	cfg.setIgniteInstanceName("ignite-cluster-node");
	CacheConfiguration ccfg1 = new CacheConfiguration("PersonCache");
	ccfg1.setIndexedTypes(Long.class, Person.class);
	CacheConfiguration ccfg2 = new CacheConfiguration("ContactCache");
	ccfg2.setIndexedTypes(Long.class, Contact.class);
	cfg.setCacheConfiguration(ccfg1, ccfg2);
	IgniteLogger log = new Slf4jLogger();
	cfg.setGridLogger(log);
	return Ignition.start(cfg);
}

In this article I would like to show you a little more advanced sample, where we will start multiple Ignite nodes inside a cluster, Ignite's web console for monitoring the cluster, and Ignite's web agent for providing communication between the cluster nodes and the web console. Let's begin by looking at the picture with the architecture of our sample solution.

ignite-2-1

We have three nodes which are part of the cluster. If you carefully take a look at the picture illustrating the architecture, you have probably noticed that there are two nodes called Server Node, and one called Client Node. By default, all Ignite nodes are started as server nodes. Client mode needs to be explicitly enabled. Server nodes participate in caching, compute execution and stream processing, while client nodes provide the ability to connect to the servers remotely. However, they allow using the whole set of Ignite APIs, including near caching, transactions, compute and streaming.

Here’s Ignite’s client instance @Bean declaration.

@Bean
public Ignite igniteInstance() {
	IgniteConfiguration cfg = new IgniteConfiguration();
	cfg.setIgniteInstanceName("ignite-cluster-node");
	cfg.setClientMode(true);
	CacheConfiguration ccfg1 = new CacheConfiguration("PersonCache");
	ccfg1.setIndexedTypes(Long.class, Person.class);
	CacheConfiguration ccfg2 = new CacheConfiguration("ContactCache");
	ccfg2.setIndexedTypes(Long.class, Contact.class);
	cfg.setCacheConfiguration(ccfg1, ccfg2);
	return Ignition.start(cfg);
}

The fact is that we don’t have to do anything more to make our nodes working together within the cluster. Every new node is automatically detected by all other cluster’s nodes using multicast communication. When starting our sample application we only have to guarantee that each instance’s server would listen of different port by overriding server.port Spring Boot property. Here’s command that starts the sample application, which is available on GitHub (https://github.com/piomin/sample-ignite-jpa.git) under branch cluster (https://github.com/piomin/sample-ignite-jpa/tree/cluster). Each node exposes the same REST API, which may be easily tested using Swagger2 just by opening its dashboard available under address http://localhost:port/swagger-ui.html.

java -jar -Dserver.port=8901 -Xms512m -Xmx1024m -XX:+UseG1GC -XX:+DisableExplicitGC -XX:MaxDirectMemorySize=256m target/ignite-rest-service-1.0-SNAPSHOT.jar

If you have successfully started a new node you should see the similar information in your application logs.

>>> +----------------------------------------------------------------------+
>>> Ignite ver. 2.4.0#20180305-sha1:aa342270b13cc1f4713382a8eb23b2eb7edaa3a5
>>> +----------------------------------------------------------------------+
>>> OS name: Windows 10 10.0 amd64
>>> CPU(s): 4
>>> Heap: 1.0GB
>>> VM name: 14132@piomin
>>> Ignite instance name: ignite-cluster-node
>>> Local node [ID=9DB1296A-7EEC-4564-BAAD-14E5D4A3A08D, order=2, clientMode=false]
>>> Local node addresses: [piomin/0:0:0:0:0:0:0:1, piomin/127.0.0.1, piomin/192.168.1.102, piomin/192.168.116.1, /192.168.226.1, /192.168.99.1]
>>> Local ports: TCP:8082 TCP:10801 TCP:11212 TCP:47101 UDP:47400 TCP:47501
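
The automatic node detection mentioned above relies on Ignite's default multicast discovery. If you ever need to tune it, the discovery SPI can be configured explicitly on the IgniteConfiguration – a sketch only, not required by the sample project, which relies on the defaults.

@Bean
public Ignite igniteInstance() {
	IgniteConfiguration cfg = new IgniteConfiguration();
	cfg.setIgniteInstanceName("ignite-cluster-node");
	// multicast IP finder is the default; shown here only to illustrate where it can be tuned
	TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
	// ipFinder.setMulticastGroup("228.10.10.157"); // optional custom multicast group
	TcpDiscoverySpi discovery = new TcpDiscoverySpi();
	discovery.setIpFinder(ipFinder);
	cfg.setDiscoverySpi(discovery);
	return Ignition.start(cfg);
}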

Let’s move back for a moment to the source code of our sample application. I assume you have already cloned a given repository from GitHub. There are two Maven modules available. The module ignite-rest-service is responsible for starting Ignite’s cluster node in server mode, while ignite-client-service for starting node in client mode. Because we run only a single instance of client’s node, we would not override its default port set inside application.yml file. You can build the project using mvn clean install command and then start with java -jar or just run the main class IgniteClientApplication from your IDE.

There is also a JUnit test class inside the module ignite-client-service, which defines one test responsible for calling the HTTP endpoints (POST /person, POST /contact) that put data into Ignite's cache. This test performs two operations. It puts some data into the Ignite in-memory cluster by calling the endpoints exposed by the client node, and then checks if that data has been propagated through the cluster by calling the GET /person/{id}/withContacts endpoint exposed by one of the selected server nodes.

public class TestCluster {

	TestRestTemplate template = new TestRestTemplate();
	Random r = new Random();
	int[] clusterPorts = new int[] {8901, 8902};

	@Test
	public void testCluster() throws InterruptedException {
		for (int i=0; i<1000; i++) {
			Person p = template.postForObject("http://localhost:8090/person", createPerson(), Person.class);
			Assert.notNull(p, "Create person failed");
			Contact c1 = template.postForObject("http://localhost:8090/contact", createContact(p.getId(), 0), Contact.class);
			Assert.notNull(c1, "Create contact failed");
			Contact c2 = template.postForObject("http://localhost:8090/contact", createContact(p.getId(), 1), Contact.class);
			Assert.notNull(c2, "Create contact failed");
			Thread.sleep(10);
			Person result = template.getForObject("http://localhost:{port}/person/{id}/withContacts", Person.class, clusterPorts[r.nextInt(2)], p.getId());
			Assert.notNull(result, "Person not found");
			Assert.notEmpty(result.getContacts(), "Contacts not found");
		}
	}

	private Contact createContact(Long personId, int index) {
		...
	}

	private Person createPerson() {
		...
	}

}

Before running any tests, we should launch two additional elements that are part of our architecture: Ignite's web console and web agent. The most suitable way to run Ignite's web console on the local machine is through its Docker image apacheignite/web-console-standalone. Here's the Docker command that starts Ignite's web console and exposes it on port 80. Because I run Docker on Windows, it is available under the default VM address http://192.168.99.100/.

docker run -d -p 80:80 -p 3001:3001 -v /var/data:/var/lib/mongodb --name ignite-web-console apacheignite/web-console-standalone

In order to access it you should first register your user. Although a mail server is not available on the Docker container, you will be logged in after registration. You can configure your cluster using Ignite's web console, and also run some SQL queries on that cluster. Of course, we still need to connect our cluster consisting of three nodes with the instance of the web console started in the Docker container. To achieve this you have to download the web agent. It is probably not very intuitive, but you have to click the Start Demo button, which is located in the right corner of Ignite's web console. Then you will be redirected to the download page, where you can download the ignite-web-agent-2.4.0.zip file, which contains all the needed libraries and configuration to start the web agent locally.

ignite-2-2

After downloading and unpacking the web agent, go to its main directory and change the property server-uri to http://192.168.99.100 inside the default.properties file. Then you may run the script ignite-web-agent.bat (or .sh if you are testing it on Linux), which starts the web agent. Unfortunately, that's not all that has to be done. Every server node's application should include the artifact ignite-rest-http in order to be able to communicate with the agent. It is responsible for exposing the HTTP endpoint that is accessed by the web agent. It is based on the Jetty server, which causes some problems in conjunction with Spring Boot. Spring Boot sets default versions of the Jetty libraries used inside the project. The problem is that ignite-rest-http requires older versions of those libraries, so we also have to override some default managed versions in the pom.xml file according to the sample visible below.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-http</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-server</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-io</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-continuation</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-util</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
		<dependency>
			<groupId>org.eclipse.jetty</groupId>
			<artifactId>jetty-xml</artifactId>
			<version>9.2.11.v20150529</version>
		</dependency>
	</dependencies>
</dependencyManagement>

After implementing the changes described above, we may finally proceed to running all the elements that are part of our sample system. If you start the Ignite web agent locally, it should automatically detect all running cluster nodes. Here's the screen with the logs displayed by the agent after startup.

ignite-2-3

At the same time you should see that a new cluster has been detected by Ignite Web Console.

ignite-2-4

You can configure a new or a currently existing cluster using the web console, or just run a test query on the selected managed cluster. You have to include the name of the cache as a prefix to the table name when defining a query, for example SELECT * FROM "PersonCache".Person.

ignite-2-5

Similar queries have been declared inside the repository interface. Here are the additional methods used for finding entities stored in PersonCache. If you would like to include results stored in another cache, you have to explicitly declare its name together with the table name.

@RepositoryConfig(cacheName = "PersonCache")
public interface PersonRepository extends IgniteRepository<Person, Long> {

	List<Person> findByFirstNameAndLastName(String firstName, String lastName);

	@Query("SELECT p.id, p.firstName, p.lastName, c.id, c.type, c.location FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.id=?")
	List<List> findByIdWithContacts(Long id);

	@Query("SELECT c.* FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.firstName=? and p.lastName=?")
	List<Contact> selectContacts(String firstName, String lastName);

	@Query("SELECT p.id, p.firstName, p.lastName, c.id, c.type, c.location FROM Person p JOIN \"ContactCache\".Contact c ON p.id=c.personId WHERE p.firstName=? and p.lastName=?")
	List<List> selectContacts2(String firstName, String lastName);
}

We are nearing the end. Now, let's run our JUnit test TestCluster in order to generate some test data and put it into the clustered cache. You can monitor the size of a cache using the web console. All you have to do is to run a SELECT COUNT(*) query and set the graph mode as the default mode for displaying results. The chart visible below illustrates the number of entities stored inside the Ignite cluster, sampled at 5-second intervals.

ignite-2-6

Versioning REST API with Spring Boot and Swagger

One thing's for sure. If you don't have to version your API, do not try to do that. However, sometimes you have to. Many of the most popular services like Twitter, Facebook, Netflix or PayPal version their REST APIs. The advantages and disadvantages of that approach are obvious. On the one hand you don't have to worry about making changes in your API even if many external clients and applications consume it. But on the other hand, you have to maintain different versions of the API implementation in your code, which sometimes may be troublesome.

In this article I’m going to show you how to maintain the several versions of REST API in your application in the most comfortable way. We will base on the sample application written on the top of Spring Boot framework and exposing API documentation using Swagger and SpringFox libraries.

Spring Boot does not provide any dedicated solution for versioning APIs. The situation is different for the SpringFox Swagger2 library, which provides a grouping mechanism from version 2.8.0 that is perfect for generating documentation of a versioned REST API.

I have already introduced Swagger2 together with Spring Boot application in one of my previous posts. In the article Microservices API Documentation with Swagger2 you may read how to use Swagger2 for generating API documentation for all the independent microservices and publishing it in one place – on API Gateway.

Different approaches to API versioning

There are some different ways to provide an API versioning in your application. The most popular of them are:

  1. Through the URI path – you include the version number in the URL path of the endpoint, for example /api/v1/persons
  2. Through query parameters – you pass the version number as a query parameter with a specified name, for example /api/persons?version=1
  3. Through custom HTTP headers – you define a new header that contains the version number in the request
  4. Through content negotiation – the version number is included in the "Accept" header together with the accepted content type. A request with cURL would look like the following sample: curl -H "Accept: application/vnd.piomin.v1+json" http://localhost:8080/api/persons

The decision which of these approaches to implement in your application is up to you. We could discuss the advantages and disadvantages of every single approach, however that is not the main purpose of this article. The main purpose is to show you how to implement versioning in a Spring Boot application and then publish the API documentation automatically using Swagger2. The sample application source code is available on GitHub (https://github.com/piomin/sample-api-versioning.git). I have implemented two of the approaches described above – points 1 and 4.

Enabling Swagger for Spring Boot

Swagger2 can be enabled in a Spring Boot application by including the SpringFox library. In fact, this is a suite of Java libraries used for automating the generation of machine- and human-readable specifications for JSON APIs written using the Spring Framework. It supports formats such as Swagger, RAML and JSON API. To enable it for your application, include the following Maven dependencies in the project: io.springfox:springfox-swagger-ui, io.springfox:springfox-swagger2, io.springfox:springfox-spring-web. Then you will have to annotate the main class with @EnableSwagger2 and define a Docket object. Docket is SpringFox's primary configuration mechanism for Swagger 2.0. We will discuss the details about it in the next sections along with the sample for each way of versioning the API.
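
For reference, a minimal, non-versioned configuration could look like the sketch below (the versioned Docket beans actually used by the sample application are shown in the following sections).

@Configuration
@EnableSwagger2
public class SwaggerConfig {

	@Bean
	public Docket api() {
		return new Docket(DocumentationType.SWAGGER_2)
			.select()
				// scan only the controllers of the sample application
				.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
				.paths(PathSelectors.any())
			.build();
	}

}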

Sample API

Our sample API is very simple. It exposes basic CRUD methods for the Person entity. There are three versions of the API available for external clients: 1.0, 1.1 and 1.2. In version 1.1 I have changed the method for updating the Person entity. In version 1.0 it was available under the /person path, while now it is available under the /person/{id} path. This is the only difference between versions 1.0 and 1.1. There is also only one difference in the API between versions 1.1 and 1.2. Instead of the field birthDate it returns age as an integer parameter. This change affects all the endpoints except DELETE /person/{id}. Now, let's proceed to the implementation.
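
The controllers shown below operate on two model classes and a mapper. Their exact implementation is in the repository; roughly, they could look like this sketch, with the field lists assumed from the description above.

public class PersonOld {
	private Long id;
	private String firstName;
	private String lastName;
	private LocalDate birthDate; // exposed by versions 1.0 and 1.1
	// getters and setters omitted
}

public class PersonCurrent {
	private Long id;
	private String firstName;
	private String lastName;
	private int age; // version 1.2 returns the age instead of the birth date
	// getters and setters omitted
}

@Component
public class PersonMapper {

	public PersonCurrent map(PersonOld old) {
		PersonCurrent person = new PersonCurrent();
		person.setId(old.getId());
		person.setFirstName(old.getFirstName());
		person.setLastName(old.getLastName());
		// derive the age from the birth date stored in the old model
		person.setAge(Period.between(old.getBirthDate(), LocalDate.now()).getYears());
		return person;
	}

}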

Versioning using URI path

Here’s the full implementation of URI path versioning inside Spring @RestController.

@RestController
@RequestMapping("/person")
public class PersonController {

	@Autowired
	PersonMapper mapper;
	@Autowired
	PersonRepository repository;

	@PostMapping({"/v1.0", "/v1.1"})
	public PersonOld add(@RequestBody PersonOld person) {
		return (PersonOld) repository.add(person);
	}

	@PostMapping("/v1.2")
	public PersonCurrent add(@RequestBody PersonCurrent person) {
		return mapper.map((PersonOld) repository.add(person));
	}

	@PutMapping("/v1.0")
	@Deprecated
	public PersonOld update(@RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}

	@PutMapping("/v1.1/{id}")
	public PersonOld update(@PathVariable("id") Long id, @RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}

	@PutMapping("/v1.2/{id}")
	public PersonCurrent update(@PathVariable("id") Long id, @RequestBody PersonCurrent person) {
		return mapper.map((PersonOld) repository.update(person));
	}

	@GetMapping({"/v1.0/{id}", "/v1.1/{id}"})
	public PersonOld findByIdOld(@PathVariable("id") Long id) {
		return (PersonOld) repository.findById(id);
	}

	@GetMapping("/v1.2/{id}")
	public PersonCurrent findById(@PathVariable("id") Long id) {
		return mapper.map((PersonOld) repository.findById(id));
	}

	@DeleteMapping({"/v1.0/{id}", "/v1.1/{id}", "/v1.2/{id}"})
	public void delete(@PathVariable("id") Long id) {
		repository.delete(id);
	}

}

If you would like to have three different versions available in a single generated API specification, you should declare three Docket @Beans – one per version. In this case the Swagger group concept, which has already been introduced by SpringFox, is helpful for us. The reason this concept has been introduced is the necessity to support applications which require more than one Swagger resource listing. Usually you need more than one resource listing in order to provide different versions of the same API. We can assign a group to every Docket just by invoking the groupName DSL method on it. Because different versions of the API methods are implemented within the same controller, we have to distinguish them by declaring a path regex matching the selected version. All other settings are standard.

@Bean
public Docket swaggerPersonApi10() {
	return new Docket(DocumentationType.SWAGGER_2)
		.groupName("person-api-1.0")
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
			.paths(regex("/person/v1.0.*"))
		.build()
		.apiInfo(new ApiInfoBuilder().version("1.0").title("Person API").description("Documentation Person API v1.0").build());
}

@Bean
public Docket swaggerPersonApi11() {
	return new Docket(DocumentationType.SWAGGER_2)
		.groupName("person-api-1.1")
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
			.paths(regex("/person/v1.1.*"))
		.build()
		.apiInfo(new ApiInfoBuilder().version("1.1").title("Person API").description("Documentation Person API v1.1").build());
}

@Bean
public Docket swaggerPersonApi12() {
	return new Docket(DocumentationType.SWAGGER_2)
		.groupName("person-api-1.2")
		.select()
			.apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
			.paths(regex("/person/v1.2.*"))
		.build()
		.apiInfo(new ApiInfoBuilder().version("1.2").title("Person API").description("Documentation Person API v1.2").build());
}

Now, we may display the Swagger UI for our API just by calling the path /swagger-ui.html in the web browser. You can switch between all available versions of the API as you can see on the picture below.

api-1
Switching between available versions of API

The specification is generated per version of the API. Here's the documentation for version 1.0. Because the method PUT /person is annotated with @Deprecated, it is crossed out on the generated HTML documentation page.

api-2
Person API 1.0 specification

If you switch to the group person-api-1.1 you will see all the methods that contain v1.1 in the path. Among them you may recognize the current version of the PUT method with the {id} field in the path.

api-3
Person API 1.1 specification

When using the documentation generated by Swagger you may easily call every method after expanding it. Here's a sample of calling the method PUT /person/{id} implemented for version 1.2.

api-5
Updating Person entity by calling method PUT from 1.2 version

Versioning using ‘Accept’ header

To access the implementation of versioning with the 'Accept' header you should switch to the branch header (https://github.com/piomin/sample-api-versioning/tree/header). Here's the full implementation of content negotiation using 'Accept' header versioning inside a Spring @RestController.

@RestController
@RequestMapping("/person")
public class PersonController {

	@Autowired
	PersonMapper mapper;
	@Autowired
	PersonRepository repository;

	@PostMapping(produces = {"application/vnd.piomin.app-v1.0+json", "application/vnd.piomin.app-v1.1+json"})
	public PersonOld add(@RequestBody PersonOld person) {
		return (PersonOld) repository.add(person);
	}

	@PostMapping(produces = "application/vnd.piomin.app-v1.2+json")
	public PersonCurrent add(@RequestBody PersonCurrent person) {
		return mapper.map((PersonOld) repository.add(person));
	}

	@PutMapping(produces = "application/vnd.piomin.app-v1.0+json")
	@Deprecated
	public PersonOld update(@RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}

	@PutMapping(value = "/{id}", produces = "application/vnd.piomin.app-v1.1+json")
	public PersonOld update(@PathVariable("id") Long id, @RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}

	@PutMapping(value = "/{id}", produces = "application/vnd.piomin.app-v1.2+json")
	public PersonCurrent update(@PathVariable("id") Long id, @RequestBody PersonCurrent person) {
		return mapper.map((PersonOld) repository.update(person));
	}

	@GetMapping(name = "findByIdOld", value = "/{idOld}", produces = {"application/vnd.piomin.app-v1.0+json", "application/vnd.piomin.app-v1.1+json"})
	@Deprecated
	public PersonOld findByIdOld(@PathVariable("idOld") Long id) {
		return (PersonOld) repository.findById(id);
	}

	@GetMapping(name = "findById", value = "/{id}", produces = "application/vnd.piomin.app-v1.2+json")
	public PersonCurrent findById(@PathVariable("id") Long id) {
		return mapper.map((PersonOld) repository.findById(id));
	}

	@DeleteMapping(value = "/{id}", produces = {"application/vnd.piomin.app-v1.0+json", "application/vnd.piomin.app-v1.1+json", "application/vnd.piomin.app-v1.2+json"})
	public void delete(@PathVariable("id") Long id) {
		repository.delete(id);
	}

}

We still have to define three Docket @Beans, but the filtering criteria are slightly different. Simple filtering by path is not an option here. We have to create a Predicate for the RequestHandler object and pass it to the apis DSL method. The predicate implementation should filter every method in order to find only those which have a produces field with the required version number. Here's the sample Docket implementation for version 1.2.

@Bean
public Docket swaggerPersonApi12() {
	return new Docket(DocumentationType.SWAGGER_2)
		.groupName("person-api-1.2")
		.select()
			.apis(p -> {
				if (p.produces() != null) {
					for (MediaType mt : p.produces()) {
						if (mt.toString().equals("application/vnd.piomin.app-v1.2+json")) {
							return true;
						}
					}
				}
				return false;
			})
		.build()
		.produces(Collections.singleton("application/vnd.piomin.app-v1.2+json"))
		.apiInfo(new ApiInfoBuilder().version("1.2").title("Person API").description("Documentation Person API v1.2").build());
}

As you can see on the picture below, the generated methods do not have the version number in the path.

api-6
Person API 1.2 specification for a content negotiation approach

When calling method for the selected version of API the only difference is in the response’s required content type.

api-7
Updating person and setting response content type

Summary

Versioning is one of the most important concepts around HTTP API design. No matter which approach to versioning you choose, you should do everything to describe your API well. This seems to be especially important in the era of microservices, where your interface may be called by many other independent applications. In this case, creating documentation in isolation from the source code could be troublesome. Swagger solves all of the described problems. It may be easily integrated with your application and supports versioning. Thanks to the SpringFox project it can also be easily customized in your Spring Boot application to meet more advanced demands.