Spring Cloud Microservices at Pivotal Platform

Imagine you have multiple microservices running on different machines as multiple instances. It seems natural to think about tools that help you monitor and manage all of them. If we add that our microservices are built on the Spring Cloud framework, it seems obvious to look at the Pivotal platform. Here is a figure with the platform's architecture, taken from the main Pivotal site.

[Figure: Pivotal Platform microservices architecture]

Although the Pivotal Platform can run applications written in many languages, it has the best support for Spring Cloud Services and Netflix OSS tools, as you can see in the figure above. Pivotal offers three ways to take advantage of the platform.

Pivotal Cloud Foundry – a solution that can be run on a public IaaS like AWS, Google Cloud Platform or Microsoft Azure, or on a private cloud like VMware vSphere or OpenStack.

Pivotal Web Services – a hosted, cloud-native platform available at the pivotal.io site.

PCF Dev – an instance which can be run locally as a single virtual machine. It offers the opportunity to develop apps in an offline environment with basic services installed, like Spring Cloud Services (SCS), MySQL and Redis databases, and a RabbitMQ broker. If you want to run it locally with SCS you need more than 6GB of free RAM.

Spring Cloud Services includes Circuit Breaker (Hystrix), Service Registry (Eureka) and the standard Spring Config Server based on a Git configuration repository.

[Figure: Spring Cloud Services]

That's all I wanted to say about the theory. Let's move on to practice. On the Pivotal website there are detailed materials on how to set everything up and how to create and deploy a simple microservice based on Spring Cloud solutions. In this article I will try to present the essence of those descriptions, based on one of my standard examples from previous posts. As always, the sample source code is available on GitHub. If you are interested in a detailed description of the sample application, microservices and Spring Cloud, read my previous articles:

Part 1: Creating microservice using Spring Cloud, Eureka and Zuul

Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud

If you have a lot of free RAM you can install PCF Dev on your local workstation. You need to have VirtualBox installed. Then download and install the Cloud Foundry Command Line Interface (CF CLI) and PCF Dev. It is all described here. Finally, you can run the command below and take a short coffee break – the virtual machine needs to be downloaded and started.

cf dev start -s scs

For those who do not have enough RAM (like me) there is the Pivotal Web Services platform. It is available here. Before using it you have to register on the Pivotal site. The rest of the article is identical for both options.
In comparison to the previous examples of Spring Cloud based microservices, we need to make some changes. There is one additional dependency inside every microservice's pom.xml.

<properties>
	...
	<spring-cloud-services.version>1.4.1.RELEASE</spring-cloud-services.version>
	<spring-cloud.version>Dalston.RELEASE</spring-cloud.version>
</properties>

<dependencies>
	<dependency>
		<groupId>io.pivotal.spring.cloud</groupId>
		<artifactId>spring-cloud-services-starter-service-registry</artifactId>
	</dependency>
	...
</dependencies>

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>${spring-cloud.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
		<dependency>
			<groupId>io.pivotal.spring.cloud</groupId>
			<artifactId>spring-cloud-services-dependencies</artifactId>
			<version>${spring-cloud-services.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We also use the Cloud Foundry Maven plugin (cf-maven-plugin) for application deployment on the Pivotal platform. Here is a sample for account-service. We run two instances of that microservice with a 512MB memory limit. Our application name is piomin-account-service.

<plugin>
	<groupId>org.cloudfoundry</groupId>
	<artifactId>cf-maven-plugin</artifactId>
	<version>1.1.3</version>
	<configuration>
		<target>http://api.run.pivotal.io</target>
		<org>piotrminkowski</org>
		<space>development</space>
		<appname>piomin-account-service</appname>
		<memory>512</memory>
		<instances>2</instances>
		<server>cloud-foundry-credentials</server>
	</configuration>
</plugin>

Don't forget to add the credentials configuration to your Maven settings.xml file.

<server>
	<id>cloud-foundry-credentials</id>
	<username>piotr.minkowski@gmail.com</username>
	<password>***</password>
</server>

Now, when building the sample application, we append the cf:push command.

mvn clean install cf:push

Here is the circuit breaker implementation inside customer-service.

@Service
public class AccountService {

	@Autowired
	private AccountClient client;

	@HystrixCommand(fallbackMethod = "getEmptyList")
	public List<Account> getAccounts(Integer customerId) {
		return client.getAccounts(customerId);
	}

	List<Account> getEmptyList(Integer customerId) {
		return new ArrayList<>();
	}

}

There is a randomly generated delay on the account service side, so the circuit breaker should be activated for about 25% of the calls.

@RequestMapping("/accounts/customer/{customer}")
public List<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
	logger.info(String.format("Account.findByCustomer(%s)", customerId));
	Random r = new Random();
	int rr = r.nextInt(4);
	if (rr == 1) {
		try {
			Thread.sleep(2000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
	return accounts.stream().filter(it -> it.getCustomerId().intValue() == customerId.intValue())
		.collect(Collectors.toList());
}

After successfully deploying the applications using the Maven cf:push command we can go to the Pivotal Web Services console, available at https://console.run.pivotal.io/. Here are our deployed services: two instances of piomin-account-service and one instance of piomin-customer-service.

[Figure: Applications deployed on Pivotal Web Services]

I have also activated the Circuit Breaker and Service Registry services from the Marketplace.

[Figure: Circuit Breaker and Service Registry in the Marketplace]

Every application needs to be bound to a service. To do this, select the service, expand the Bound Apps section and select the checkbox next to each application name.

[Figure: Binding applications to a service]

After this step the applications need to be restarted. This can also be done using the web dashboard inside each service.

[Figure: Restarting applications from the service dashboard]
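
If you prefer the command line, the binding and restart can also be done with the CF CLI. This is only a sketch – the service instance name below is an assumption; use whatever name you chose when creating the service in the Marketplace.

cf bind-service piomin-account-service service-registry
cf bind-service piomin-customer-service service-registry
cf restart piomin-account-service
cf restart piomin-customer-service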

Finally, all services are registered in Eureka and we can perform some tests using the customer endpoint https://piomin-customer-service.cfapps.io/customers/{id}.

[Figure: Services registered in Eureka]

Final words

With the Pivotal solution we can easily deploy, scale and monitor our microservices. Deployment and scaling can be done using the Maven plugin or via the web dashboard. Pivotal also provides some services prepared especially for microservices needs, like a service registry, circuit breaker and configuration server. Pivotal competes with solutions like Kubernetes, which is based on Docker containerization (more about those tools here). It is especially useful if you are creating microservices based on the Spring Boot and Spring Cloud frameworks.

Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud

You have probably read some articles about Hystrix and know what it is used for. Today I would like to show you an example of exactly how to use it, combined with other tools from the Netflix OSS stack like Feign and Ribbon. In this article I assume that you have basic knowledge of topics such as microservices, load balancing and service discovery. If not, I suggest you read some articles about them, for example my short introduction to microservices architecture available here: Part 1: Creating microservice using Spring Cloud, Eureka and Zuul. The code sample used in that article is also used now. The sample source code is available on GitHub. For the sample described here see the hystrix branch; for the basic sample see the master branch.

Let's look at some scenarios for using fallback and circuit breaker. We have a Customer Service which calls an API method from the Account Service. There are two running instances of Account Service. The requests to the Account Service instances are load balanced by a Ribbon client 50/50.

[Figure: Customer service calling two load-balanced instances of account service]

Scenario 1

Hystrix is disabled for the Feign client (1), the auto retries mechanism is disabled for the Ribbon client on the local instance (2) and on other instances (3). The Ribbon read timeout is shorter than the maximum request processing time (4). This scenario also occurs with the default Spring Cloud configuration without Hystrix. When you call the customer test method you sometimes receive the full response and sometimes a 500 HTTP error code (50/50).

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(2)
  MaxAutoRetriesNextServer: 0 #(3)
  ReadTimeout: 1000 #(4)

feign:
  hystrix:
    enabled: false #(1)

Scenario 2

Hystrix is still disabled for the Feign client (1), the auto retries mechanism is disabled for the Ribbon client on the local instance (2) but enabled once for other instances (3). You always receive the full response. If your request is received by the instance with the delayed response, it times out after 1 second and then Ribbon calls another instance – in this case the one without the delay. You could also set MaxAutoRetries to a positive value, but that gains us nothing in this sample.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(2)
  MaxAutoRetriesNextServer: 1 #(3)
  ReadTimeout: 1000 #(4)

feign:
  hystrix:
    enabled: false #(1)

Scenario 3

Here is a not very elegant solution to the problem: we set ReadTimeout to a value bigger than the delay inside the API method (5000 ms).

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0
  MaxAutoRetriesNextServer: 0
  ReadTimeout: 10000

feign:
  hystrix:
    enabled: false

Generally the configurations from Scenarios 2 and 3 are correct: you always get the full response. But in some cases you will wait more than 1 second (Scenario 2) or more than 5 seconds (Scenario 3), and the delayed instance still receives 50% of the requests from the Ribbon client. Fortunately, there is Hystrix – a circuit breaker.

Scenario 4

Let's enable Hystrix just by removing the feign property. There are no auto retries for the Ribbon client (1) and its read timeout (2) is bigger than Hystrix's timeout (3). 1000ms is also the default value of Hystrix's timeoutInMilliseconds property. The Hystrix circuit breaker and fallback will work for the delayed instance of the account service. For the first few requests you receive the fallback response from Hystrix. Then the delayed instance is cut off from requests, and most of them are directed to the non-delayed instance.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 0 #(1)
  MaxAutoRetriesNextServer: 0
  ReadTimeout: 2000 #(2)

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 1000 #(3)

Scenario 5

This scenario is a more advanced development of Scenario 4. Now the Ribbon timeout (2) is lower than the Hystrix timeout (3), and the auto retries mechanism is enabled for the local instance (1) and for other instances (4). The result is the same as for Scenarios 2 and 3 – you receive the full response, but Hystrix is enabled and it cuts off the delayed instance from future requests.

ribbon:
  eureka:
    enabled: true
  MaxAutoRetries: 3 #(1)
  MaxAutoRetriesNextServer: 1 #(4)
  ReadTimeout: 1000 #(2)

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000 #(3)

I could imagine a few other scenarios, but the idea was just to show the differences in circuit breaker and fallback behavior when modifying configuration properties for Feign, Ribbon and Hystrix in application.yml.

Hystrix

Let's take a closer look at the standard Hystrix circuit breaker usage described in Scenario 4. To enable Hystrix in your Spring Boot application you have to add the following dependencies to pom.xml. The second step is to add the @EnableCircuitBreaker annotation to the main application class, and also @EnableHystrixDashboard if you would like to have the UI dashboard available, as shown in the sketch after the dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-hystrix-dashboard</artifactId>
</dependency>
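
Here's a minimal sketch of the main class with both annotations (the class name is assumed; in the sample project it may differ):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;

@SpringBootApplication
@EnableCircuitBreaker // enables Hystrix commands and fallbacks
@EnableHystrixDashboard // exposes the UI dashboard under /hystrix
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}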

The Hystrix fallback is set on the Feign client inside the customer service.

@FeignClient(value = "account-service", fallback = AccountFallback.class)
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") Integer customerId);

}

The fallback implementation is really simple. In this case I just return an empty list instead of the customer's account list received from the account service.

@Component
public class AccountFallback implements AccountClient {

	@Override
	public List<Account> getAccounts(Integer customerId) {
		List<Account> acc = new ArrayList<Account>();
		return acc;
	}

}

Now, we can perform some tests. Let's start the discovery service, two instances of the account service on different ports (using the -DPORT VM argument during startup) and the customer service. The endpoint for tests is /customers/{id}. There is also a JUnit test class which sends multiple requests to this endpoint, available in the customer-service module: pl.piomin.microservices.customer.ApiTest. A rough sketch of such a test follows the endpoint code below.

	@RequestMapping("/customers/{id}")
	public Customer findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = customers.stream().filter(it -> it.getId().intValue()==id.intValue()).findFirst().get();
		List<Account> accounts =  accountClient.getAccounts(id);
		customer.setAccounts(accounts);
		return customer;
	}
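
The exact content of ApiTest is not shown here, but such a test might look roughly like this (a sketch assuming a plain RestTemplate and the customer-service port 3333):

import org.junit.Assert;
import org.junit.Test;
import org.springframework.web.client.RestTemplate;

public class ApiTest {

	private final RestTemplate template = new RestTemplate();

	@Test
	public void testCustomerWithAccounts() {
		for (int i = 0; i < 100; i++) {
			// with the Hystrix fallback in place the call should never fail;
			// when the delayed instance is hit, the accounts list is simply empty
			Customer customer = template.getForObject("http://localhost:3333/customers/{id}", Customer.class, 1);
			Assert.assertNotNull(customer);
		}
	}

}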

I enabled the Hystrix Dashboard on the account-service main class. If you would like to access it, open http://localhost:2222/hystrix in your web browser and then type the Hystrix stream address from customer-service: http://localhost:3333/hystrix.stream. When I ran a test sending 1000 requests to the customer service, about 20 of them (2%) were forwarded to the delayed instance of the account service, the rest to the non-delayed instance. The Hystrix dashboard during that test is visible below. For more advanced Hystrix configuration refer to its documentation, available here.

[Figure: Hystrix dashboard during the test]

Testing Java Microservices

While developing a new application we should never forget about testing. This seems particularly important when working with microservices. Testing microservices requires a different approach than designing tests for monolithic applications. With monolithic testing, the main focus is put on unit testing and, in most cases, integration tests with the database layer. In the case of microservices, the most important tests concern the interactions between those microservices. Although every microservice is independently developed and released, a change in one of them can affect all the services interacting with it. Interaction between them is realized with messages, usually sent via REST or AMQP protocols.

We can distinguish five different layers of microservices tests. The first three of them are the same as for monolithic applications.

Unit tests – we are testing the smallest pieces of code, for example a single method or component, and mocking every call to other methods or components. There are many popular frameworks supporting unit testing in Java, like JUnit and TestNG, plus Mockito for mocking (a short sketch follows this list). The main task of this type of testing is to confirm that the implementation meets the requirements.

Integration tests – we are testing interaction and communication between components based on their interfaces, with external services mocked out.

End-to-end tests – also known as functional tests. The main goal of these tests is to verify that the system meets the external requirements. It means that we should design test scenarios which exercise all the microservices taking part in the process.

Contract tests – tests at the boundary of an external service verifying that it meets the contract expected by a consuming service.

Component tests – limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.
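
Here's a minimal sketch of the first layer with JUnit and Mockito – the class and method names are assumed for illustration (a constructor-injected variant of the controller); the repository collaborator is mocked, so only the controller logic is exercised.

import org.junit.Assert;
import org.junit.Test;
import org.mockito.Mockito;

public class AccountControllerTest {

	private final AccountRepository repository = Mockito.mock(AccountRepository.class);
	private final AccountController controller = new AccountController(repository);

	@Test
	public void shouldFindAccountByNumber() {
		// stub the mocked DAO, so no database is involved
		Mockito.when(repository.findByNumber("1234567890"))
				.thenReturn(new Account("1", "1234567890", "1"));
		Account account = controller.findByNumber("1234567890");
		Assert.assertEquals("1234567890", account.getNumber());
	}

}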

In the figure below we can see the component diagram of one sample microservice (the customer service). The architecture is similar for all the other sample microservices described in this post. The customer service interacts with a Mongo database, where all customers are stored. Mapping between objects and the database is realized with Spring Data's @Document. We also use a @Repository component as a DAO for the Customer entity. Communication with other microservices is realized with Feign REST clients. The customer service collects all the customer's accounts and products from external microservices. The @Repository and Feign clients are injected into the @Controller, which is exposed outside via a REST resource. A sketch of the entity mapping follows the diagram below.

[Figure: Component diagram of the customer service]
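
For illustration, the Customer entity mapping might look roughly like this – a sketch with field names inferred from the API methods shown later:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "customer")
public class Customer {

	@Id
	private String id; // document identifier generated by MongoDB
	private String pesel; // national identification number, used by findByPesel
	private String name;

	// getters and setters omitted in this sketch

}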

In this article I'll show you contract and component tests for the sample microservices architecture. In the figure below you can see the test strategy for the architecture shown in the previous picture. For our tests we use an embedded in-memory Mongo database and RESTful stubs generated with the Spring Cloud Contract framework.

[Figure: Test strategy for the sample architecture]

Now, let's take a look at the big picture. We have four microservices interacting with each other, as we see in the figure below. Spring Cloud Contract uses WireMock in the background for recording and matching requests and responses. For testing purposes, Eureka discovery needs to be disabled on all microservices.

[Figure: Microservices interactions recorded during contract tests]

The sample application source code is available on GitHub. All microservices are based on the Spring Boot and Spring Cloud (Eureka, Zuul, Feign, Ribbon) frameworks. Interaction with the Mongo database is realized with the Spring Data MongoDB library (the spring-boot-starter-data-mongodb dependency in pom.xml). The DAO is really simple: it extends the MongoRepository CRUD component. The @Repository and Feign clients are injected into CustomerController.

public interface CustomerRepository extends MongoRepository<Customer, String> {

	public Customer findByPesel(String pesel);
	public Customer findById(String id);

}

Here's the full controller code.

@RestController
public class CustomerController {

	@Autowired
	private AccountClient accountClient;
	@Autowired
	private ProductClient productClient;

	@Autowired
	CustomerRepository repository;

	protected Logger logger = Logger.getLogger(CustomerController.class.getName());

	@RequestMapping(value = "/customers/pesel/{pesel}", method = RequestMethod.GET)
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return repository.findByPesel(pesel);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.GET)
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/customers/{id}", method = RequestMethod.GET)
	public Customer findById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = repository.findById(id);
		List<Account> accounts =  accountClient.getAccounts(id);
		logger.info(String.format("Customer.findById(): %s", accounts));
		customer.setAccounts(accounts);
		return customer;
	}

	@RequestMapping(value = "/customers/withProducts/{id}", method = RequestMethod.GET)
	public Customer findWithProductsById(@PathVariable("id") String id) {
		logger.info(String.format("Customer.findWithProductsById(%s)", id));
		Customer customer = repository.findById(id);
		List<Product> products =  productClient.getProducts(id);
		logger.info(String.format("Customer.findWithProductsById(): %s", products));
		customer.setProducts(products);
		return customer;
	}

	@RequestMapping(value = "/customers", method = RequestMethod.POST)
	public Customer add(@RequestBody Customer customer) {
		logger.info(String.format("Customer.add(%s)", customer));
		return repository.save(customer);
	}

	@RequestMapping(value = "/customers", method = RequestMethod.PUT)
	public Customer update(@RequestBody Customer customer) {
		logger.info(String.format("Customer.update(%s)", customer));
		return repository.save(customer);
	}

}

To replace the external Mongo database with an embedded in-memory instance during automated tests, we only have to add the following dependency to pom.xml.

<dependency>
	<groupId>de.flapdoodle.embed</groupId>
	<artifactId>de.flapdoodle.embed.mongo</artifactId>
	<scope>test</scope>
</dependency>

If we use different addresses or connection credentials, the application settings should also be overridden in src/test/resources. Here's the application.yml file for testing. At the bottom there is the configuration for disabling Eureka discovery.

server:
  port: ${PORT:3333}

spring:
  application:
    name: customer-service
  data:
    mongodb:
      host: localhost
      port: 27017

logging:
  level:
    org.springframework.cloud.contract: TRACE

eureka:
  client:
    enabled: false

The in-memory MongoDB instance is started automatically during a Spring Boot JUnit test. The next step is to add the Spring Cloud Contract dependencies.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-contract-verifier</artifactId>
	<scope>test</scope>
</dependency>

To enable automated test generation by Spring Cloud Contract we also have to add the following plugin to pom.xml.

<plugin>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-contract-maven-plugin</artifactId>
	<version>1.1.0.RELEASE</version>
	<extensions>true</extensions>
	<configuration>
		<packageWithBaseClasses>pl.piomin.microservices.advanced.customer.api</packageWithBaseClasses>
	</configuration>
</plugin>

The packageWithBaseClasses property defines the package where the base classes extended by the generated test classes are stored. Here's the base test class for the account service tests. In our sample architecture the account service is only a producer; it does not consume any other services.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

As opposed to the account service, the customer service consumes some services for collecting the customer's accounts and products. That's why the base test class for the customer service needs to define the stub artifacts data.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
@AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)
public class ApiScenario1Base {

	@Autowired
	private WebApplicationContext context;

	@Before
	public void setup() {
		RestAssuredMockMvc.webAppContextSetup(context);
	}

}

Test classes are generated on the basis of contracts defined in src/main/resources/contracts. Such contracts can be implemented using the Groovy language. Here's a sample contract for adding a new account.

org.springframework.cloud.contract.spec.Contract.make {
  request {
    method 'POST'
    url '/accounts'
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
  response {
    status 200
    body([
      id: "1234567890",
      number: "12345678909",
      balance: 1234,
      customerId: "123456789"
    ])
    headers {
      contentType('application/json')
    }
  }
}

Test classes are generated under the target/generated-test-sources directory. Here's the generated class for the contract above.

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class Scenario1Test extends ApiScenario1Base {

	@Test
	public void validate_1_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567890\",\"number\":\"12345678909\",\"balance\":1234,\"customerId\":\"123456789\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567890");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678909");
			assertThatJson(parsedJson).field("balance").isEqualTo(1234);
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456789");
	}

	@Test
	public void validate_2_postAccount() throws Exception {
		// given:
			MockMvcRequestSpecification request = given()
					.header("Content-Type", "application/json")
					.body("{\"id\":\"1234567891\",\"number\":\"12345678910\",\"balance\":4675,\"customerId\":\"123456780\"}");

		// when:
			ResponseOptions response = given().spec(request)
					.post("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).field("id").isEqualTo("1234567891");
			assertThatJson(parsedJson).field("customerId").isEqualTo("123456780");
			assertThatJson(parsedJson).field("number").isEqualTo("12345678910");
			assertThatJson(parsedJson).field("balance").isEqualTo(4675);
	}

	@Test
	public void validate_3_getAccounts() throws Exception {
		// given:
			MockMvcRequestSpecification request = given();

		// when:
			ResponseOptions response = given().spec(request)
					.get("/accounts");

		// then:
			assertThat(response.statusCode()).isEqualTo(200);
			assertThat(response.header("Content-Type")).matches("application/json.*");
		// and:
			DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
			assertThatJson(parsedJson).array().contains("balance").isEqualTo(1234);
			assertThatJson(parsedJson).array().contains("customerId").isEqualTo("123456789");
			assertThatJson(parsedJson).array().contains("id").matches("[0-9]{10}");
			assertThatJson(parsedJson).array().contains("number").isEqualTo("12345678909");
	}

}

In the generated class there are three JUnit tests, because I used the scenario mechanism available in Spring Cloud Contract. There are three Groovy files inside the scenario1 directory, as we can see in the picture below. The number prefix of each file name defines the test order. The second scenario has only one definition file and is used in the customer service (the findById API method). The third scenario has four definition files and is used in the transfer service (the execute API method).

[Figure: Contract definition files grouped into scenario directories]

As I mentioned before, interaction between microservices is realized with @FeignClient. WireMock, used by Spring Cloud Contract, records the requests/responses defined in scenario2 inside the account service. The recorded interactions are then used by the @FeignClient during tests instead of calling the real service, which is not available.

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

All the tests are generated and run during the Maven build, for example with the mvn clean install command. If you are interested in more details and features of Spring Cloud Contract you can read about them here.

Finally, we can define a Continuous Integration pipeline for our microservices. Each of them should be built independently. More about the Continuous Integration / Continuous Delivery environment can be found in one of my previous posts: How to setup Continuous Delivery environment. Here's a sample pipeline created with the Jenkins Pipeline Plugin for the account service. In the Checkout stage we check out the newest version of the source code from the repository. In the Build stage we start by reading the project version set inside pom.xml, and then we build the application using the mvn clean install command. Finally, we record the unit test results using the junit pipeline method. The same pipelines can be configured for all the other microservices. In the described sample all microservices are placed in the same Git repository with one Maven version, for simplicity. But we can imagine that every microservice could be in a different repository with an independent version in pom.xml. Tests will always be run with the newest version of the stubs, which is set with + in this fragment of the base test class: @AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)

node {

    withMaven(maven: 'Maven') {

        stage ('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices-advanced.git', credentialsId: 'github-piomin', branch: 'testing'
        }

        stage ('Build') {
            def pom = readMavenPom file: 'pom.xml'
            def version = pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
            env.pom_version = version
            print 'Build version: ' + version
            currentBuild.description = "v${version}"

            dir('account-service') {
                bat "mvn clean install -Dmaven.test.failure.ignore=true"
            }

            junit '**/target/surefire-reports/TEST-*.xml'
        }

    }

}

Here's the pipeline visualization on the Jenkins management dashboard.

[Figure: Pipeline visualization on the Jenkins dashboard]


Microservices API Documentation with Swagger2

Swagger is the most popular tool for designing, building and documenting RESTful APIs. It has nice integration with Spring Boot. To use it in conjunction with Spring we need to add the following two dependencies to the Maven pom.xml.

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.6.1</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.6.1</version>
</dependency>

Swagger configuration for a single Spring Boot service is pretty simple. The level of complexity is greater if you want to create one documentation site for several separate microservices. Such documentation should be available on the API gateway. In the picture below you can see the architecture of our sample solution.

[Figure: Swagger documentation architecture with an API gateway]

First, we should configure Swagger on every microservice. To enable it we have to declare @EnableSwagger2 on the main class. API documentation will be automatically generated from the source code by the Swagger library during application startup. The process is controlled by a Docket @Bean, which is also declared in the main class. The API version is read from the pom.xml file using MavenXpp3Reader. We also set some other properties like title, author and description using the apiInfo method. By default, Swagger generates documentation for all REST services, including those created by Spring Boot. We would like to limit the documentation to our @RestController located inside the pl.piomin.microservices.advanced.account.api package.

    @Bean
    public Docket api() throws IOException, XmlPullParserException {
        MavenXpp3Reader reader = new MavenXpp3Reader();
        Model model = reader.read(new FileReader("pom.xml"));
        return new Docket(DocumentationType.SWAGGER_2)
            .select()
            .apis(RequestHandlerSelectors.basePackage("pl.piomin.microservices.advanced.account.api"))
            .paths(PathSelectors.any())
            .build()
            .apiInfo(new ApiInfo("Account Service Api Documentation", "Documentation automatically generated",
                model.getParent().getVersion(), null,
                new Contact("Piotr Mińkowski", "piotrminkowski.wordpress.com", "piotr.minkowski@gmail.com"),
                null, null));
    }

Here's our RESTful API controller.

@RestController
public class AccountController {

	@Autowired
	AccountRepository repository;

	protected Logger logger = Logger.getLogger(AccountController.class.getName());

	@RequestMapping(value = "/accounts/{number}", method = RequestMethod.GET)
	public Account findByNumber(@PathVariable("number") String number) {
		logger.info(String.format("Account.findByNumber(%s)", number));
		return repository.findByNumber(number);
	}

	@RequestMapping(value = "/accounts/customer/{customer}", method = RequestMethod.GET)
	public List<Account> findByCustomer(@PathVariable("customer") String customerId) {
		logger.info(String.format("Account.findByCustomer(%s)", customerId));
		return repository.findByCustomerId(customerId);
	}

	@RequestMapping(value = "/accounts", method = RequestMethod.GET)
	public List<Account> findAll() {
		logger.info("Account.findAll()");
		return repository.findAll();
	}

	@RequestMapping(value = "/accounts", method = RequestMethod.POST)
	public Account add(@RequestBody Account account) {
		logger.info(String.format("Account.add(%s)", account));
		return repository.save(account);
	}

	@RequestMapping(value = "/accounts", method = RequestMethod.PUT)
	public Account update(@RequestBody Account account) {
		logger.info(String.format("Account.update(%s)", account));
		return repository.save(account);
	}

}

A similar Swagger configuration exists on every microservice. The API documentation is available under http://localhost:<port>/swagger-ui.html. Now, we would like to enable one documentation site embedded on the gateway for all the microservices. Here's a Spring @Component implementing the SwaggerResourcesProvider interface, which overrides the default provider configuration existing in the Spring context.

@Component
@Primary
@EnableAutoConfiguration
public class DocumentationController implements SwaggerResourcesProvider {

	@Override
	public List<SwaggerResource> get() {
		List<SwaggerResource> resources = new ArrayList<>();
		resources.add(swaggerResource("account-service", "/api/account/v2/api-docs", "2.0"));
		resources.add(swaggerResource("customer-service", "/api/customer/v2/api-docs", "2.0"));
		resources.add(swaggerResource("product-service", "/api/product/v2/api-docs", "2.0"));
		resources.add(swaggerResource("transfer-service", "/api/transfer/v2/api-docs", "2.0"));
		return resources;
	}

	private SwaggerResource swaggerResource(String name, String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(name);
		swaggerResource.setLocation(location);
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}

All the microservices' api-docs are added as Swagger resources. The location address is proxied via the Zuul gateway. Here's the gateway route configuration.

zuul:
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account-service
    customer:
      path: /customer/**
      serviceId: customer-service
    product:
      path: /product/**
      serviceId: product-service
    transfer:
      path: /transfer/**
      serviceId: transfer-service

Now, the API documentation is available under the gateway address http://localhost:8765/swagger-ui.html. You can see how it looks for the account service in the picture below. We can select the source service in the combo box placed inside the title panel.

[Figure: Swagger UI for the account service on the gateway]

The documentation appearance can be easily customized by providing a UiConfiguration @Bean. In the code below I changed the default operation expansion level by setting "list" as the second constructor parameter – docExpansion.

	@Bean
	UiConfiguration uiConfig() {
		return new UiConfiguration("validatorUrl", "list", "alpha", "schema",
				UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, false, true, 60000L);
	}

You can expand every operation to see the details. Every operation can be tested by providing the required parameters and clicking the Try it out! button.

[Figure: Expanded operation details in Swagger UI]

[Figure: Trying out an operation in Swagger UI]

Sample application source code is available on GitHub.

Advanced Microservices with Apache Camel

This post is a continuation of my previous microservices sample with Apache Camel, described in the post Microservices with Apache Camel. In the picture below you can see the architecture of the proposed solution. All the services are available behind the API gateway, which is created using the Camel Rest DSL component. There is also API documentation available under the api-doc context path on the gateway. It is created using the Swagger framework.

[Figure: Apache Camel microservices architecture]

Service discovery and registration was created using Consul. The gateway interacts with the discovery server using the Service Call EIP Camel component. Each microservice registers itself during startup. There are no out-of-the-box mechanisms for service registration in Apache Camel, so I had to provide a custom implementation using the EventNotifierSupport class. The Service Call EIP is also used inside the customer service for discovering and calling the account service to enrich the returned customer object with its accounts. The microservices communicate with Zipkin to store timing statistics for calls to their endpoints.

The sample application source code is available on GitHub. If you are interested in a detailed description of the introduced solution, read my article on DZone. It was also published on the Apache Camel site in the Articles section here.

Advanced Microservices Security with OAuth2

In one of my previous posts I described a basic sample illustrating microservices security with Spring Security and OAuth2. You could read there how to create and use an authorization server, a resource server, basic authentication and a bearer token with Spring Boot. Now, I would like to introduce a more advanced sample with OAuth2 SSO behind a Zuul gateway. The architecture of the newest sample is rather similar to the previous one, as you can see in the picture below. The difference is in the implementation details.

[Figure: OAuth2 architecture behind the Zuul gateway]

Requests to the microservices and the authorization server are proxied by the gateway. The first request is redirected to the login page, where we need to authenticate. User authentication data is stored in a MySQL database. After login, the user's HTTP session data is stored using the Spring Session library. Then you perform the next steps to obtain an OAuth2 authorization token by calling the authorization server endpoints via the gateway. Finally, you can call a concrete microservice, providing the OAuth2 token as a bearer in the Authorization HTTP request header.

If you are interested in the technical details of the presented solution, you can read my article on DZone. The sample application source code is also available on GitHub.

Apache Karaf Microservices

Apache Karaf is a small OSGi based runtime which provides a lightweight container onto which various components and applications can be deployed.

Apache Karaf can be run as a standalone container and provides some enterprise-ready features like a shell console, remote access, hot deployment and dynamic configuration. It can be a perfect solution for microservices. The idea of microservices on Apache Karaf was introduced a few years ago: "What I am promoting is the idea of µServices, the concepts of an OSGi service as a design primitive." – Peter Kriens, March 2010.

Karaf on Docker

First, we need to run a Docker container with Apache Karaf. Surprisingly, there is no official repository with such an image. I found a Karaf image on Docker Hub here. Unfortunately, it does not expose port 8181 – the default Karaf web port. We will use this image to create our own with port 8181 available outside. Here's our Dockerfile.

FROM java:8-jdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64

ENV KARAF_VERSION=4.0.8

RUN wget http://www-us.apache.org/dist/karaf/${KARAF_VERSION}/apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /opt/karaf; \
    tar --strip-components=1 -C /opt/karaf -xzf apache-karaf-${KARAF_VERSION}.tar.gz; \
    rm apache-karaf-${KARAF_VERSION}.tar.gz; \
    mkdir /deploy; \
    sed -i 's/^\(felix\.fileinstall\.dir\s*=\s*\).*$/\1\/deploy/' /opt/karaf/etc/org.apache.felix.fileinstall-deploy.cfg

VOLUME ["/deploy"]
EXPOSE 1099 8101 8181 44444
ENTRYPOINT ["/opt/karaf/bin/karaf"]

Then, by running the Docker commands below, we build our image from the Dockerfile and start a new Karaf container.

docker build -t karaf-api .
docker run -d --name karaf -p 1099:1099 -p 8101:8101 -p 8181:8181 -p 44444:44444 karaf-api

Now, we can log in to the new Docker container (1). Karaf is installed in the /opt/karaf directory. We run the client by calling ./client in the /opt/karaf/bin directory (2). Then we install the Apache Felix web console, which is by default available under port 8181 (3). You can check it out by opening http://192.168.99.100:8181/system/console in your web browser. The default username and password are karaf. In the web console you can check the full list of features installed on our OSGi container. You can also display that list in the Karaf console using the feature:list command (4). After installing the web console you can decide whether you prefer the Karaf command line or the Apache Felix console for further actions. For our sample application we need to add some OSGi repositories and features. First, we add the Apache CXF framework repository (5) and its features for HTTP and RESTful web services (6). Then we add the repository for the Jackson framework (7) and some Jackson and Jetty server features (8).

docker exec -i -t karaf /bin/bash (1)
cd /opt/karaf/bin
./client (2)
karaf@root()> feature:install webconsole (3)
karaf@root()> feature:list (4)
karaf@root()> feature:repo-add cxf 3.1.10 (5)
karaf@root()> feature:install http cxf-jaxrs cxf (6)
karaf@root()> feature:repo-add mvn:org.code-house.jackson/features/2.7.6/xml/features (7)
karaf@root()> feature:install jackson-jaxrs-json-provider jetty (8)

Microservices

Our environment has been configured. Now, we can take a brief look at the sample application. It's really simple. It has only three modules: account-cxf, customer-cxf and sample-api. In the sample-api module we have the base service interfaces and model objects. In account-cxf and customer-cxf there are the service implementations and OSGi service declarations in a Blueprint file. The sample application source code is available on GitHub. Here's the account service implementation class, with its interface below.

public class AccountServiceImpl implements AccountService {

	private List<Account> accounts;

	public AccountServiceImpl() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, "1234567890", 12345, 1));
		accounts.add(new Account(2, "1234567891", 6543, 2));
		accounts.add(new Account(3, "1234567892", 45646, 3));
	}

	public Account findById(Integer id) {
		return accounts.stream().filter(a -> a.getId().equals(id)).findFirst().get();
	}

	public List<Account> findAll() {
		return accounts;
	}

	public Account add(Account account) {
		accounts.add(account);
		account.setId(accounts.size());
		return account;
	}

	@Override
	public List<Account> findAllByCustomerId(Integer customerId) {
		return accounts.stream().filter(a -> a.getCustomerId().equals(customerId)).collect(Collectors.toList());
	}

}

The AccountService interface is in the sample-api module. We use JAX-RS annotations for declaring REST endpoints.

public interface AccountService {

	@GET
	@Path("/{id}")
	@Produces("application/json")
	public Account findById(@PathParam("id") Integer id);

	@GET
	@Path("/")
	@Produces("application/json")
	public List<Account> findAll();

	@GET
	@Path("/customer/{customerId}")
	@Produces("application/json")
	public List<Account> findAllByCustomerId(@PathParam("customerId") Integer customerId);

	@POST
	@Path("/")
	@Consumes("application/json")
	@Produces("application/json")
	public Account add(Account account);

}

Here you can see the OSGi service declarations in the blueprint.xml file. We have declared an AccountServiceImpl bean and set that bean as the service bean for a JAX-RS endpoint. The endpoint uses JacksonJsonProvider as the data format provider. There is also an important OSGi service declaration exposing AccountService, referencing AccountServiceImpl. This service will be available to other microservices deployed on the Karaf container, for example customer-cxf.

    <cxf:bus id="accountRestBus">
    </cxf:bus>

    <bean id="accountServiceImpl" class="pl.piomin.services.cxf.account.service.AccountServiceImpl"/>
    <service ref="accountServiceImpl" interface="pl.piomin.services.cxf.api.AccountService" />

    <jaxrs:server address="/account" id="accountService">
        <jaxrs:serviceBeans>
            <ref component-id="accountServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
        <jaxrs:providers>
        	<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider"/>
        </jaxrs:providers>
    </jaxrs:server>

Now, let's take a look at the customer-cxf microservice. Here's the OSGi blueprint of that service. The JAX-RS server declaration is pretty similar to the one for account-cxf. There is only one addition in comparison with the earlier presented OSGi blueprint – the reference to AccountService. This reference is injected into CustomerServiceImpl.

	<reference id="accountService" interface="pl.piomin.services.cxf.api.AccountService" />

	<bean id="customerServiceImpl" class="pl.piomin.services.cxf.customer.service.CustomerServiceImpl">
		<property name="accountService" ref="accountService" />
	</bean>

	<jaxrs:server address="/customer" id="customerService">
		<jaxrs:serviceBeans>
			<ref component-id="customerServiceImpl" />
		</jaxrs:serviceBeans>
		<jaxrs:features>
			<cxf:logging />
		</jaxrs:features>
		<jaxrs:providers>
			<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
		</jaxrs:providers>
	</jaxrs:server>

CustomerService uses the OSGi reference to AccountService in its findById method to collect all accounts belonging to the customer with the specified id path parameter, and also exposes some other operations.

public class CustomerServiceImpl implements CustomerService {

	private AccountService accountService;

	private List<Customer> customers;

	public CustomerServiceImpl() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "XXX", "1234567890"));
		customers.add(new Customer(2, "YYY", "1234567891"));
		customers.add(new Customer(3, "ZZZ", "1234567892"));
	}

	@Override
	public Customer findById(Integer id) {
		Customer c = customers.stream().filter(a -> a.getId().equals(id)).findFirst().get();
		c.setAccounts(accountService.findAllByCustomerId(id));
		return c;
	}

	@Override
	public List<Customer> findAll() {
		return customers;
	}

	@Override
	public Customer add(Customer customer) {
		customers.add(customer);
		customer.setId(customers.size());
		return customer;
	}

	public AccountService getAccountService() {
		return accountService;
	}

	public void setAccountService(AccountService accountService) {
		this.accountService = accountService;
	}

}

Each service has packaging type bundle inside pom.xml and uses the maven-bundle-plugin during the build process. After running mvn clean install on the root project, all bundles are generated in the target directory. You can install them using the Apache Felix web console or the Karaf command line client in this order: sample-api, account-cxf, customer-cxf.
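
From the Karaf console the installation might look roughly like this – a sketch in which the Maven coordinates (group id and version) are assumptions, as they are not given in the post:

karaf@root()> bundle:install -s mvn:pl.piomin.services.cxf/sample-api/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services.cxf/account-cxf/1.0-SNAPSHOT
karaf@root()> bundle:install -s mvn:pl.piomin.services.cxf/customer-cxf/1.0-SNAPSHOT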

Testing

Finally, you can see the list of available CXF endpoints on Karaf by calling http://192.168.99.100:8181/cxf in your web browser. Call http://192.168.99.100:8181/cxf/customer/1 to test findById in CustomerService. You should see JSON with the customer data and all the accounts collected from the account microservice.

Conclusion

Treat this post as a short introduction to the microservices concept on the Apache Karaf OSGi container. I presented how to use CXF endpoints on the Karaf container as a kind of service gateway, and OSGi services for communication between the deployed microservices. Instead of an OSGi reference we could use a JAX-RS proxy client for connecting to the account service from the customer service. You can find some basic examples of that concept on the web. There are also more advanced solutions for service registration and discovery on Karaf, for example remote service calls with Apache ZooKeeper. I think we will take a closer look at them in subsequent posts.

Microservices with Apache Camel

Apache Camel is, as usual, a step behind the Spring framework, and the case of microservices architecture is no different. However, Camel introduced a new set of components for building microservices some months ago. Its newest version, 2.18, supports load balancing with Netflix Ribbon, circuit breaking with Netflix Hystrix, distributed tracing with Zipkin, and service registration and discovery with Consul. The new key component for microservices support in Camel is the ServiceCall EIP, which allows calling a remote service in a distributed system, where the service is looked up in a service registry. There are four tools which can be used as a service registry for Apache Camel: etcd, Kubernetes, Ribbon and Consul. Release 2.18 also comes with much-improved Spring Boot support.

In this article I'm going to show you how to develop microservices in Camel with its support for Spring Boot, the REST DSL and Consul. The sample application is available on GitHub. Below you see a picture of our application architecture.

[Figure: Camel application architecture]

To enable Spring Boot support in a Camel application we need to add the following dependency to pom.xml. After that we have to annotate our main class with @SpringBootApplication and set the property camel.springboot.main-run-controller=true in the application configuration file (application.properties or application.yml), as in the sketch after the dependency below.

<dependency>
	<groupId>org.apache.camel</groupId>
	<artifactId>camel-spring-boot-starter</artifactId>
	<version>${camel.version}</version>
</dependency>
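
Here's what those two steps might look like (a minimal sketch; the class name is assumed):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class AccountApplication {

	public static void main(String[] args) {
		SpringApplication.run(AccountApplication.class, args);
	}

}

with camel.springboot.main-run-controller=true set in application.properties, so the Camel context keeps the application running after startup.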

Then we just have to create a Spring @Component extending Camel's RouteBuilder. Inside the route builder configuration we declare REST endpoints using the Camel REST DSL. It's really simple and intuitive. In the code visible below I exposed four REST endpoints: three for the GET method and a single one for POST. We are using the netty4-http component as the web container for exposing REST endpoints, with JSON binding. We also have to add two dependencies to pom.xml: camel-netty4-http for the Netty framework and the camel-jackson library for consuming and producing JSON data. All routes forward incoming requests to different methods inside a Spring service @Component.

@Component
public class AccountRoute extends RouteBuilder {

	@Value("${port}")
	private int port;

	@Override
	public void configure() throws Exception {
		restConfiguration()
			.component("netty4-http")
			.bindingMode(RestBindingMode.json)
			.port(port);

		rest("/account")
			.get("/{id}")
				.to("bean:accountService?method=findById(${header.id})")
			.get("/customer/{customerId}")
				.to("bean:accountService?method=findByCustomerId(${header.customerId})")
			.get("/")
				.to("bean:accountService?method=findAll")
			.post("/").consumes("application/json").type(Account.class)
				.to("bean:accountService?method=add(${body})");
	}

}

The next element in our architecture is the service registry component. We decided to use Consul. The simplest way to run it locally is to pull its Docker image and run it using the Docker command below. Consul provides a UI management console and a REST API for registering and searching services and key/value objects. The REST API is available under the v1 path and is well documented here.

docker run -d --name consul -p 8500:8500 -p 8600:8600 consul

Well, we have the account microservice implemented and a running Consul instance, so we would like to register our service there. And here we've got a problem: there are no out-of-the-box mechanisms in Camel for service registration, only a component for looking services up. To be more precise, I didn't find any description of such a mechanism in the Camel documentation… However, it may exist… somewhere. Maybe you know how to find it? Here's an interesting solution for a Camel Consul registry, but I didn't check it out. I decided on a simpler solution implemented by myself. I added two more routes to the AccountRoute class.

from("direct:start").marshal().json(JsonLibrary.Jackson)
	.setHeader(Exchange.HTTP_METHOD, constant("PUT"))
	.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
	.to("http://192.168.99.100:8500/v1/agent/service/register");
from("direct:stop").shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks)
	.toD("http://192.168.99.100:8500/v1/agent/service/deregister/${header.id}");

The direct:start route runs after Camel context startup, and direct:stop before shutdown. Here's the EventNotifierSupport implementation that calls those routes during the startup and shutdown process. You can also try the camel-consul component, but in my opinion it is not well described in the Camel documentation. The list of services registered in Consul is available here: http://192.168.99.100:8500/v1/agent/services. I launch my account service with the -Dport VM argument, and it should be registered in Consul with the account${port} ID.

@Component
public class EventNotifier extends EventNotifierSupport {

	@Value("${port}")
	private int port;

	@Override
	public void notify(EventObject event) throws Exception {
		if (event instanceof CamelContextStartedEvent) {
			CamelContext context = ((CamelContextStartedEvent) event).getContext();
			ProducerTemplate t = context.createProducerTemplate();
			t.sendBody("direct:start", new Register("account" + port, "account", "127.0.0.1", port));
		}
		if (event instanceof CamelContextStoppingEvent) {
			CamelContext context = ((CamelContextStoppingEvent) event).getContext();
			ProducerTemplate t = context.createProducerTemplate();
			t.sendBodyAndHeader("direct:stop", null, "id", "account" + port);
		}
	}

	@Override
	public boolean isEnabled(EventObject event) {
		return (event instanceof CamelContextStartedEvent || event instanceof CamelContextStoppingEvent);
	}

}
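
The Register object sent to direct:start is a simple payload class. It is not shown in the post, so here is a sketch matching Consul's /v1/agent/service/register API, which expects capitalized JSON field names; the class itself is an assumption:

import com.fasterxml.jackson.annotation.JsonProperty;

public class Register {

	// Consul's agent API expects "ID", "Name", "Address" and "Port" keys
	@JsonProperty("ID")
	public final String id;
	@JsonProperty("Name")
	public final String name;
	@JsonProperty("Address")
	public final String address;
	@JsonProperty("Port")
	public final int port;

	public Register(String id, String name, String address, int port) {
		this.id = id;
		this.name = name;
		this.address = address;
		this.port = port;
	}

}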

The last (but not least) element of our architecture is the gateway. We also use Netty here, exposing REST services on port 8000.

restConfiguration()
	.component("netty4-http")
	.bindingMode(RestBindingMode.json)
	.port(8000);

We also have to provide the configuration for the connection with the Consul registry and set it on the CamelContext by calling the setServiceCallConfiguration method.

ConsulConfigurationDefinition config = new ConsulConfigurationDefinition();
config.setComponent("netty4-http");
config.setUrl("http://192.168.99.100:8500");
context.setServiceCallConfiguration(config);

Finally, we define routes which map paths set on the gateway to services registered in Consul using the ServiceCall EIP. Now you can call one of those URLs in your web browser, for example http://localhost:8000/account/1. If you would also like to map a path while using the serviceCall EIP, you need to put '//' instead of the single slash '/' described in the Camel documentation – for example from("rest:get:account").serviceCall("account//all"), not serviceCall("account/all").

from("rest:get:account:/{id}").serviceCall("account");
from("rest:get:account:/customer/{customerId}").serviceCall("account");
from("rest:get:account:/").serviceCall("account");
from("rest:post:account:/").serviceCall("account");

Conclusion

I was positively surprised by Camel. Before I started working on the sample described in this post I didn't expect that Camel had so many features for building microservice solutions, or that working with them would be so simple and fast. Of course, I can also find some disadvantages, like inaccuracies or errors in the documentation, only short descriptions of some new components in the developer guide, and no registration process for discovery servers like Consul. In these areas I see an advantage for the Spring framework. On the other hand, Camel has support for some useful tools, like etcd or Kubernetes, which are not available in Spring. In conclusion, I'm looking forward to further improvements in Camel's components for building microservices.

Reactive microservices with Spring 5

The Spring team has announced support for the reactive programming model in the 5.0 release. The new Spring version will probably be released in March. Fortunately, milestone and snapshot versions with these changes are already available in the public Spring repositories. There is a new Spring Web Reactive project with support for reactive @Controllers and also a new WebClient with client-side reactive support. Today I'm going to take a closer look at the solutions suggested by the Spring team.

Following the Spring WebFlux documentation, the Spring Framework uses Reactor internally for its own reactive support. Reactor is a Reactive Streams implementation that further extends the basic Reactive Streams Publisher contract with the Flux and Mono composable API types, providing declarative operations on data sequences of 0..N and 0..1 elements. On the server side Spring supports annotation-based and functional programming models. The annotation model uses @Controller and the other annotations also supported with Spring MVC. A reactive controller is very similar to a standard REST controller for synchronous services, except that it uses Flux, Mono and Publisher objects. Today I'm going to show you how to develop simple reactive microservices using the annotation model and the MongoDB reactive module. The sample application source code is available on GitHub.
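
As a trivial standalone illustration of those types (not part of the sample application), Flux exposes declarative operators such as filter and map:

import reactor.core.publisher.Flux;

public class FluxDemo {

	public static void main(String[] args) {
		Flux.just(1, 2, 3, 4) // a sequence of 0..N elements
			.filter(n -> n % 2 == 0) // keep even numbers
			.map(n -> n * 10) // transform each element
			.subscribe(System.out::println); // prints 20 and 40
	}

}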

For our example we need to use snapshots of Spring Boot 2.0.0 and Spring Web Reactive 0.1.0. Here’s the main fragment of a single microservice’s pom.xml. In our microservices we use Netty instead of the default Tomcat server.

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.0.0.BUILD-SNAPSHOT</version>
	</parent>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.boot.experimental</groupId>
				<artifactId>spring-boot-dependencies-web-reactive</artifactId>
				<version>0.1.0.BUILD-SNAPSHOT</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot.experimental</groupId>
			<artifactId>spring-boot-starter-web-reactive</artifactId>
			<exclusions>
				<exclusion>
					<groupId>org.springframework.boot</groupId>
					<artifactId>spring-boot-starter-tomcat</artifactId>
				</exclusion>
			</exclusions>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.ipc</groupId>
			<artifactId>reactor-netty</artifactId>
		</dependency>
		<dependency>
			<groupId>io.netty</groupId>
			<artifactId>netty-all</artifactId>
		</dependency>
		<dependency>
			<groupId>pl.piomin.services</groupId>
			<artifactId>common</artifactId>
			<version>${project.version}</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.projectreactor.addons</groupId>
			<artifactId>reactor-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

We have two microservices: account-service and customer-service. Each of them has its own MongoDB database and exposes a simple reactive API for searching and saving data. In addition, customer-service interacts with account-service to get all of a customer’s accounts and return them in a customer-service method. Here’s our account controller code.

@RestController
public class AccountController {

	@Autowired
	private AccountRepository repository;

	@GetMapping(value = "/account/customer/{customer}")
	public Flux<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		return repository.findByCustomerId(customerId)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account")
	public Flux<Account> findAll() {
		return repository.findAll().map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@GetMapping(value = "/account/{id}")
	public Mono<Account> findById(@PathVariable("id") Integer id) {
		return repository.findById(id)
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

	@PostMapping("/person")
	public Mono<Account> create(@RequestBody Publisher<Account> accountStream) {
		return repository
				.save(Mono.from(accountStream)
						.map(a -> new pl.piomin.services.account.model.Account(a.getNumber(), a.getCustomerId(),
								a.getAmount())))
				.map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
	}

}

In all API methods we also perform a mapping from the Account entity (a MongoDB @Document) to the Account DTO available in our common module.
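
Neither the entity nor the DTO class is shown in this fragment. Here’s a minimal, hypothetical sketch of the entity, reconstructed from the constructor calls in the controller above (the field types, especially for amount, are assumptions):

// Sketch of the MongoDB entity pl.piomin.services.account.model.Account;
// @Document and @Id come from Spring Data MongoDB / Spring Data Commons.
@Document
public class Account {

	@Id
	private Integer id;
	private Integer customerId;
	private String number;
	private int amount; // type assumed

	public Account(String number, Integer customerId, int amount) {
		this.number = number;
		this.customerId = customerId;
		this.amount = amount;
	}

	// getters omitted for brevity

}

The DTO in the common module mirrors these fields with the all-arguments constructor new Account(id, customerId, number, amount) used in the map(...) calls. Here’s the account repository class. It uses ReactiveMongoTemplate for interacting with Mongo collections.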

@Repository
public class AccountRepository {

	@Autowired
	private ReactiveMongoTemplate template;

	public Mono<Account> findById(Integer id) {
		return template.findById(id, Account.class);
	}

	public Flux<Account> findAll() {
		return template.findAll(Account.class);
	}

	public Flux<Account> findByCustomerId(Integer customerId) {
		return template.find(query(where("customerId").is(customerId)), Account.class);
	}

	public Mono<Account> save(Mono<Account> account) {
		return template.insert(account);
	}

}

In our Spring Boot main class or a @Configuration class we should declare the Spring beans for MongoDB with the connection settings.

@SpringBootApplication
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

	public @Bean MongoClient mongoClient() {
		return MongoClients.create("mongodb://192.168.99.100");
	}

	public @Bean ReactiveMongoTemplate reactiveMongoTemplate() {
		return new ReactiveMongoTemplate(mongoClient(), "account");
	}

}

I used a MongoDB Docker container while working on this sample.

docker run -d --name mongo -p 27017:27017 mongo

In customer-service we call the endpoint /account/customer/{customer} from account-service. I declared a WebClient @Bean in our main class.

	public @Bean WebClient webClient() {
		return WebClient.builder().clientConnector(new ReactorClientHttpConnector()).baseUrl("http://localhost:2222").build();
	}

Here’s a fragment of the customer controller. The @Autowired WebClient calls account-service after getting the customer from MongoDB.

	@Autowired
	private WebClient webClient;

	@GetMapping(value = "/customer/accounts/{pesel}")
	public Mono<Customer> findByPeselWithAccounts(@PathVariable("pesel") String pesel) {
		return repository.findByPesel(pesel).flatMap(customer -> webClient.get().uri("/account/customer/{customer}", customer.getId()).accept(MediaType.APPLICATION_JSON)
				.exchange().flatMap(response -> response.bodyToFlux(Account.class))).collectList().map(l -> {return new Customer(pesel, l);});
	}

We can test GET calls using a web browser or REST clients. With POST it’s not so simple. Here are two simple test cases for adding a new customer and getting a customer with accounts. The getCustomerAccounts test needs account-service running on port 2222.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerTest {

	private static final Logger logger = Logger.getLogger("CustomerTest");

	private WebClient webClient;

	@LocalServerPort
	private int port;

	@Before
	public void setup() {
		this.webClient = WebClient.create("http://localhost:" + this.port);
	}

	@Test
	public void getCustomerAccounts() {
		Customer customer = this.webClient.get().uri("/customer/accounts/234543647565")
				.accept(MediaType.APPLICATION_JSON).exchange().then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

	@Test
	public void addCustomer() {
		Customer customer = new Customer(null, "Adam", "Kowalski", "123456787654");
		customer = webClient.post().uri("/customer").accept(MediaType.APPLICATION_JSON)
				.exchange(BodyInserters.fromObject(customer)).then(response -> response.bodyToMono(Customer.class))
				.block();
		logger.info("Customer: " + customer);
	}

}

Conclusion

The Spring initiative with support for reactive programming seems promising, but it’s still at an early stage of development. It is not yet possible to use it with popular projects from Spring Cloud like Eureka, Ribbon or Hystrix: when I tried to add these dependencies to pom.xml my service failed to start. I hope that in the near future such functionalities as service discovery and load balancing will be available for reactive microservices just as they are for synchronous REST microservices. Spring also has support for the reactive model in the Spring Cloud Stream project, which is more stable than the WebFlux framework. I’ll try to use it in the future.

Launch microservice in Docker container

Docker, Microservices and Continuous Delivery are increasingly popular topics among modern development teams. Today I’m going to create a simple microservice and show you how to run it in a Docker container using a Maven plugin or a Jenkins pipeline. Let’s start with the application code, which is available at https://github.com/piomin/sample-docker-microservice.git. It has only two endpoints: one for searching all persons and one for finding a single person by id. Here’s the controller code:

package pl.piomin.microservices.person;

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Api {

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Person> persons;

	public Api() {
		persons = new ArrayList<>();
		persons.add(new Person(1, "Jan", "Kowalski", 22));
		persons.add(new Person(2, "Adam", "Malinowski", 33));
		persons.add(new Person(3, "Tomasz", "Janowski", 25));
		persons.add(new Person(4, "Alina", "Iksińska", 54));
	}

	@RequestMapping("/person")
	public List<Person> findAll() {
		logger.info("Api.findAll()");
		return persons;
	}

	@RequestMapping("/person/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Api.findById(%d)", id));
		return persons.stream().filter(p -> (p.getId().intValue() == id)).findAny().get();
	}

}
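
The Person class isn’t shown above; here’s a minimal sketch consistent with the constructor and getter used by the controller (field names and types are inferred, not taken from the repository):

public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	private int age;

	public Person(Integer id, String firstName, String lastName, int age) {
		this.id = id;
		this.firstName = firstName;
		this.lastName = lastName;
		this.age = age;
	}

	public Integer getId() {
		return id;
	}

	// remaining getters and setters omitted for brevity

}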

We need to have Docker installed on our machine and a Docker Registry container running on port 5000. If you are interested in commercial support, there is also Docker Trusted Registry, which provides an image registry and some other features like LDAP/Active Directory integration and security certificates.

docker run -d --name registry -p 5000:5000 registry:latest

We use openjdk as the base image for our new microservice image, defined in the Dockerfile below. The application JAR file will be launched with the java command, and the service will be exposed on port 2222.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD sample-docker-microservice-1.0-SNAPSHOT.jar person-service.jar
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 2222

We use docker-maven-plugin to configure the image building process inside pom.xml. There is no need to use a Dockerfile with that plugin, since it has equivalent configuration tags which could be used instead of Dockerfile entries. Our example, however, is based on the Dockerfile.

<plugin>
	<groupId>com.spotify</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<version>0.4.13</version>
	<configuration>
		<imageName>${docker.image.prefix}/${project.artifactId}</imageName>
		<imageTags>${project.version}</imageTags>
		<dockerDirectory>src/main/docker</dockerDirectory>
		<dockerHost>https://192.168.99.100:2376</dockerHost>
		<dockerCertPath>C:\Users\minkowp\.docker\machine\machines\default</dockerCertPath>
		<resources>
			<resource>
				<targetPath>/</targetPath>
				<directory>${project.build.directory}</directory>
				<include>${project.build.finalName}.jar</include>
			</resource>
		</resources>
	</configuration>
</plugin>

Finally, we can build our code using Maven command.

mvn clean package docker:build

After running the Maven command the image is tagged and pushed to the local registry.

docker tag e106e5bf3d57 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT
docker push localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT

The application image is now registered in the local Docker Registry. Optionally, we could push it to docker.io or to an enterprise Docker Trusted Registry. We can check it using the API available at http://192.168.99.100:5000/v2/_catalog. Here’s the Docker command for running a container with the newly created image stored in the local registry. The service is available at http://192.168.99.100:2222/person/.

docker run -d --name sample1 -p 2222:2222 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT


Part 1: Creating microservice using Spring Cloud, Eureka and Zuul

The Spring framework provides a set of libraries for creating microservices in Java. They are part of the Spring Cloud project. Today I’m going to show you how to create simple microservices using Spring Boot and the following technologies:

  • Zuul – gateway service that provides dynamic routing, monitoring, resiliency, security, and more
  • Ribbon – client side load balancer
  • Feign – declarative REST client
  • Eureka – service registration and discovery
  • Sleuth – distributed tracing via logs
  • Zipkin – distributed tracing system with request visualization.

The sample application is available at https://github.com/piomin/sample-spring-microservices.git. Here’s a picture of the application architecture. The client calls, via the Zuul gateway, an endpoint available inside customer-service, which stores basic customer data. This endpoint interacts with account-service to collect information about customer accounts served by an endpoint in account-service. Each service registers itself with the Eureka discovery service and sends its tracing data to Zipkin using spring-cloud-sleuth.

(figure: application architecture)

This is the account-service controller. We use the findByCustomer method for collecting a customer’s accounts by customer id.

@RestController
public class Api {
	private List<Account> accounts;

	protected Logger logger = Logger.getLogger(Api.class.getName());

	public Api() {
		accounts = new ArrayList<>();
		accounts.add(new Account(1, 1, "111111"));
		accounts.add(new Account(2, 2, "222222"));
		accounts.add(new Account(3, 3, "333333"));
		accounts.add(new Account(4, 4, "444444"));
		accounts.add(new Account(5, 1, "555555"));
		accounts.add(new Account(6, 2, "666666"));
		accounts.add(new Account(7, 2, "777777"));
	}

	@RequestMapping("/accounts/{number}")
	public Account findByNumber(@PathVariable("number") String number) {
		logger.info(String.format("Account.findByNumber(%s)", number));
		return accounts.stream().filter(it -> it.getNumber().equals(number)).findFirst().get();
	}

	@RequestMapping("/accounts/customer/{customer}")
	public List<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
		logger.info(String.format("Account.findByCustomer(%s)", customerId));
		return accounts.stream().filter(it -> it.getCustomerId().intValue()==customerId.intValue()).collect(Collectors.toList());
	}

	@RequestMapping("/accounts")
	public List<Account> findAll() {
		logger.info("Account.findAll()");
		return accounts;
	}
}

This is the customer-service controller. There is a findById method which interacts with account-service using a Feign client.

@RestController
public class Api {

	@Autowired
	private AccountClient accountClient;

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Customer> customers;

	public Api() {
		customers = new ArrayList<>();
		customers.add(new Customer(1, "12345", "Adam Kowalski", CustomerType.INDIVIDUAL));
		customers.add(new Customer(2, "12346", "Anna Malinowska", CustomerType.INDIVIDUAL));
		customers.add(new Customer(3, "12347", "Paweł Michalski", CustomerType.INDIVIDUAL));
		customers.add(new Customer(4, "12348", "Karolina Lewandowska", CustomerType.INDIVIDUAL));
	}

	@RequestMapping("/customers/pesel/{pesel}")
	public Customer findByPesel(@PathVariable("pesel") String pesel) {
		logger.info(String.format("Customer.findByPesel(%s)", pesel));
		return customers.stream().filter(it -> it.getPesel().equals(pesel)).findFirst().get();
	}

	@RequestMapping("/customers")
	public List<Customer> findAll() {
		logger.info("Customer.findAll()");
		return customers;
	}

	@RequestMapping("/customers/{id}")
	public Customer findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Customer.findById(%s)", id));
		Customer customer = customers.stream().filter(it -> it.getId().intValue()==id.intValue()).findFirst().get();
		List<Account> accounts = accountClient.getAccounts(id);
		customer.setAccounts(accounts);
		return customer;
	}
}

@FeignClient("account-service")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
	List<Account> getAccounts(@PathVariable("customerId") Integer customerId);

}

To be able to use the Feign client we only have to enable it in our main class.


package pl.piomin.microservices.customer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}

There is also important configuration inside application.yml in customer-service. The Ribbon load balancer needs to be enabled, and I also suggest setting lease renewal and expiration on the Eureka client to enable unregistration from the discovery service when our service is shutting down.


server:
  port: ${PORT:3333}

eureka:
  client:
    serviceUrl:
      defaultZone: ${vcap.services.discovery-service.credentials.uri:http://127.0.0.1:8761}/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2

ribbon:
  eureka:
    enabled: true

Ok, fine. We’ve got our two microservices implemented and configured. But first we have to create and run a discovery service based on a Eureka server. This functionality is provided by our discovery-service. We only have to import one dependency in our pom.xml, called spring-cloud-starter-eureka-server, and enable it in the application main class using the @EnableEurekaServer annotation.
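
A minimal sketch of such a main class (the package name is an assumption; the annotations follow standard Spring Cloud Netflix usage):

package pl.piomin.microservices.discovery;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}

Here is the configuration of the Eureka server in the application.yml file: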


server:
  port: ${PORT:8761}

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false

After running discovery-service we can see its monitoring console available on port 8761. Now let’s run our two microservices on the default ports set in their application.yml configuration files, and two more instances of them on other ports using the -DPORT VM argument (for example java -DPORT=2223 -jar account-service.jar): account-service on port 2223 and customer-service on port 3334. Now let’s take a look at the Eureka monitoring console: we’ve got two instances of account-service running on ports 2222 and 2223 and two instances of customer-service running on ports 3333 and 3334.

(screenshot: Eureka monitoring console with the registered service instances)

We have two instances of each microservice registered on the discovery server. But we need to hide our system’s complexity from the outside world: there should be only one IP address exposed on one port available to inbound clients. That’s why we need an API gateway – Zuul. Zuul will forward our requests to the specific microservice based on its proxy configuration. Such requests will also be load balanced by the Ribbon client. To enable the Zuul gateway, the dependency spring-cloud-starter-zuul should be added inside pom.xml and the annotation @EnableZuulProxy put on the main class.
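
A minimal sketch of the gateway main class (the package name is an assumption):

package pl.piomin.microservices.gateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}

}

This is the Zuul configuration for our services, set in application.yml.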


server:
  port: 8765

zuul:
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account-service
    customer:
      path: /customer/**
      serviceId: customer-service

...

As we can see, Zuul is configured to be available under its default port 8765, and it forwards requests from the /api/account/ path to account-service and from /api/customer/ to customer-service. When the URL http://localhost:8765/api/customer/customers/1 is called several times, we’ll see the requests load balanced between the two instances of each microservice. Also, when we shut down one of the microservice instances, we can see that it is unregistered from the Eureka server.

In the second part of the article I’ll present how to use Spring Cloud Sleuth, Zipkin and ELK. If you are interested, see Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin.