Circuit breaker and retries on Kubernetes with Istio and Spring Boot

The ability to handle communication failures in inter-service communication is an absolute necessity for every service mesh framework. This includes handling timeouts and HTTP error codes. In this article I’m going to show how to configure retry and circuit breaker mechanisms using Istio. As in the previous article, Service mesh on Kubernetes with Istio and Spring Boot, we will analyze communication between two simple Spring Boot applications deployed on Kubernetes. But instead of a very basic example we are going to discuss some more advanced topics. Continue reading “Circuit breaker and retries on Kubernetes with Istio and Spring Boot”

Service mesh on Kubernetes with Istio and Spring Boot

Istio is currently the leading solution for building a service mesh on Kubernetes. Thanks to Istio you can take control of the communication between microservices. It also lets you secure and observe your services. Spring Boot is still the most popular JVM framework for building microservice applications. In this article I’m going to show how to use both of these tools to build applications and provide communication between them over HTTP on Kubernetes. Continue reading “Service mesh on Kubernetes with Istio and Spring Boot”

Running Kotlin Microservice on Google Kubernetes Engine

In this article I’ll guide you through the steps required for building and running a simple Kotlin microservice on Google Kubernetes Engine. We will use frameworks and tools like Spring Boot, Skaffold and Jib. Continue reading “Running Kotlin Microservice on Google Kubernetes Engine”

Kubernetes ConfigMap Versioning for Spring Boot Apps

Kubernetes doesn’t provide built-in support for ConfigMap or Secret versioning. Such a feature may sometimes be useful, for example when we decide to roll back the current version of our application. In Kubernetes we are able to roll back just the version of a Deployment, without the additional configuration properties stored in a ConfigMap or Secret. Continue reading “Kubernetes ConfigMap Versioning for Spring Boot Apps”

Using Spring Cloud Kubernetes External Library

In this article I’m going to introduce my newest library for registering Spring Boot applications running outside a Kubernetes cluster. The motivation for creating this library has already been described in detail in my article Spring Cloud Kubernetes for Hybrid Microservices Architecture. Since Spring Cloud Kubernetes doesn’t implement registration in a service registry in any way, and just delegates it to the platform, it does not provide many benefits to applications running outside a Kubernetes cluster. To take advantage of Spring Cloud Kubernetes Discovery you may just include the library spring-cloud-kubernetes-discovery-ext-client in your Spring Boot application running externally. Continue reading “Using Spring Cloud Kubernetes External Library”

Best Practices For Microservices on Kubernetes

There are several best practices for building a microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. There I focused on the most important aspects that should be considered when running microservice applications built on top of Spring Boot in production. I didn’t assume any platform for orchestration or management there, just a group of independent applications. In this article I’m going to extend the list of already introduced best practices with some new rules dedicated especially to microservices deployed on the Kubernetes platform. Continue reading “Best Practices For Microservices on Kubernetes”

Spring Boot Admin on Kubernetes

The main goal of this article is to show how to monitor Spring Boot applications running on Kubernetes with Spring Boot Admin. I wrote about Spring Boot Admin more than two years ago in the article Monitoring Microservices With Spring Boot Admin, where you can find a detailed description of its main features. Since then some new features have been added and the look of the application has been modernized, but the principles of how it works have not changed, so you can still refer to my previous article to understand the main concepts behind Spring Boot Admin. Continue reading “Spring Boot Admin on Kubernetes”

Local Java Development on Kubernetes

There are many tools which may simplify your local development on Kubernetes. For Java applications you may also take advantage of the integration between popular runtime frameworks and Kubernetes. In this article I’m going to present some of the available solutions. Continue reading “Local Java Development on Kubernetes”

Hazelcast with Spring Boot on Kubernetes

Hazelcast is the leading in-memory data grid (IMDG) solution. The main idea behind IMDG is to distribute data across many nodes inside a cluster. Therefore, it seems to be an ideal solution for running on a cloud platform like Kubernetes, where you can easily scale the number of running instances up or down. Since Hazelcast is written in Java you can easily integrate it with your Java application using standard libraries. Spring Boot can also simplify getting started with Hazelcast. You may also use an unofficial library implementing the Spring Repositories pattern for Hazelcast – Spring Data Hazelcast. Continue reading “Hazelcast with Spring Boot on Kubernetes”

Kubernetes Messaging with Java and KubeMQ

Have you ever tried to run a message broker on Kubernetes? KubeMQ is a relatively new solution and is not as popular as competing tools like RabbitMQ, Kafka or ActiveMQ. However, it has one big advantage over them – it is a Kubernetes-native message broker, which may be deployed there with a single command, without preparing any additional templates or manifests. This convinced me to take a closer look at KubeMQ. Continue reading “Kubernetes Messaging with Java and KubeMQ”

Guide To Micronaut Kubernetes

Micronaut provides a library that eases development of applications deployed on Kubernetes or on a local single-node cluster like Minikube. The Micronaut Kubernetes project is relatively new in the Micronaut family; its current release version is 1.0.3. It allows you to integrate a Micronaut application with Kubernetes discovery and to use the Micronaut Configuration Client to read Kubernetes ConfigMaps and Secrets as property sources. Additionally, it provides a health check indicator based on communication with the Kubernetes API. Continue reading “Guide To Micronaut Kubernetes”

Spring Cloud Kubernetes For Hybrid Microservices Architecture

You might use Spring Cloud Kubernetes to build applications running both inside and outside a Kubernetes cluster. The only problem with starting an application outside Kubernetes is that there is no auto-configured registration mechanism. Spring Cloud Kubernetes delegates registration to the platform, which is obvious behaviour if you are deploying your application internally using Kubernetes objects. With an external application the situation is different. In fact, you have to guarantee registration yourself on the application side. Continue reading “Spring Cloud Kubernetes For Hybrid Microservices Architecture”

Microservices With Spring Cloud Kubernetes

Spring Cloud and Kubernetes are popular products applicable to various use cases. However, when it comes to microservices architecture they are sometimes described as competing solutions. Both implement popular patterns in microservices architecture like service discovery, distributed configuration, load balancing or circuit breaking. Of course, they do it differently. Continue reading “Microservices With Spring Cloud Kubernetes”

Part 1: Testing Kafka Microservices With Micronaut

I have already described how to build a microservices architecture based entirely on message-driven communication through Apache Kafka in one of my previous articles, Kafka In Microservices With Micronaut. As you can see from the article title, the sample applications and the integration with Kafka were built on top of the Micronaut Framework. I described some interesting features of Micronaut that can be used for building message-driven microservices, but I deliberately didn’t write anything about testing. In this article I’m going to show you how to test your Kafka microservice using Micronaut Test core features (component tests), Testcontainers (integration tests) and Pact (contract tests).

Continue reading “Part 1: Testing Kafka Microservices With Micronaut”

Deploying Spring Boot Application on OpenShift with Dekorate

More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes, OpenShift provides the S2I (Source-2-Image) mechanism, which may help reduce the time required for preparing application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach to building deployment configuration from source code. Dekorate (https://dekorate.io), a recently created open-source project, tries to solve that problem. The project seems very interesting, which appears to be confirmed by Red Hat, which has already announced the decision to include Dekorate in Red Hat OpenShift Application Runtimes as a “Tech Preview”. Continue reading “Deploying Spring Boot Application on OpenShift with Dekorate”

Quick Guide to Microservices with Quarkus on Openshift

You have had the opportunity to read many articles on my blog about building microservices with frameworks like Spring Boot or Micronaut. There is another very interesting framework dedicated to microservices architecture which is becoming increasingly popular – Quarkus. It is being introduced as a next-generation Kubernetes/OpenShift-native Java framework. It is built on top of well-known Java standards like CDI, JAX-RS and Eclipse MicroProfile, which distinguishes it from Spring Boot. Continue reading “Quick Guide to Microservices with Quarkus on Openshift”

Continuous Delivery with OpenShift and Jenkins: A/B Testing

One of the reasons you could decide to use OpenShift instead of other container platforms (for example Kubernetes) is its out-of-the-box support for continuous delivery pipelines. Without proper tools, the process of releasing software in your organization may be really time-consuming and painful. The speed of that process becomes especially important if you deliver software to production frequently. Currently, the most popular use case for it is a microservices-based architecture, where you have many small, independent applications.

Continue reading “Continuous Delivery with OpenShift and Jenkins: A/B Testing”

Elasticsearch with Spring Boot

Elasticsearch is a full-text search engine especially designed for working with large data sets. Following this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana it is part of the powerful solution called the Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform full-text search over a large data set or just store many historical records that are no longer modified by the application. Of course, there is always a question about the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you first have to think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads the logs and then puts the data into Elasticsearch. You can always move the whole responsibility to the database side (triggers) or to the Elasticsearch side (JDBC plugins). Continue reading “Elasticsearch with Spring Boot”

Microservices Integration Tests with Hoverfly and Testcontainers

Building good integration tests for a system consisting of several microservices may be quite a challenge. Today I’m going to show you how to use tools like Hoverfly and Testcontainers to implement such tests. I have already written about Hoverfly in my previous articles, as well as about Testcontainers. If you are interested in an introduction to these frameworks you may take a look at the following articles:

Continue reading “Microservices Integration Tests with Hoverfly and Testcontainers”

Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework

I have already written many articles where I used Docker containers for running third-party solutions integrated with my sample applications. Building integration tests for such applications may not be an easy task without Docker containers, especially if your application integrates with databases, message brokers or other popular tools. If you are planning to build such integration tests you should definitely take a look at Testcontainers (https://www.testcontainers.org/). Testcontainers is a Java library that supports JUnit tests, providing a fast and lightweight way of running instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. It provides modules for the most popular relational and NoSQL databases like Postgres, MySQL, Cassandra or Neo4j. It also allows you to run popular products like Elasticsearch, Kafka, Nginx or HashiCorp’s Vault. Today I’m going to show you a more advanced sample of JUnit tests that use Testcontainers to check the integration between a Spring Boot/Spring Cloud application, a Postgres database and Vault. For the purposes of this example we will use the case described in one of my previous articles, Secure Spring Cloud Microservices with Vault and Nomad. Let us recall that use case. Continue reading “Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework”
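
To get a feeling of how Testcontainers works, here is a minimal, self-contained JUnit 4 sketch (it is not taken from the sample project mentioned above, and the class name and image tag are illustrative) that spins up a throwaway Postgres instance in Docker and opens a JDBC connection to it.

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

import static org.junit.Assert.assertTrue;

public class PostgresContainerTest {

	// Starts a disposable Postgres container before the tests and stops it afterwards.
	@ClassRule
	public static final PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:11");

	@Test
	public void shouldConnectToPostgres() throws Exception {
		try (Connection connection = DriverManager.getConnection(
				postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
			assertTrue(connection.isValid(1));
		}
	}
}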

Running Java Microservices on OpenShift using Source-2-Image

One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, used for building reproducible Docker images from application source code. With S2I you don’t have to provide any Kubernetes YAML templates or build the Docker image by yourself – OpenShift will do it for you. Let’s see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample applications’ source code. Continue reading “Running Java Microservices on OpenShift using Source-2-Image”

RabbitMQ Cluster with Consul and Vault

Almost two years ago I wrote an article about RabbitMQ clustering, RabbitMQ in cluster. It was one of the first posts on my blog, and it’s really hard to believe it has been two years since I started this blog. Anyway, one of the questions about the topic described in that article inspired me to return to the subject one more time. That question pointed to a problem with the approach to setting up the cluster. That approach assumes that we manually attach new nodes to the cluster by executing the command rabbitmqctl join_cluster with the cluster name as a parameter. If I remember correctly, it was the only available method of creating a cluster at that time. Today we have more choices, which illustrates the evolution of RabbitMQ over the last two years. Continue reading “RabbitMQ Cluster with Consul and Vault”

Integration tests on OpenShift using Arquillian Cube and Istio

Building integration tests for applications deployed on Kubernetes/OpenShift platforms seems to be quite a big challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not that complicated. The Kubernetes extension, which is part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests and applying all Kubernetes resources required to set up your environment, and once everything is ready it just runs the defined integration tests.
One very nice thing about Arquillian Cube is that it supports the Istio framework, so you can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers and fault injection. It allows you to test some unexpected situations in network communication between microservices, like server errors or timeouts.
If you would like to run some tests using Istio resources on Minishift you should first install Istio on your platform. To do that you need to change some privileges for your OpenShift user. Let’s do that.

1. Enabling Istio on Minishift

Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.

$ minishift addons enable admin-user

After that you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also add the cluster-admin role to other users, for example admin. Let’s do that.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

Now, let’s create a new project dedicated to Istio and then add some required privileges.

$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject

Finally, we may proceed with the installation of the Istio components. I downloaded the newest version of Istio available at the time – 1.0.1. The installation file is available in the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.

$ oc apply -f install/kubernetes/istio-demo.yaml

2. Enabling Istio for Arquillian Cube

I have already described how to use Arquillian Cube to run tests on OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article, we need to include the dependency responsible for enabling Istio features.

<dependency>
	<groupId>org.arquillian.cube</groupId>
	<artifactId>arquillian-cube-istio-kubernetes</artifactId>
	<version>1.17.1</version>
	<scope>test</scope>
</dependency>

Now, we can use the @IstioResource annotation to apply Istio resources to the OpenShift cluster, or the IstioAssistant bean to access additional methods for adding and removing resources programmatically or polling the availability of URLs.
Let’s take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard test created for running on an OpenShift instance, I have added the Istio resource file customer-to-account-route.yaml. Then I have invoked the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably found out, the last line of that test method contains an assertion that the list of accounts is empty: Assert.assertTrue(c.getAccounts().isEmpty()). Why? Because we will simulate a timeout in the communication between customer-service and account-service using Istio rules.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@Templates(templates = {
        @Template(url = "classpath:account-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml")
})
@RunWith(ArquillianConditionalRunner.class)
@IstioResource("classpath:customer-to-account-route.yaml")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class IstioRuleTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class);
    private static String id;

    @ArquillianResource
    private IstioAssistant istioAssistant;

    @RouteURL(value = "customer-route", path = "/customer")
    private URL customerUrl;
    @RouteURL(value = "account-route", path = "/account")
    private URL accountUrl;

    @Test
    public void test1CustomerRoute() {
        LOGGER.info("URL: {}", customerUrl);
        istioAssistant.await(customerUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url(customerUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            this.id = c.getId();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public  void test2AccountRoute() {
        LOGGER.info("Route URL: {}", accountUrl);
        istioAssistant.await(accountUrl, r -> r.isSuccessful());
        LOGGER.info("URL ready. Proceeding to the test");
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}");
        Request request = new Request.Builder().url(accountUrl).post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            ResponseBody b = response.body();
            String json = b.string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(b);
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void test3GetCustomerWithAccounts() {
        String url = customerUrl + "/" + id;
        LOGGER.info("Calling URL: {}", url);
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url(url).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            String json = response.body().string();
            LOGGER.info("Test: response={}", json);
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
            Customer c = Json.decodeValue(json, Customer.class);
            Assert.assertTrue(c.getAccounts().isEmpty());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

3. Creating Istio rules

One of the interesting features provided by Istio is the ability to inject faults into routing rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule’s corresponding request destination. The faults can be either delays or aborts, and we can define the percentage of affected requests using the percent field for both types of fault. In the following Istio resource I have defined a 2-second delay for every single request sent to account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

Before running the test we should also modify the OpenShift deployment templates of our sample applications. We need to inject the Istio sidecar into the pod definitions using the istioctl kube-inject command, as shown below.

$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml

Finally, we may rewrite the generated files into OpenShift templates. Here’s a fragment of the OpenShift template containing the DeploymentConfig definition for account-service.

kind: Template
apiVersion: v1
metadata:
  name: account-template
objects:
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: account-service
      labels:
        app: account-service
        name: account-service
        version: v1
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
          name: account-service
          labels:
            app: account-service
            name: account-service
            version: v1
        spec:
          containers:
          - env:
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            image: piomin/account-vertx-service
            name: account-vertx-service
            ports:
            - containerPort: 8095
            resources: {}
          - args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - account-service
            - --drainDuration
            - 45s
            - --parentShutdownDuration
            - 1m0s
            - --discoveryAddress
            - istio-pilot.istio-system:15007
            - --discoveryRefreshDelay
            - 1s
            - --zipkinAddress
            - zipkin.istio-system:9411
            - --connectTimeout
            - 10s
            - --statsdUdpAddress
            - istio-statsd-prom-bridge.istio-system:9125
            - --proxyAdminPort
            - "15000"
            - --controlPlaneAuthPolicy
            - NONE
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ISTIO_META_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ISTIO_META_INTERCEPTION_MODE
              value: REDIRECT
            image: gcr.io/istio-release/proxyv2:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-proxy
            resources:
              requests:
                cpu: 10m
            securityContext:
              readOnlyRootFilesystem: true
              runAsUser: 1337
            volumeMounts:
            - mountPath: /etc/istio/proxy
              name: istio-envoy
            - mountPath: /etc/certs/
              name: istio-certs
              readOnly: true
          initContainers:
          - args:
            - -p
            - "15001"
            - -u
            - "1337"
            - -m
            - REDIRECT
            - -i
            - '*'
            - -x
            - ""
            - -b
            - 8095,
            - -d
            - ""
            image: gcr.io/istio-release/proxy_init:1.0.1
            imagePullPolicy: IfNotPresent
            name: istio-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
          volumes:
          - emptyDir:
              medium: Memory
            name: istio-envoy
          - name: istio-certs
            secret:
              optional: true
              secretName: istio.default

4. Building applications

The sample applications are implemented using the Eclipse Vert.x framework. They use a Mongo database for storing data. The connection settings are injected into the pods using Kubernetes Secrets.

public class MongoVerticle extends AbstractVerticle {

	private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class);

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			LOGGER.info("Connecting {} using {}/{}", db, user, password);
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final CustomerRepository service = new CustomerRepositoryImpl(client);
			ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service");	
		});
	}
}

MongoDB should be started on OpenShift before any applications that connect to it. To achieve this we should set the Mongo deployment resource in the Arquillian configuration file using the env.config.resource.name property.
The configuration of Arquillian Cube is visible below. We will use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the user admin. You can collect it using the command oc whoami -t after logging in to the OpenShift cluster.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">myproject</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property>
	<property name="env.config.resource.name">mongo-deployment.yaml</property>
</extension>

Communication between customer-service and account-service is realized by the Vert.x WebClient. We will set the read timeout for the client to 1 second. Because Istio injects a 2-second delay into the route, the communication is going to end with a timeout.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List<Account>>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> {
			if (res2.succeeded()) {
				LOGGER.info("Response: {}", res2.result().bodyAsString());
				List<Account> accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
				resultHandler.handle(Future.succeededFuture(accounts));
			} else {
				resultHandler.handle(Future.succeededFuture(new ArrayList<>()));
			}
		});
		return this;
	}
}

The full code of sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.

5. Running tests

You can run the tests during the Maven build or just from your IDE. The test1CustomerRoute test is executed first. It adds a new customer and saves the generated id for the two subsequent tests.


The next test is test2AccountRoute. It adds an account for the customer created during the previous test.


Finally, the test responsible for verifying communication between microservices is run. It verifies that the list of accounts is empty, which is a result of the timeout in communication with account-service.


Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker

Here’s the next article in the series “Quick Guide to…”. This time we will discuss and run examples of Spring Boot microservices on Kubernetes. The structure of this article will be quite similar to Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud, as they describe the same aspects of application development. I’m going to focus on showing you the differences and similarities in development between Spring Cloud and Kubernetes. The topics covered in this article are:

  • Using Spring Boot 2.0 in cloud-native development
  • Providing service discovery for all microservices using Spring Cloud Kubernetes project
  • Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets
  • Building application images using Docker and deploying them on Kubernetes using YAML configuration files
  • Using Spring Cloud Kubernetes together with Zuul proxy to expose a single Swagger API documentation for all microservices

Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components like Eureka, Spring Cloud Config or Zuul provided by Spring Cloud may be replaced by built-in Kubernetes objects like services, config maps, secrets or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can take advantage of some interesting features provided throughout the whole Spring Cloud project.

One really interesting project that helps us in development is Spring Cloud Kubernetes (https://github.com/spring-cloud-incubator/spring-cloud-kubernetes). Although it is still in the incubation stage, it is definitely worth dedicating some time to it. It integrates Spring Cloud with Kubernetes. I’ll show you how to use its implementation of the discovery client, inter-service communication with the Ribbon client, and Zipkin discovery using Spring Cloud Kubernetes.

Before we proceed to the source code, let’s take a look at the following diagram. It illustrates the architecture of our sample system. It is quite similar to the architecture presented in the already mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service) which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for an API gateway.


Let’s proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring Cloud Kubernetes is not released under the Spring Cloud Release Train, so we need to define its version explicitly. Because we use Spring Boot 2.0 we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.

The source code of the sample applications presented in this article is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-kubernetes.git.

Pre-requirements

In order to be able to deploy and test our sample microservices we need to prepare a development environment. We can do that in the following steps:

  • You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. The detailed instructions for Minishift may be found in Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube – just replace the word ‘minishift’ with ‘minikube’. In fact, it does not matter whether you choose Kubernetes or OpenShift – the next part of this tutorial is applicable to both of them
  • Spring Cloud Kubernetes requires access to the Kubernetes API in order to be able to retrieve the list of addresses of the pods running for a single service. If you use Kubernetes you should just execute the following command:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default

If you deploy your microservices on Minishift you should first enable the admin-user addon, then log in as a cluster admin and grant the required permissions.

$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
  • All our sample microservices use MongoDB as a backend store, so you should first run an instance of this database on your node. With Minishift it is quite simple, as you can use a predefined template just by selecting the Mongo service from the Catalog list. With Kubernetes the task is a little more difficult: you have to prepare the deployment configuration files by yourself and apply them to the cluster. All the configuration files are available under the kubernetes directory inside the sample Git repository. To apply the following YAML definition to the cluster you should execute the command kubectl apply -f kubernetes\mongo-deployment.yaml. After that the Mongo database will be available under the name mongodb inside the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    protocol: TCP
  selector:
    app: mongodb

1. Inject configuration with Config Maps and Secrets

When using Spring Cloud, the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes you can use a ConfigMap. It holds key-value pairs of configuration data that can be consumed in pods, and it is used for storing and sharing non-sensitive, unencrypted configuration information. To use sensitive information in your clusters, you must use Secrets. The usage of both of these Kubernetes objects can be nicely demonstrated with the example of MongoDB connection settings. Inside a Spring Boot application we can easily inject them using environment variables. Here’s a fragment of the application.yml file with the URI configuration.

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

While the username and password are sensitive fields, the database name is not, so we can place it inside the config map.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices

Of course, username and password are defined as secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=

To apply the configuration to the Kubernetes cluster we run the following commands.

$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml

After that we should inject the configuration properties into the application’s pods. When defining the container configuration inside the Deployment YAML file we have to include references to the config map and secret, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
      - name: employee
        image: piomin/employee:1.0
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mongodb
              key: database-name
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-user
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb
              key: database-password

2. Building service discovery with Kubernetes

We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster, and is able to load balance incoming requests between all running pods. The following service definition groups all pods labelled with the field app equal to employee.

apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee

A Service can be used for accessing an application from outside the Kubernetes cluster or for inter-service communication inside a cluster. However, communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First we need to include the following dependency in the project’s pom.xml.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>

Then we should enable the discovery client for the application – the same as we have always done with Spring Cloud Netflix Eureka discovery. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of pods defined for a microservice to be load balanced and the Zipkin servers available for sending traces or spans.

@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmployeeApplication.class, args);
	}
	
	// ...
}
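
To illustrate what the discovery client gives us, here is a short sketch (it is not part of the sample project; the controller and path are illustrative) that returns the addresses of all pods backing the employee service through the standard Spring Cloud DiscoveryClient API.

import java.net.URI;
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DiscoveryController {

	@Autowired
	private DiscoveryClient discoveryClient;

	// Lists the URIs of all pods registered for the Kubernetes service named "employee".
	@GetMapping("/discovery/employee")
	public List<URI> employeeInstances() {
		return discoveryClient.getInstances("employee").stream()
				.map(ServiceInstance::getUri)
				.collect(Collectors.toList());
	}
}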

The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name for the application. For the application employee-service it is employee.

spring:
  application:
    name: employee

3. Building microservice using Docker and deploying on Kubernetes

There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB and generating API documentation using Swagger2.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

In order to integrate with MongoDB we should create an interface that extends the standard Spring Data CrudRepository.

public interface EmployeeRepository extends CrudRepository<Employee, String> {
	
	List<Employee> findByDepartmentId(Long departmentId);
	List<Employee> findByOrganizationId(Long organizationId);
	
}

The entity class should be annotated with Mongo’s @Document and the primary key field with @Id.

@Document(collection = "employee")
public class Employee {

	@Id
	private String id;
	private Long organizationId;
	private Long departmentId;
	private String name;
	private int age;
	private String position;
	
	// ...
	
}

The repository bean has been injected into the controller class. Here’s the full implementation of our REST API inside employee-service.

@RestController
public class EmployeeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
	@Autowired
	EmployeeRepository repository;
	
	@PostMapping("/")
	public Employee add(@RequestBody Employee employee) {
		LOGGER.info("Employee add: {}", employee);
		return repository.save(employee);
	}
	
	@GetMapping("/{id}")
	public Employee findById(@PathVariable("id") String id) {
		LOGGER.info("Employee find: id={}", id);
		return repository.findById(id).get();
	}
	
	@GetMapping("/")
	public Iterable<Employee> findAll() {
		LOGGER.info("Employee find");
		return repository.findAll();
	}
	
	@GetMapping("/department/{departmentId}")
	public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
		LOGGER.info("Employee find: departmentId={}", departmentId);
		return repository.findByDepartmentId(departmentId);
	}
	
	@GetMapping("/organization/{organizationId}")
	public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Employee find: organizationId={}", organizationId);
		return repository.findByOrganizationId(organizationId);
	}
	
}

In order to run our microservices on Kubernetes we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in its root directory. Here’s the Dockerfile definition for employee-service.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Let’s build Docker images for all three sample microservices.

$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd department-service
$ docker build -t piomin/department:1.0 .
$ cd organization-service
$ docker build -t piomin/organization:1.0 .

The last step is to deploy the Docker containers with the applications on Kubernetes. To do that just execute the kubectl apply command on the YAML configuration files. The sample deployment file for employee-service has been demonstrated in step 1. All required deployment files are available inside the project repository in the kubernetes directory.

$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml

4. Communication between microservices with Spring Cloud Kubernetes Ribbon

All the microservices are now deployed on Kubernetes, so it’s worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let’s take a look at the other microservices, which call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).
First we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use a Spring @LoadBalanced RestTemplate (a short sketch of that alternative is shown right after the dependency list).

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
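
For completeness, here is a minimal sketch of the @LoadBalanced RestTemplate alternative mentioned above. The configuration and client classes are illustrative and not part of the sample repository; Employee is assumed to be the local DTO used by department-service.

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
class RestTemplateConfig {

	// The @LoadBalanced qualifier makes Ribbon resolve the host name in the URL
	// ("employee") through the discovery client instead of DNS.
	@LoadBalanced
	@Bean
	RestTemplate loadBalancedRestTemplate() {
		return new RestTemplate();
	}
}

@Service
class EmployeeRestClient {

	@Autowired
	private RestTemplate restTemplate;

	// Calls the same endpoint as the Feign client shown later in this section.
	public List<Employee> findByDepartment(String departmentId) {
		return restTemplate.exchange("http://employee/department/{departmentId}",
				HttpMethod.GET, null,
				new ParameterizedTypeReference<List<Employee>>() {},
				departmentId).getBody();
	}
}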

Here’s the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing. Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {
	
	public static void main(String[] args) {
		SpringApplication.run(DepartmentApplication.class, args);
	}
	
	// ...
	
}

Here’s the implementation of the Feign client for calling the method exposed by employee-service.

@FeignClient(name = "employee")
public interface EmployeeClient {

	@GetMapping("/department/{departmentId}")
	List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
	
}

Finally, we have to inject the Feign client’s bean into the REST controller. Now we may call the method defined inside EmployeeClient, which is equivalent to calling REST endpoints.

@RestController
public class DepartmentController {

	private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
	
	@Autowired
	DepartmentRepository repository;
	@Autowired
	EmployeeClient employeeClient;
	
	// ...
	
	@GetMapping("/organization/{organizationId}/with-employees")
	public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
		LOGGER.info("Department find: organizationId={}", organizationId);
		List<Department> departments = repository.findByOrganizationId(organizationId);
		departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
		return departments;
	}
	
}

5. Building API gateway using Kubernetes Ingress

An Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture the ingress plays the role of an API gateway. To create it we should first prepare a YAML description file. The descriptor file should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: microservices.info
    http:
      paths:
      - path: /employee
        backend:
          serviceName: employee
          servicePort: 8080
      - path: /department
        backend:
          serviceName: department
          servicePort: 8080
      - path: /organization
        backend:
          serviceName: organization
          servicePort: 8080

You have to execute the following command to apply the configuration visible above to the Kubernetes cluster.

$ kubectl apply -f kubernetes\ingress.yaml

For testing this solution locally we have to insert a mapping between the IP address and the hostname set in the ingress definition into the hosts file, as shown below. After that we can access the services through the ingress using the defined hostname, for example: http://microservices.info/employee.

192.168.99.100 microservices.info

You can check the details of the created ingress just by executing the command kubectl describe ing gateway-ingress.

6. Enabling API specification on gateway using Swagger2

OK, what if we would like to expose a single Swagger documentation for all microservices deployed on Kubernetes? Well, here things get complicated… We could run a container with Swagger UI and map all paths exposed by the ingress manually, but that is rather not a good solution…
In that case we can use Spring Cloud Kubernetes Ribbon one more time – this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API.
Here’s the list of dependencies used in my gateway-service project.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
	<version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.9.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.9.2</version>
</dependency>

The Kubernetes discovery client will detect all services exposed on the cluster. We would like to display documentation only for our three microservices. That’s why I defined the following routes for Zuul.

zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**

Now we can use the ZuulProperties bean to get the route addresses from Kubernetes discovery and configure them as Swagger resources, as shown below.

@Configuration
public class GatewayApi {

	@Autowired
	ZuulProperties properties;

	@Primary
	@Bean
	public SwaggerResourcesProvider swaggerResourcesProvider() {
		return () -> {
			List<SwaggerResource> resources = new ArrayList<>();
			properties.getRoutes().values().stream()
					.forEach(route -> resources.add(createResource(route.getId(), "2.0")));
			return resources;
		};
	}

	private SwaggerResource createResource(String location, String version) {
		SwaggerResource swaggerResource = new SwaggerResource();
		swaggerResource.setName(location);
		swaggerResource.setLocation("/" + location + "/v2/api-docs");
		swaggerResource.setSwaggerVersion(version);
		return swaggerResource;
	}

}
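
The aggregation above assumes that each microservice exposes its own Swagger descriptor under /v2/api-docs. With Springfox this usually comes down to a configuration class similar to the sketch below in every service (the class name is illustrative, not taken from the sample code).

@Configuration
@EnableSwagger2
public class SwaggerConfig {

	@Bean
	public Docket api() {
		// expose all REST controllers of this service under /v2/api-docs
		return new Docket(DocumentationType.SWAGGER_2)
				.select()
				.apis(RequestHandlerSelectors.any())
				.paths(PathSelectors.any())
				.build();
	}

}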

The gateway-service application should be deployed on the cluster in the same way as the other applications. You can view the list of running services by executing the command kubectl get svc. The Swagger documentation is available at http://192.168.99.100:31237/swagger-ui.html.
micro-kube-3

Conclusion

I’m really rooting for the Spring Cloud Kubernetes project, which is still at the incubation stage. Kubernetes’ popularity as a platform has been growing rapidly over the last months, but it still has some weaknesses. One of them is inter-service communication: Kubernetes doesn’t give us many out-of-the-box mechanisms for configuring more advanced routing rules. This is one reason for creating service mesh frameworks for Kubernetes like Istio or Linkerd. While those projects are still relatively new, Spring Cloud is a stable, opinionated framework. Why not use it to provide service discovery, inter-service communication or load balancing? Thanks to Spring Cloud Kubernetes it is possible.

Local Continuous Delivery Environment with Docker and Jenkins

In this article I’m going to show you how to set up a continuous delivery environment for building Docker images of our Java applications on the local machine. Our environment will consist of GitLab (optional, otherwise you can use hosted GitHub), a Jenkins master, a Jenkins JNLP slave with Docker, and a private Docker registry. All those tools will be run locally using their Docker images. Thanks to that you will be able to easily test the setup on your laptop, and then configure the same environment on production deployed on multiple servers or VMs. Let’s take a look at the architecture of the proposed solution.

art-docker-1

1. Running Jenkins Master

We use the latest Jenkins LTS image. The Jenkins Web Dashboard is exposed on port 38080. Slave agents may connect to the master on the default JNLP (Java Web Start) port 50000.

$ docker run -d --name jenkins -p 38080:8080 -p 50000:50000 jenkins/jenkins:lts

After starting the container, execute the command docker logs jenkins in order to obtain the initial admin password. Find the following fragment in the logs, copy the generated password and paste it on the Jenkins start page available at http://192.168.99.100:38080.

art-docker-2

We have to install some Jenkins plugins to be able to check out a project from a Git repository, build the application from source code using Maven, and finally build and push a Docker image to a private registry. Here’s the list of required plugins (an installation example follows the list):

  • Git Plugin – this plugin allows you to use Git as a build SCM
  • Maven Integration Plugin – this plugin provides advanced integration for Maven 2/3
  • Pipeline Plugin – this is a suite of plugins that lets you define continuous delivery pipelines as code and run them in Jenkins
  • Docker Pipeline Plugin – this plugin allows you to build and use Docker containers from pipelines
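
You can install the plugins through Manage Jenkins -> Manage Plugins, or from the command line using the Jenkins CLI. The snippet below is only a sketch of the CLI approach; the plugin identifiers (git, maven-plugin, workflow-aggregator, docker-workflow) are the commonly used IDs of the plugins listed above, so verify them against your Jenkins instance before running it.

$ curl -O http://192.168.99.100:38080/jnlpJars/jenkins-cli.jar
$ java -jar jenkins-cli.jar -s http://192.168.99.100:38080 -auth admin:<api-token> install-plugin git maven-plugin workflow-aggregator docker-workflow -restart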

2. Building Jenkins Slave

Pipelines are usually run on a different machine than the one hosting the master node. Moreover, we need to have the Docker engine installed on that slave machine to be able to build Docker images. Although there are some ready-made Docker images with Docker-in-Docker and the Jenkins client agent, I have never found one with JDK, Maven, Git and Docker installed together. These are the most commonly used tools when building images for your microservices, so it is definitely worth preparing such a Jenkins slave image.

Here’s the Dockerfile for a Jenkins Docker-in-Docker slave with Git, Maven and OpenJDK installed. I used Docker-in-Docker as the base image (1). We can override some properties when running our container: you will probably have to override the default Jenkins master address (2) and the slave secret key (3). The rest of the parameters are optional, and you can even decide to use an external Docker daemon by overriding the DOCKER_HOST environment variable. We also download and install Maven (4) and create a user with special sudo rights for running Docker (5). Finally, we run the entrypoint.sh script, which starts the Docker daemon and the Jenkins agent (6).

# (1)
FROM docker:18-dind
MAINTAINER Piotr Minkowski
# (2)
ENV JENKINS_URL http://localhost:8080
ENV JENKINS_SLAVE_NAME dind-node
# (3)
ENV JENKINS_SLAVE_SECRET ""
ENV JENKINS_HOME /home/jenkins
ENV JENKINS_REMOTING_VERSION 3.17
ENV DOCKER_HOST tcp://0.0.0.0:2375
RUN apk --update add curl tar git bash openjdk8 sudo

# (4)
ARG MAVEN_VERSION=3.5.2
ARG USER_HOME_DIR="/root"
ARG SHA=707b1f6e390a65bde4af4cdaf2a24d45fc19a6ded00fff02e91626e3e42ceaff
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
  && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA}  /tmp/apache-maven.tar.gz" | sha256sum -c - \
  && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
  && rm -f /tmp/apache-maven.tar.gz \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# (5)
RUN adduser -D -h $JENKINS_HOME -s /bin/sh jenkins jenkins && chmod a+rwx $JENKINS_HOME
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/dockerd" > /etc/sudoers.d/00jenkins && chmod 440 /etc/sudoers.d/00jenkins
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/docker" > /etc/sudoers.d/01jenkins && chmod 440 /etc/sudoers.d/01jenkins
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/$JENKINS_REMOTING_VERSION/remoting-$JENKINS_REMOTING_VERSION.jar && chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar

COPY entrypoint.sh /usr/local/bin/entrypoint
VOLUME $JENKINS_HOME
WORKDIR $JENKINS_HOME
USER jenkins
ENTRYPOINT ["/usr/local/bin/entrypoint"] # (6)

Here’s the script entrypoint.sh.

#!/bin/sh
set -e
echo "starting dockerd..."
sudo dockerd --host=unix:///var/run/docker.sock --host=$DOCKER_HOST --storage-driver=vfs &
echo "starting jnlp slave..."
exec java -jar /usr/share/jenkins/slave.jar \
	-jnlpUrl $JENKINS_URL/computer/$JENKINS_SLAVE_NAME/slave-agent.jnlp \
	-secret $JENKINS_SLAVE_SECRET

The source code with the image definition is available on GitHub. You can clone the repository https://github.com/piomin/jenkins-slave-dind-jnlp.git, build the image and then start the container using the following commands.

$ docker build -t piomin/jenkins-slave-dind-jnlp .
$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=5664fe146104b89a1d2c78920fd9c5eebac3bd7344432e0668e366e2d3432d3e -e JENKINS_SLAVE_NAME=dind-node-1 -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

Building it yourself is optional, because the image is already available on my Docker Hub account.

art-docker-3

3. Enabling Docker-in-Docker Slave

To add a new slave node, navigate to Manage Jenkins -> Manage Nodes -> New Node. Then define a permanent node with the name parameter filled in. The most suitable name is the default declared inside the Docker image definition – dind-node. You also have to set the remote root directory, which should be equal to the path defined inside the container for the JENKINS_HOME environment variable – in my case /home/jenkins. The slave node should be launched via Java Web Start (JNLP).

art-docker-4

The new node is visible on the list of nodes as disabled. You should click it in order to obtain its secret key.

art-docker-5

Finally, you may run your slave container using the following command containing the secret copied from the node’s panel in the Jenkins Web Dashboard.

$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=fd14247b44bb9e03e11b7541e34a177bdcfd7b10783fa451d2169c90eb46693d -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

If everything went according to plan you should see the enabled node dind-node on the list of nodes.

art-docker-6

4. Setting up Docker Private Registry

After deploying the Jenkins master and slave, the last required element of the architecture that has to be launched is the private Docker registry. Because we will access it remotely (from the Docker-in-Docker container) we have to configure a secure TLS/SSL connection. To achieve this we should first generate a TLS certificate and key using the openssl tool. We begin by generating a private key.

$ openssl genrsa -des3 -out registry.key 1024

Then, we should generate a certificate request file (CSR) by executing the following command.

$ openssl req -new -key registry.key -out registry.csr

Finally, we can generate a self-signed SSL certificate that is valid for 1 year using openssl command as shown below.

$ openssl x509 -req -days 365 -in registry.csr -signkey registry.key -out registry.crt

Don’t forget to remove passphrase from your private key.

$ openssl rsa -in registry.key -out registry-nopass.key -passin pass:123456

You should copy the generated .key and .crt files to your Docker machine. After that you may run the Docker registry using the following command.

docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt -e REGISTRY_HTTP_TLS_KEY=/certs/registry-nopass.key registry:2

If the registry has been started successfully, you should be able to access it over HTTPS by calling https://192.168.99.100:5000/v2/_catalog from your web browser.
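
Because the certificate is self-signed, every Docker daemon that pushes to or pulls from the registry (including the Docker-in-Docker slave) has to trust it. A common way to do that, assuming Docker’s default certificate lookup directory, is to copy the certificate under /etc/docker/certs.d into a directory named after the registry address.

$ sudo mkdir -p /etc/docker/certs.d/192.168.99.100:5000
$ sudo cp registry.crt /etc/docker/certs.d/192.168.99.100:5000/ca.crt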

5. Creating application Dockerfile

The sample applications’ source code is available on GitHub in the repository sample-spring-microservices-new (https://github.com/piomin/sample-spring-microservices-new.git). It contains several microservice modules, and each of them has a Dockerfile in its root directory. Here’s a typical Dockerfile for a microservice built on top of Spring Boot.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

6. Building pipeline through Jenkinsfile

This step is the most important phase of our exercise. We will prepare the pipeline definition, which combines all the tools and solutions discussed so far. This pipeline definition is part of every sample application’s source code. A change in the Jenkinsfile is treated the same as a change in the source code responsible for implementing business logic.
Every pipeline is divided into stages, and every stage defines a subset of tasks performed within the pipeline. We can select the node responsible for executing the pipeline’s steps, or leave it empty to allow any node to be selected. Because we have already prepared a dedicated node with Docker, we force the pipeline to be built on that node. In the first stage, called Checkout, we pull the source code from the Git repository (1). Then we build the application binary using a Maven command (2). Once the fat JAR file has been prepared we may proceed to building the application’s Docker image (3) using methods provided by the Docker Pipeline Plugin. Finally, we push the Docker image with the fat JAR file to the secure private Docker registry (4). Such an image may be accessed by any machine that has Docker installed and has access to our Docker registry. Here’s the full Jenkinsfile prepared for the config-service module.

node('dind-node') {
    stage('Checkout') { // (1)
      git url: 'https://github.com/piomin/sample-spring-microservices-new.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Build') { // (2)
      dir('config-service') {
        sh 'mvn clean install'
        def pom = readMavenPom file:'pom.xml'
        print pom.version
        env.version = pom.version
        currentBuild.description = "Release: ${env.version}"
      }
    }
    stage('Image') {
      dir ('config-service') {
        docker.withRegistry('https://192.168.99.100:5000') {
          def app = docker.build "piomin/config-service:${env.version}" // (3)
          app.push() // (4)
        }
      }
    }
}

7. Creating Pipeline in Jenkins Web Dashboard

After preparing the application’s source code, Dockerfile and Jenkinsfile, the only thing left is to create the pipeline using the Jenkins UI. We need to select New Item -> Pipeline and type the name of our first Jenkins pipeline. Then go to the Configure panel and select Pipeline script from SCM in the Pipeline section. In the form that appears we should fill in the address of the Git repository, the user credentials and the location of the Jenkinsfile.

art-docker-7

8. Configuring a GitLab WebHook (Optional)

If you run GitLab locally using its Docker image, you will be able to configure a webhook that triggers a run of your pipeline after pushing changes to the Git repository. To run GitLab using Docker, execute the following command.

$ docker run -d --name gitlab -p 10443:443 -p 10080:80 -p 10022:22 gitlab/gitlab-ce:latest

Before configuring the webhook in the GitLab Dashboard we need to enable this feature for the Jenkins pipeline. To achieve this we should first install the GitLab Plugin.

art-docker-8

Then, you should come back to the pipeline’s configuration panel and enable the GitLab build trigger. After that, the webhook will be available for our sample pipeline, called config-service-pipeline, under the URL http://192.168.99.100:38080/project/config-service-pipeline as shown in the following picture.

art-docker-9

Before proceeding to the webhook configuration in the GitLab Dashboard you should retrieve your Jenkins user API token. To do that, go to the profile panel, select Configure and click the Show API Token button.

art-docker-10

To add a new webhook for your Git repository, you need to go to the Settings -> Integrations section and fill the URL field with the webhook address copied from the Jenkins pipeline. Then paste the Jenkins user API token into the Secret Token field. Leave the Push events checkbox selected.

art-docker-11

9. Running pipeline

Now we may finally run our pipeline. If you use the GitLab Docker container as your Git repository platform, you just have to push a change to the source code. Otherwise you have to start the pipeline build manually. The first build will take a few minutes, because Maven has to download the dependencies required for building the application. If everything ends with success you should see the following result on your pipeline dashboard.

art-docker-13

You can check out the list of images stored in your private Docker registry by calling the following HTTP API endpoint in your web browser: https://192.168.99.100:5000/v2/_catalog.

art-docker-12

Testing microservices on OpenShift using Arquillian Cube

I came across the Arquillian framework for the first time when I was building automated end-to-end tests for JavaEE-based applications. At that time, testing applications deployed on JavaEE servers was not very comfortable. Arquillian offered a nice solution to that problem, providing useful mechanisms for testing EJBs deployed on an embedded application server.
Currently, Arquillian provides multiple modules dedicated to different technologies and use cases. One of these modules is Arquillian Cube. With this extension you can create integration/functional tests running on Docker containers or even on more advanced orchestration platforms like Kubernetes or OpenShift.
In this article I’m going to show you how to use Arquillian Cube for building integration tests for applications running on the OpenShift platform. All the examples will be deployed locally on Minishift. Here’s the full list of topics covered in this article:

  • Using Arquillian Cube for deploying, and running applications on Minishift
  • Testing applications deployed on Minishift by calling their REST API exposed using OpenShift routes
  • Testing inter-service communication between deployed applications based on Kubernetes services

Before reading this article, it is worth taking a look at my previous articles about Kubernetes and OpenShift.

The following picture illustrates the architecture of the discussed solution. We will build and deploy two sample applications on Minishift. They integrate with a NoSQL database, which also runs as a service on the OpenShift platform.

arquillian-1

Now, we may proceed to the development.

1. Including Arquillian Cube dependencies

Before including the Arquillian Cube dependencies we should define a dependency management section in our pom.xml. It should contain the BOM of the Arquillian framework and also of its Cube extension.

<dependencyManagement>
     <dependencies>
          <dependency>
                <groupId>org.arquillian.cube</groupId>
                <artifactId>arquillian-cube-bom</artifactId>
                <version>1.15.3</version>
                <scope>import</scope>
                <type>pom</type>
          </dependency>
          <dependency>
                <groupId>org.jboss.arquillian</groupId>
                <artifactId>arquillian-bom</artifactId>
                <version>1.4.0.Final</version>
                <scope>import</scope>
                <type>pom</type>
          </dependency>
     </dependencies>
</dependencyManagement>

Here’s the list of libraries used in my sample project. The most important thing is to include the starter for the Arquillian Cube OpenShift extension, which contains all required dependencies. It is also worth including the arquillian-cube-requirement artifact if you would like to annotate the test class with @RunWith(ArquillianConditionalRunner.class), and openshift-client in case you would like to use the Fabric8 OpenShiftClient.

<dependency>
     <groupId>org.jboss.arquillian.junit</groupId>
     <artifactId>arquillian-junit-container</artifactId>
     <version>1.4.0.Final</version>
     <scope>test</scope>
</dependency>
<dependency>
     <groupId>org.arquillian.cube</groupId>
     <artifactId>arquillian-cube-requirement</artifactId>
     <scope>test</scope>
</dependency>
<dependency>
     <groupId>org.arquillian.cube</groupId>
     <artifactId>arquillian-cube-openshift-starter</artifactId>
     <scope>test</scope>
</dependency>
<dependency>
     <groupId>io.fabric8</groupId>
     <artifactId>openshift-client</artifactId>
     <version>3.1.12</version>
     <scope>test</scope>
</dependency>

2. Running Minishift

I gave detailed instructions on how to run Minishift locally in my previous articles about OpenShift. Here’s the full list of commands that should be executed in order to start Minishift, reuse the Docker daemon managed by Minishift and create a test namespace (project).

$ minishift start --vm-driver=virtualbox --memory=2G
$ minishift docker-env
$ minishift oc-env
$ oc login -u developer -p developer
$ oc new-project sample-deployment

We also have to create the MongoDB database service on OpenShift. The OpenShift platform provides an easy way of deploying built-in services via the web console available at https://192.168.99.100:8443. You can select the required service on the main dashboard and just confirm the installation using the default properties. Otherwise, you would have to provide a YAML template with the deployment configuration and apply it to Minishift using the oc command. The YAML file will also be required if you decide to recreate the namespace for every single test case (explained in Step 3). I won’t paste the content of the template for creating the MongoDB service on Minishift here. The file is available in my GitHub repository as /openshift/mongo-deployment.yaml. To access it you need to clone the repository sample-vertx-kubernetes and switch to the openshift branch (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-tests). It contains definitions of a secret, persistentVolumeClaim, deploymentConfig and service.
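
If you prefer the command line over the web console, applying that file could look as follows (run from the repository root). Use the first command if the file is a plain list of resources, or the second one if it is defined as an OpenShift Template.

$ oc apply -f openshift/mongo-deployment.yaml
$ oc process -f openshift/mongo-deployment.yaml | oc apply -f -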

arquillian-2

3. Configuring connection with Minishift for Arquillian

All the Arquillian configuration settings should be provided in the arquillian.xml file located in the src/test/resources directory. When running Arquillian tests on Minishift there are generally two approaches you can apply: create a new namespace for every test suite and remove it after the test, or use an existing namespace and remove all the created components within it afterwards. The first approach is the default for every test unless you change it in the Arquillian configuration file using the namespace.use.existing and namespace.use.current properties.

<extension qualifier="openshift">
	<property name="namespace.use.current">true</property>
	<property name="namespace.use.existing">sample-deployment</property>
	<property name="kubernetes.master">https://192.168.99.100:8443</property>
	<property name="cube.auth.token">EMNHP8QIB4A_VU4kE_vQv8k9he_4AV3GTltrzd06yMU</property>
</extension>

You also have to set the Kubernetes master address and the API token. In order to obtain the token just run the following command.

$ oc whoami -t
EMNHP8QIB4A_VU4kE_vQv8k9he_4AV3GTltrzd06yMU

4. Building Arquillian JUnit test

Every JUnit test class should be annotated with @RequiresOpenshift. It should also have a runner set – in this case ArquillianConditionalRunner. The test method testCustomerRoute applies the configuration passed inside the file deployment.yaml, which is assigned to the method using the @Template annotation.
The important part of this test is the route’s URL declaration. We have to annotate it with the following annotations:

  • @RouteURL – it searches for a route with the name defined by the value parameter and injects it into a URL object instance
  • @AwaitRoute – if you do not declare this annotation the test will finish just after starting, because deployment on OpenShift is processed asynchronously. @AwaitRoute forces the test to wait until the route is available on Minishift. We can set the timeout of waiting for the route (in this case 2 minutes) and the route’s path. The route’s path is especially important here; without it our test won’t locate the route and will fail after the 2-minute timeout.

The test method is very simple: I only send a POST request with a JSON object to the endpoint assigned to the customer-route route and verify that the HTTP status code is 200. Because I had a problem with injecting the route’s URL (it doesn’t work for my sample with Minishift v3.9.0, while it works with Minishift v3.7.1), I needed to build the address manually in the code. If the injection works properly, we could use the URL url instance for that.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@RunWith(ArquillianConditionalRunner.class)
public class CustomerServiceApiTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerServiceApiTest.class);

    @ArquillianResource
    OpenShiftAssistant assistant;
    @ArquillianResource
    OpenShiftClient client;

    @RouteURL(value = "customer-route")
    @AwaitRoute(timeoutUnit = TimeUnit.MINUTES, timeout = 2, path = "/customer")
    private URL url;

    @Test
    @Template(url = "classpath:deployment.yaml")
    public void testCustomerRoute() {
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder().url("http://customer-route-sample-deployment.192.168.99.100.nip.io/customer").post(body).build();
        try {
            Response response = httpClient.newCall(request).execute();
            LOGGER.info("Test: response={}", response.body().string());
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

5. Preparing deployment configuration

Before running the test we have to prepare the configuration template, which is loaded by Arquillian Cube via the @Template annotation. We need to create a deploymentConfig, inject the MongoDB credentials stored in a secret object, and finally expose the service outside the container using a route object.

kind: Template
apiVersion: v1
metadata:
  name: customer-template
objects:
  - kind: ImageStream
    apiVersion: v1
    metadata:
      name: customer-image
    spec:
      dockerImageRepository: piomin/customer-vertx-service
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: customer-service
    spec:
      template:
        metadata:
          labels:
            name: customer-service
        spec:
          containers:
          - name: customer-vertx-service
            image: piomin/customer-vertx-service
            ports:
            - containerPort: 8090
              protocol: TCP
            env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
      replicas: 1
      triggers:
      - type: ConfigChange
      - type: ImageChange
        imageChangeParams:
          automatic: true
          containerNames:
          - customer-vertx-service
          from:
            kind: ImageStreamTag
            name: customer-image:latest
      strategy:
        type: Rolling
      paused: false
      revisionHistoryLimit: 2
      minReadySeconds: 0
  - kind: Service
    apiVersion: v1
    metadata:
      name: customer-service
    spec:
      ports:
      - name: "web"
        port: 8090
        targetPort: 8090
      selector:
        name: customer-service
  - kind: Route
    apiVersion: v1
    metadata:
      name: customer-route
    spec:
      path: "/customer"
      to:
        kind: Service
        name: customer-service

6. Testing inter-service communication

In the sample project, communication with other microservices is handled by the Vert.x WebClient. It takes the Kubernetes service name and its container port as parameters. It is implemented inside customer-service by AccountClient, which is then invoked inside the Vert.x HTTP route implementation. Here’s the AccountClient implementation.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);
	
	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}
	
	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List<Account>>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-service", "/account/customer/" + customerId).send(res2 -> {
			LOGGER.info("Response: {}", res2.result().bodyAsString());
			List<Account> accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
			resultHandler.handle(Future.succeededFuture(accounts));
		});
		return this;
	}
	
}

The endpoint GET /account/customer/:customerId exposed by account-service is called within the implementation of the method GET /customer/:id exposed by customer-service. This time we create a new namespace instead of using the existing one. That’s why we have to apply the MongoDB deployment configuration before applying the configuration of the sample services. We also need to upload the configuration of account-service, which is provided inside the account-deployment.yaml file. The rest of the JUnit test is pretty similar to the test described in Step 4: it waits until customer-route is available on Minishift. The only differences are the called URL and the dynamic injection of the namespace into the route’s URL.

@Category(RequiresOpenshift.class)
@RequiresOpenshift
@RunWith(ArquillianConditionalRunner.class)
@Templates(templates = {
        @Template(url = "classpath:mongo-deployment.yaml"),
        @Template(url = "classpath:deployment.yaml"),
        @Template(url = "classpath:account-deployment.yaml")
})
public class CustomerCommunicationTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerCommunicationTest.class);

    @ArquillianResource
    OpenShiftAssistant assistant;

    String id;
    
    @RouteURL(value = "customer-route")
    @AwaitRoute(timeoutUnit = TimeUnit.MINUTES, timeout = 2, path = "/customer")
    private URL url;

    // ...

    @Test
    public void testGetCustomerWithAccounts() {
        LOGGER.info("Route URL: {}", url);
        String projectName = assistant.getCurrentProjectName();
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder().url("http://customer-route-" + projectName + ".192.168.99.100.nip.io/customer/" + id).get().build();
        try {
            Response response = httpClient.newCall(request).execute();
            LOGGER.info("Test: response={}", response.body().string());
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}

You can run the test using your IDE or just by executing command mvn clean install.

Conclusion

Arquillian Cube comes with a neat solution for integration testing on the Kubernetes and OpenShift platforms. It is not difficult to prepare and upload a configuration with a database and microservices and then deploy it on an OpenShift node. You can even test communication between microservices just by deploying the dependent application with an OpenShift template.

Quick guide to deploying Java apps on OpenShift

In this article I’m going to show you how to deploy your applications on OpenShift (Minishift), connect them with other services exposed there, and use some other interesting deployment features provided by OpenShift. OpenShift is built on top of Docker containers and the Kubernetes container cluster orchestrator. Currently it is the most popular enterprise platform based on those two technologies, so it is definitely worth examining in more detail.

1. Running Minishift

We use Minishift to run a single-node OpenShift cluster on the local machine. The only prerequisite for installing Minishift is a virtualization tool. I use Oracle VirtualBox as the hypervisor, so I set the --vm-driver parameter to virtualbox in my start command.

$  minishift start --vm-driver=virtualbox --memory=3G

2. Running Docker

It turns out that you can easily reuse the Docker daemon managed by Minishift, in order to be able to run Docker commands directly from your command line, without any additional installations. To achieve this just run the following command after starting Minishift.

@FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i

3. Running OpenShift CLI

The last tool required before starting any practical exercise with Minishift is the CLI, available as the oc command. To enable it on your command line run the following commands.

$ minishift oc-env
$ SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.9.0\windows;%PATH%
$ REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

Alternatively, you can use the OpenShift web console, which is available on port 8443. On my Windows machine it is by default available at 192.168.99.100.

4. Building Docker images of the sample applications

I prepared two sample applications that are used for presenting the OpenShift deployment process. These are simple Java (Vert.x) applications that provide an HTTP API and store data in MongoDB. However, the technology is not very important now; we just need to build Docker images for these applications. The source code is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git) in the openshift branch (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift). Here’s the sample Dockerfile for account-vertx-service.

FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE account-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
ENV DATABASE_USER mongo
ENV DATABASE_PASSWORD mongo
ENV DATABASE_NAME db
EXPOSE 8095
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]

Go to the account-vertx-service directory and run the following command to build an image from the Dockerfile shown above.

$ docker build -t piomin/account-vertx-service .

The same step should be performed for customer-vertx-service. After that you have two images built, both tagged latest, which can now be deployed and run on Minishift.

5. Preparing OpenShift deployment descriptor

When working with OpenShift, the first step of an application’s deployment is to create a YAML configuration file. This file contains basic information about the deployment, like the containers used for running the application (1), scaling (2), triggers that drive automated deployments in response to events (3) or a strategy of deploying your pods on the platform (4).
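
The descriptor itself is not reproduced in this article, so here is a minimal sketch of what account-deployment.yaml could look like, with the numbered elements marked. The concrete values (image, port, labels) are assumptions based on the account-vertx-service image and port used throughout this article.

kind: DeploymentConfig
apiVersion: v1
metadata:
  name: account-service
spec:
  template:
    metadata:
      labels:
        name: account-service
    spec:
      containers: # (1)
      - name: account-vertx-service
        image: piomin/account-vertx-service
        ports:
        - containerPort: 8095
          protocol: TCP
  replicas: 1 # (2)
  triggers: # (3)
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - account-vertx-service
      from:
        kind: ImageStreamTag
        name: account-vertx-service:latest
  strategy: # (4)
    type: Rolling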

Deployment configurations can be managed with the oc command like any other resource. You can create a new configuration or update an existing one using the oc apply command.

$ oc apply -f account-deployment.yaml

You may be a little surprised, but this command does not trigger any build and does not start the pods. In fact, you have only created a resource of type deploymentConfig, which describes the deployment process. You can start this process using some other oc commands, but first let’s take a closer look at the resources required by our application.

6. Injecting environment variables

As I have mentioned before, our sample applications use an external datasource. They need to open a connection to an existing MongoDB instance in order to store the data passed to the HTTP endpoints exposed by the application. Here’s the MongoVerticle class, which is responsible for establishing a client connection with MongoDB. It uses environment variables for setting the security credentials and database name.

public class MongoVerticle extends AbstractVerticle {

	@Override
	public void start() throws Exception {
		ConfigStoreOptions envStore = new ConfigStoreOptions()
				.setType("env")
				.setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
		ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
		ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
		retriever.getConfig(r -> {
			String user = r.result().getString("DATABASE_USER");
			String password = r.result().getString("DATABASE_PASSWORD");
			String db = r.result().getString("DATABASE_NAME");
			JsonObject config = new JsonObject();
			config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
			final MongoClient client = MongoClient.createShared(vertx, config);
			final AccountRepository service = new AccountRepositoryImpl(client);
			ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
		});
	}

}

MongoDB is available in OpenShift’s catalog of predefined Docker images. You can easily deploy it on your Minishift instance just by clicking the “MongoDB” icon in the “Catalog” tab. The username and password will be generated automatically if you do not provide them during deployment setup. All the properties are available as the deployment’s environment variables and are stored as secrets/mongodb, where mongodb is the name of the deployment.

openshift-1

Environment variables can be easily injected into any other deployment using the oc set command; they are then provided to the pod during the deployment process. The following command injects all secrets assigned to the mongodb deployment into the configuration of our sample application’s deployment.

$ oc set env --from=secrets/mongodb dc/account-service

7. Importing Docker images to OpenShift

The deployment configuration is ready, so in theory we could start the deployment process. However, let’s go back for a moment to the deployment config defined in Step 5. We defined there two triggers that cause a new replication controller to be created, which results in deploying a new version of the pod. The first of them is a configuration change trigger that fires whenever changes are detected in the pod template of the deployment configuration (ConfigChange). The second, the image change trigger (ImageChange), fires when a new version of the Docker image is pushed to the repository. To be able to watch whether an image in the repository has changed, we have to define and create an image stream. Such an image stream does not contain any image data, but presents a single virtual view of related images, something similar to an image repository. Inside the deployment config file we referred to the image stream account-vertx-service, so the same name should be provided inside the image stream definition. In turn, by setting the spec.dockerImageRepository field we define the Docker pull specification for the image.
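
The account-image.yaml file is not shown here either; a minimal sketch matching the description above could look like this.

kind: ImageStream
apiVersion: v1
metadata:
  name: account-vertx-service
spec:
  dockerImageRepository: piomin/account-vertx-service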

Finally, we can create the resource on the OpenShift platform.

$ oc apply -f account-image.yaml

8. Running deployment

Once the deployment configuration has been prepared, and the Docker images have been successfully imported into the repository managed by the OpenShift instance, we may trigger the rollout using the following oc commands.

$ oc rollout latest dc/account-service
$ oc rollout latest dc/customer-service

If everything goes fine the new pods should be started for the defined deployments. You can easily check it out using OpenShift web console.

9. Updating image stream

We have already created two image streams related to the Docker repositories. Here’s the screen from OpenShift web console that shows the list of available image streams.

openshift-images

To be able to push a new version of an image to the OpenShift internal Docker registry we should first perform a docker login against this registry using the user’s authentication token. To obtain the token from OpenShift use the oc whoami -t command, and then pass it to your docker login command with the -p parameter.

$ oc whoami -t
Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM
$ docker login -u developer -p Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM https://172.30.1.1:5000

Now, if you make any change in your application and rebuild your Docker image with the latest tag, you have to push that image to the image stream on OpenShift. The address of the internal registry has been generated automatically by OpenShift, and you can check it in the image stream’s details. For me, it is 172.30.1.1:5000.

$ docker tag piomin/account-vertx-service 172.30.1.1:5000/sample-deployment/account-vertx-service:latest
$ docker push 172.30.1.1:5000/sample-deployment/account-vertx-service

After pushing a new version of the Docker image to the image stream, a rollout of the application is started automatically. Here’s the screen from the OpenShift web console that shows the deployment history of the account-service application.

openshift-2

Conclusion

I have shown you the consecutive steps of deploying your application on the OpenShift platform. Based on a sample Java application that connects to a database, I illustrated how to inject credentials into that application’s pod entirely transparently for a developer. I also performed an update of the application’s Docker image, in order to show how to trigger a deployment of a new version on an image change.

openshift-3

Microservices traffic management using Istio on Kubernetes

I have already described a simple example of route configuration between two microservices deployed on Kubernetes in one of my previous articles: Service Mesh with Istio on Kubernetes in 5 steps. You can refer to that article if you are interested in the basic information about Istio and its deployment on Kubernetes via Minikube. Today we will create some more advanced traffic management rules based on the same sample applications as used in the previous article about Istio.

The source code of the sample applications is available on GitHub in the repository sample-istio-services (https://github.com/piomin/sample-istio-services.git). There are two sample applications, callme-service and caller-service, deployed in two different versions, 1.0 and 2.0. Version 1.0 is available in branch v1 (https://github.com/piomin/sample-istio-services/tree/v1), while version 2.0 is in branch v2 (https://github.com/piomin/sample-istio-services/tree/v2). Using these sample applications in different versions, I’m going to show you different strategies of traffic management depending on an HTTP header set in the incoming requests.

We may force caller-service to route all the requests to a specific version of callme-service by setting the header x-version to v1 or v2. We can also omit this header in the request, which results in splitting traffic between all existing versions of the service. If the request comes to version v1 of caller-service, the traffic is split 50-50 between the two versions of callme-service. If the request is received by the v2 instance of caller-service, 75% of the traffic is forwarded to version v2 of callme-service, and only 25% to v1. The scenario described above is illustrated on the following diagram.

istio-advanced-1

Before we proceed to the example, I should say a few words about traffic management with Istio. If you have read my previous article about Istio, you probably know that each rule is assigned to a destination. Rules control the process of request routing within a service mesh. One very important thing about them, especially for the purposes of the example illustrated on the diagram above, is that multiple rules can be applied to the same destination. The priority of every rule is determined by its precedence field. There is one principle related to the value of this field: the higher the value of this integer field, the greater the priority of the rule. As you may probably guess, if there is more than one rule with the same precedence value, the order of rule evaluation is undefined. In addition to a destination, we may also define a source of the request in order to restrict a rule to a specific caller. If there are multiple deployments of a calling service, we can even filter them out by setting the source’s label field. Of course, we can also specify the attributes of an HTTP request, such as uri, scheme or headers, that are used for matching a request against a defined rule.

OK, now let’s take a look at the rule with the highest priority. Its name is callme-service-v1 (1). It applies to callme-service (2) and has the highest priority in comparison to the other rules (3). It applies only to requests sent by caller-service (4) that contain the HTTP header x-version with the value v1 (5). This route rule routes traffic only to version v1 of callme-service (6).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1 # (1)
spec:
  destination:
    name: callme-service # (2)
  precedence: 4 # (3)
  match:
    source:
      name: caller-service # (4)
    request:
      headers:
        x-version:
          exact: "v1" # (5)
  route:
  - labels:
      version: v1 # (6)

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-7

The next rule, callme-service-v2 (1), has a lower priority (2). However, it does not conflict with the first rule, because it applies only to requests containing the x-version header with the value v2 (3). It forwards all matching requests to version v2 of callme-service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2 # (1)
spec:
  destination:
    name: callme-service
  precedence: 3 # (2)
  match:
    source:
      name: caller-service
    request:
      headers:
        x-version:
          exact: "v2" # (3)
  route:
  - labels:
      version: v2 # (4)

As before, here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-6

The rule callme-service-v1-default (1) visible in the code fragment below has a lower priority (2) than the two previously described rules. In practice this means that it is evaluated only if the conditions defined in the two previous rules were not fulfilled. Such a situation occurs if you do not pass the x-version header inside the HTTP request, or if it has a different value than v1 or v2. The rule visible below applies only to the caller instance labeled with version v1 (3). Finally, the traffic to callme-service is load balanced 50-50 between the two versions of that service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 2 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v1 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-4

The last rule is pretty similar to the previously described callme-service-v1-default. Its name is callme-service-v2-default (1), and it applies only to version v2 of caller-service (3). It has the lowest priority (2), and splits traffic between the two versions of callme-service in proportions 75-25 in favor of version v2 (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 1 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v2 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 25
  - labels:
      version: v2
    weight: 75

As before, I have also included the diagram illustrating the behaviour of this rule.

istio-advanced-5

All the rules may be placed inside a single file; in that case they should be separated with the --- line. This file is available in the code repository inside the callme-service module as multi-rule.yaml. To deploy all defined rules on Kubernetes just execute the following command.

$ kubectl apply -f multi-rule.yaml

After a successful deployment you may check the list of available rules by running the command istioctl get routerule.

istio-advanced-2

Before we start any tests, we obviously need to have the sample applications deployed on Kubernetes. These applications are really simple and pretty similar to the applications used for tests in my previous article about Istio. The controller visible below implements the method GET /callme/ping, which prints the version of the application taken from pom.xml and the value of the x-version HTTP header received in the request.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping(@RequestHeader(name = "x-version", required = false) String version) {
		LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + " with version " + version;
	}

}

Here’s the controller class that implements the method GET /caller/ping. It prints the version of caller-service taken from pom.xml and calls the method GET /callme/ping exposed by callme-service. It needs to include the x-version header in the request sent to the downstream service.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping(@RequestHeader(name = "x-version", required = false) String version) {
		LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
		HttpHeaders headers = new HttpHeaders();
		if (version != null)
			headers.set("x-version", version);
		HttpEntity<Void> entity = new HttpEntity<>(headers);
		ResponseEntity<String> response = restTemplate.exchange("http://callme-service:8091/callme/ping", HttpMethod.GET, entity, String.class);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response.getBody() + " with header " + version;
	}

}

Now we may proceed to building and deploying the applications on Kubernetes. Here are the steps.

1. Building the applications

First, switch to branch v1 and build the whole project sample-istio-services by executing mvn clean install command.

2. Building Docker image

The Dockerfiles are placed in the root directory of every application. Build their Docker images by executing the following commands.

$ docker build -t piomin/callme-service:1.0 .
$ docker build -t piomin/caller-service:1.0 .

Alternatively, you may omit this step, because images piomin/callme-service and piomin/caller-service are available on my Docker Hub account.

3. Injecting Istio components into the Kubernetes deployment file

The Kubernetes YAML deployment file is available in the root directory of every application as deployment.yaml. The result of the following command should be saved as a separate file, for example deployment-with-istio.yaml.

$ istioctl kube-inject -f deployment.yaml
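
For example, you can redirect the output of istioctl directly to the target file:

$ istioctl kube-inject -f deployment.yaml > deployment-with-istio.yaml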

4. Deployment on Kubernetes

Finally, you can execute the well-known kubectl command in order to deploy the Docker container with our sample application.

$ kubectl apply -f deployment-with-istio.yaml

Then switch to branch v2, and repeat the steps described above for version 2.0 of the sample applications. The final deployment result is visible on picture below.

istio-advanced-3

One very useful thing when running Istio on Kubernetes is the out-of-the-box integration with tools like Zipkin, Grafana or Prometheus. Istio automatically sends some metrics that are collected by Prometheus, for example the total number of requests in the metric istio_request_count. YAML deployment files for these plugins are available inside the directory ${ISTIO_HOME}/install/kubernetes/addons. Before installing Prometheus using the kubectl command, I suggest changing the service type from the default ClusterIP to NodePort by adding the line type: NodePort.

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090

Then we should run the command kubectl apply -f prometheus.yaml in order to deploy Prometheus on Kubernetes. The deployment is created inside the istio-system namespace. To check the external port of the service run the following command. For me, Prometheus is available at http://192.168.99.100:32293.
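
The command itself is only visible on the screenshot below; it is presumably just a standard service listing, along the lines of:

$ kubectl get svc prometheus -n istio-system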

istio-advanced-14

In the following chart visualized using Prometheus I filtered out only the requests sent to callme-service. The green color points to requests received by version v2 of the service, while the red color points to requests processed by version v1. As you can see in this chart, in the beginning I sent the requests to caller-service with the HTTP header x-version set to v2, then I didn’t set this header and the traffic was split between the two deployed versions of the service, and finally I set it to v1. I defined an expression rate(istio_request_count{callme-service.default.svc.cluster.local}[1m]), which returns the per-second rate of requests received by callme-service.

istio-advanced-13

Testing

Before sending some test requests to caller-service we need to obtain its address on Kubernetes. After executing the following command you can see that it is available at http://192.168.99.100:32237/caller/ping.

istio-services-16

We have four possible scenarios. In the first one, when we set the header x-version to v1, the request will always be routed to callme-service-v1.

istio-advanced-10

If the x-version header is not included in the request, the traffic will be split between callme-service-v1

istio-advanced-11

… and callme-service-v2.

istio-advanced-12

Finally, if we set the header x-version to v2, the request will always be routed to callme-service-v2.

istio-advanced-14

Conclusion

Using Istio you can easily create and apply simple and more advanced traffic management rules to the applications deployed on Kubernetes. You can also monitor metrics and traces through the integration between Istio and Zipkin, Prometheus and Grafana.

Service Mesh with Istio on Kubernetes in 5 steps

In this article I’m going to show you some basic and more advanced samples that illustrate how to use the Istio platform in order to provide communication between microservices deployed on Kubernetes. Following the description on the Istio website, it is:

An open platform to connect, manage, and secure microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.

Istio provides mechanisms for traffic management like request routing, discovery, load balancing, handling failures and fault injection. Additionally, you may enable istio-auth, which provides RBAC (Role-Based Access Control) and Mutual TLS Authentication. In this article we will discuss only the traffic management mechanisms.

Step 1. Installing Istio on Minikube platform

The most comfortable way to test Istio locally on Kubernetes is through Minikube. I have already described how to configure Minikube on your local machine in this article: Microservices with Kubernetes and Docker. When installing Istio on Minikube we should first enable some of Minikube’s plugins during startup.

minikube start --extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/var/lib/localkube/certs/ca.key" --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

Istio is installed in a dedicated namespace called istio-system, but it is able to manage services from all other namespaces. First, you should go to the Istio release page and download the installation file corresponding to your OS. For me it is Windows, and all the next steps will be described with the assumption that we are using exactly this OS. After running Minikube it is useful to enable Docker on Minikube’s VM, so that you will be able to execute docker commands from your machine.

@FOR /f "tokens=* delims=^L" %i IN ('minikube docker-env') DO @call %i

Now, extract the Istio files to your local filesystem. The file istioctl.exe, which is available in the ${ISTIO_HOME}/bin directory, should be added to your PATH. Istio contains installation files for the Kubernetes platform in ${ISTIO_HOME}/install/kubernetes. To install Istio’s core components on Minikube just apply the following YAML definition file.

kubectl apply -f install/kubernetes/istio.yaml

Now, you have Istio’s core components deployed on your Minikube instance. These components are:

Envoy – an open-source edge and service proxy designed for cloud-native applications. Istio uses an extended version of the Envoy proxy. If you are interested in more details about Envoy and microservices, read my article Envoy Proxy with Microservices, which describes how to integrate an Envoy gateway with service discovery.

Mixer – a platform-independent component responsible for enforcing access control and usage policies across the service mesh.

Pilot – provides service discovery for the Envoy sidecars, as well as traffic management capabilities for intelligent routing and resiliency.

The configuration provided inside the istio.yaml definition file deploys some pods and services related to the components mentioned above. You can verify the installation using the kubectl command, or just by visiting the Kubernetes Web Dashboard available after executing the command minikube dashboard.
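
For example, the following commands list the pods and services created in the istio-system namespace:

$ kubectl get pods -n istio-system
$ kubectl get svc -n istio-system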

istio-2

Step 2. Building sample applications based on Spring Boot

Before we start configuring any traffic rules with Istio, we need to create sample applications that will communicate with each other. These are really simple services. The source code of these applications is available on my GitHub account inside the repository sample-istio-services. There are two services: caller-service and callme-service. Both of them expose a ping endpoint which prints the application’s name and version. Both of these values are taken from the Spring Boot build-info file, which is generated during the application build. Here’s the implementation of the endpoint GET /callme/ping.

@RestController
@RequestMapping("/callme")
public class CallmeController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

	@Autowired
	BuildProperties buildProperties;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		return buildProperties.getName() + ":" + buildProperties.getVersion();
	}

}

And here’s the implementation of the GET /caller/ping endpoint. It calls the GET /callme/ping endpoint using Spring RestTemplate. We are assuming that callme-service is available under the address callme-service:8091 on Kubernetes. This service will be exposed inside the Minikube node on port 8091.

@RestController
@RequestMapping("/caller")
public class CallerController {

	private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

	@Autowired
	BuildProperties buildProperties;
	@Autowired
	RestTemplate restTemplate;

	@GetMapping("/ping")
	public String ping() {
		LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), buildProperties.getVersion());
		String response = restTemplate.getForObject("http://callme-service:8091/callme/ping", String.class);
		LOGGER.info("Calling: response={}", response);
		return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response;
	}

}
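
The RestTemplate injected into CallerController is not auto-configured by Spring Boot, so the application needs to declare it as a bean. The exact configuration lives in the sample repository; here's a minimal sketch of what it might look like, assuming a plain RestTemplate with no client-side load balancing (the class name CallerApplication is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class CallerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallerApplication.class, args);
	}

	// Hypothetical bean definition: a plain RestTemplate is enough here,
	// because routing and load balancing are handled by Kubernetes and Istio.
	@Bean
	public RestTemplate restTemplate() {
		return new RestTemplate();
	}

}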

The sample applications have to be run inside Docker containers. Here's the Dockerfile responsible for building the image with the caller-service application.

FROM openjdk:8-jre-alpine
ENV APP_FILE caller-service-1.0.0-SNAPSHOT.jar
ENV APP_HOME /usr/app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

A similar Dockerfile is available for callme-service. Now, the only thing we have to do is build the Docker images.

docker build -t piomin/callme-service:1.0 .
docker build -t piomin/caller-service:1.0 .

There is also version 2.0.0-SNAPSHOT of callme-service available in the v2 branch. Switch to this branch, build the whole application, and then build the Docker image with the 2.0 tag. Why do we need version 2.0? I'll explain in the next section.

docker build -t piomin/callme-service:2.0 .

Step 3. Deploying sample applications on Minikube

Before we start deploying our applications on Minikube, let's take a look at the sample system architecture visible on the following diagram. We are going to deploy callme-service in two versions: 1.0 and 2.0. The caller-service application just calls callme-service, so it does not know anything about the different versions of the target service. If we would like to route traffic between the two versions of callme-service in proportions of 20% to 80%, we have to configure the proper Istio route rule. One more thing: because Istio Ingress is not supported on Minikube, we will just use a plain Kubernetes Service. If we need to expose it outside the Minikube cluster, we should set its type to NodePort.

istio-1

Let’s proceed to the deployment phase. Here's the deployment definition for callme-service in version 1.0.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: NodePort
  ports:
  - port: 8091
    name: http
  selector:
    app: callme-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
      - name: callme-service
        image: piomin/callme-service:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8091

Before deploying it on Minikube we have to inject the Istio sidecar configuration. The command visible below prints a new version of the deployment definition enriched with the Istio configuration. We may copy the output and save it as the deployment-with-istio.yaml file.

istioctl kube-inject -f deployment.yaml

Now, let’s apply the configuration to Kubernetes.

kubectl apply -f deployment-with-istio.yaml

The same steps should be performed for caller-service, and also for version 2.0 of callme-service. All YAML configuration files are committed together with the applications and are located in the root directory of every application's module. If you have successfully deployed all the required components, you should see the following elements in your Minikube dashboard.

istio-3

Step 4. Applying Istio routing rules

Istio provides a simple domain-specific language (DSL) that allows you to configure some interesting rules that control how requests are routed within your service mesh. I'm going to show you the following rules:

  • Splitting traffic between different service versions
  • Injecting a delay into the request path
  • Injecting an HTTP error as a response from the service

Here’s a sample route rule definition for callme-service. It splits traffic in a 20:80 proportion between versions 1.0 and 2.0 of the service. It also adds a 3-second delay to 10% of the requests and returns an HTTP 500 error code for 10% of them.

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service
spec:
  destination:
    name: callme-service
  route:
  - labels:
      version: v1
    weight: 20
  - labels:
      version: v2
    weight: 80
  httpFault:
    delay:
      percent: 10
      fixedDelay: 3s
    abort:
      percent: 10
      httpStatus: 500

Let’s apply a new route rule to Kubernetes.

kubectl apply -f routerule.yaml

Now, we can easily verify that rule by executing the istioctl get routerule command.

istio-6

Step 5. Testing the solution

Before we start testing, let's deploy Zipkin on Minikube. Istio provides the deployment definition file zipkin.yaml inside the ${ISTIO_HOME}/install/kubernetes/addons directory.

kubectl apply -f zipkin.yaml

Let’s take a look at the list of services deployed on Minikube. The API provided by the caller-service application is available under port 30873.

istio-4

We may easily test the service from a web browser by calling the URL http://192.168.99.100:30873/caller/ping. It prints the name and version of the service, and also the name and version of the callme-service instance invoked by caller-service. Because 80% of the traffic is routed to version 2.0 of callme-service, you will probably see the following response.

istio-7

However, sometimes version 1.0 of callme-service may be called…

istio-8

… or Istio can simulate HTTP 500 code.

istio-9

You can easily analyze traffic statistics with the Zipkin console.

istio-10

Or just take a look at the logs generated by the pods.

istio-11

Running Vert.x Microservices on Kubernetes/OpenShift

Automatic deployment, scaling, container orchestration and self-healing are a few of the most popular topics of recent months. This is reflected in the rapidly growing popularity of tools like Docker, Kubernetes or OpenShift. It's hard to find a developer who hasn't heard about these technologies. But how many of you have actually set up and run all those tools locally?

Despite appearances, it is not a very hard thing to do. Both Kubernetes and OpenShift provide simplified, single-node versions of their platform that allow you to create and try a local cluster, even on Windows.

In this article I’m going to guide you through all the steps needed to deploy and run microservices that communicate with each other and use MongoDB as a data source.

Technologies

Eclipse Vert.x – a toolkit for building reactive applications (and more) on the JVM. It's a polyglot, event-driven, non-blocking and fast framework, which makes it a perfect choice for creating lightweight, high-performance microservices.

Kubernetes – an open-source system for automating deployment, scaling, and management of containerized applications. Even the Docker platform has now decided to support Kubernetes, although it promotes its own clustering solution – Docker Swarm. You may easily run Kubernetes locally using Minikube; however, we won't use it this time. You can read an interesting article about creating Spring Boot microservices and running them on Minikube here: Microservices with Kubernetes and Docker.

RedHat OpenShift – an open source container application platform built on top of Docker containers and Kubernetes. It is also available online at https://www.openshift.com/. You may easily run it locally with Minishift.

Getting started with Minishift

Of course, you can read the tutorials available on the RedHat website, but I'll try to condense the installation and configuration instructions into a few words. First, I would like to point out that all the instructions apply to Windows.

Minishift requires a hypervisor to start the virtual machine, so first you should download and install one of the supported tools. If you use a solution other than Hyper-V, as I do, you have to pass the driver name when starting Minishift. The command visible below launches it on Oracle VirtualBox and allocates 3GB of RAM for the VM.

$  minishift start --vm-driver=virtualbox --memory=3G

The minishift.exe executable should be included in the system path. You should also have the Docker client binary installed on your machine. The Docker daemon itself is managed by Minishift, so you can reuse it for other use cases as well. All you need to do to take advantage of this functionality is run the following command in your shell.

$ @FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i

The OpenShift platform may be managed using the CLI or the web console. To enable the CLI on Windows you should add it to the path and then run one command to configure your shell. The required steps are displayed after running the following command.

$ minishift oc-env
SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.7.1\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

In order to use the web console just run the $ minishift console command, which automatically opens it in your web browser. For me, it is available under the address https://192.168.99.100:8443/console. To check your IP just execute $ minishift ip.

Sample applications

The source code of the sample applications is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git). In fact, a similar application has already been run locally and described in the article Asynchronous Microservices with Vert.x. That article can be treated as an introduction to building microservices with Vert.x and to the framework in general. The current application is even simpler, because it does not have to integrate with any external discovery server like Consul.

Now, let’s take a look at the code below. It declares a verticle that establishes a client connection to MongoDB and registers a repository object as a proxy service. Such a service may be easily accessed by any other verticle. The MongoDB network address is managed by Minishift.

public class MongoVerticle extends AbstractVerticle {

	@Override
	public void start() throws Exception {
		JsonObject config = new JsonObject();
		config.put("connection_string", "mongodb://micro:micro@mongodb/microdb");
		final MongoClient client = MongoClient.createShared(vertx, config);
		final AccountRepository service = new AccountRepositoryImpl(client);
		ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
	}

}
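
The AccountRepository registered above is a Vert.x service proxy: its methods are exposed over the event bus under the account-service address and later consumed by the HTTP verticle via AccountRepository.createProxy(...). The interface itself is not listed in this article; assuming the standard Vert.x service proxy conventions (@ProxyGen, asynchronous result handlers, Account as a data object), a minimal sketch could look like this:

import io.vertx.codegen.annotations.ProxyGen;
import io.vertx.core.AsyncResult;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.serviceproxy.ProxyHelper;

import java.util.List;

// Hypothetical sketch – the real interface in the sample repository may differ in details.
@ProxyGen
public interface AccountRepository {

	// Asynchronous CRUD operations invoked over the event bus
	void save(Account account, Handler<AsyncResult<Account>> resultHandler);

	void findAll(Handler<AsyncResult<List<Account>>> resultHandler);

	void findById(String id, Handler<AsyncResult<Account>> resultHandler);

	void findByCustomer(String customerId, Handler<AsyncResult<List<Account>>> resultHandler);

	void remove(String id, Handler<AsyncResult<Account>> resultHandler);

	// Convenience factory used by the consuming verticle to obtain a proxy bound to the given address
	static AccountRepository createProxy(Vertx vertx, String address) {
		return ProxyHelper.createProxy(AccountRepository.class, vertx, address);
	}

}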

That verticle can be deployed in the application's main method. It is also important to set the vertx.disableFileCPResolving property to true if you would like to run your application on Minishift. It forces Vert.x to resolve files from its classloader in addition to the file system.

public static void main(String[] args) throws Exception {
	System.setProperty("vertx.disableFileCPResolving", "true");
	Vertx vertx = Vertx.vertx();
	vertx.deployVerticle(new MongoVerticle());
	vertx.deployVerticle(new AccountServer());
}

The AccountServer verticle contains simple API methods that perform CRUD operations on MongoDB.

Building Docker image

Assuming you have successfully installed and configured Minishift and cloned my sample Maven project shared on GitHub, you may proceed to the build and deploy stage. The first step is to build the applications from source code by executing the mvn clean install command on the root project. It consists of two independent modules: account-vertx-service and customer-vertx-service. Each of these modules contains a Dockerfile with the image definition. Here's the one created for customer-vertx-service. It is based on the openjdk:8-jre-alpine image. Alpine Linux is much smaller than most distribution base images, so our resulting image is around 100MB instead of around 600MB when using the standard OpenJDK image. Because we generate fat JAR files during the Maven build, we only have to run the application inside the container with the java -jar command.

FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE customer-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
EXPOSE 8090
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]

Once we have successfully built the project, we should navigate to the main directory of each module. The sample command visible below builds the Docker image of customer-vertx-service.

$ docker build -t microservices/customer-vertx-service:1.0 .

In fact, there are several different approaches to building and deploying microservices on OpenShift. For example, we could use a Maven plugin or an OpenShift definition file. The approach discussed here is obviously one of the simplest, and it assumes using the CLI and the web console for configuring deployments and services.

Deploy application on Minishift

Before proceeding to the main part of this article – deploying and running the applications on Minishift – we have to do some pre-configuration. We begin by logging into OpenShift and creating a new project with the oc command. Here are the two required CLI commands. The name of our first OpenShift project is microservices.

$ oc login -u developer -p developer
$ oc new-project microservices

We might as well perform the same actions using the web console. After successfully logging in, you will first see a dashboard with all the available services brokered by Minishift. Let's initialize a container with MongoDB. All the provided container settings should be the same as those configured inside the application. After creation, the MongoDB service will be available to all other services under the mongodb name.

minishift-1

Creating a MongoDB container managed by Minishift is only part of the job. The most important thing is to deploy the containers with the two sample microservices, each of which has access to the database. Here as well, we may use two methods of resource creation: the CLI or the web console. Here are the CLI commands for creating the deployments on OpenShift.

$ oc new-app --docker-image microservices/customer-vertx-service:1.0
$ oc new-app --docker-image microservices/account-vertx-service:1.0

The commands visible above create not only the deployments, but also the pods, and expose each of them as a service. Now you may easily scale the number of running pods by executing the following commands.

oc scale --replicas=2 dc customer-vertx-service
oc scale --replicas=2 dc account-vertx-service

The next step is to expose your service outside the cluster to make it publicly visible. We can achieve this by creating a route. An OpenShift route is in fact a Kubernetes ingress. The OpenShift web console provides an interface for creating routes, available under the Applications -> Routes section. When defining a new route you should enter its name, the name of a service, and the path on the basis of which requests are proxied. If the hostname is not specified, it is generated automatically by OpenShift.

minishift-2

Now, let’s take a look at the web console dashboard. There are three applications deployed: mongodb-persistent, account-vertx-service and customer-vertx-service. Both Vert.x microservices are scaled up to two running instances (Kubernetes pods), and are exposed under an automatically generated hostname with the given context path, for example http://account-route-microservices.192.168.99.100.nip.io/account.

minishift-3

You may check the details of every deployment by expanding it on the list view.

minishift-4

The HTTP API is available externally and can be easily tested. Here's the source code with the REST API implementation for account-vertx-service.

AccountRepository repository = AccountRepository.createProxy(vertx, "account-service");
Router router = Router.router(vertx);
router.route("/account/*").handler(ResponseContentTypeHandler.create());
router.route(HttpMethod.POST, "/account").handler(BodyHandler.create());
router.get("/account/:id").produces("application/json").handler(rc -> {
	repository.findById(rc.request().getParam("id"), res -> {
		Account account = res.result();
		LOGGER.info("Found: {}", account);
		rc.response().end(account.toString());
	});
});
router.get("/account/customer/:customer").produces("application/json").handler(rc -> {
	repository.findByCustomer(rc.request().getParam("customer"), res -> {
		List accounts = res.result();
		LOGGER.info("Found: {}", accounts);
		rc.response().end(Json.encodePrettily(accounts));
	});
});
router.get("/account").produces("application/json").handler(rc -> {
	repository.findAll(res -> {
		List accounts = res.result();
		LOGGER.info("Found all: {}", accounts);
		rc.response().end(Json.encodePrettily(accounts));
	});
});
router.post("/account").produces("application/json").handler(rc -> {
	Account a = Json.decodeValue(rc.getBodyAsString(), Account.class);
	repository.save(a, res -> {
		Account account = res.result();
		LOGGER.info("Created: {}", account);
		rc.response().end(account.toString());
	});
});
router.delete("/account/:id").handler(rc -> {
	repository.remove(rc.request().getParam("id"), res -> {
		LOGGER.info("Removed: {}", rc.request().getParam("id"));
		rc.response().setStatusCode(200).end();
	});
});
vertx.createHttpServer().requestHandler(router::accept).listen(8095);

Inter-service communication

All the microservices are deployed and exposed outside the cluster. The last thing we still have to do is provide communication between them. In our sample system customer-vertx-service calls an endpoint exposed by account-vertx-service. Thanks to the Kubernetes service mechanism, we may easily call another service from the application's container, for example using a simple HTTP client implementation. Let's take a look at the list of services exposed by Kubernetes.

minishift-6

Here’s the client implementation responsible for communication with account-vertx-service. The Vert.x WebClient takes three parameters when calling the GET method: port, hostname and path. We should set the Kubernetes service name as the hostname parameter, and the container's default port as the port.

public class AccountClient {

	private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);

	private Vertx vertx;

	public AccountClient(Vertx vertx) {
		this.vertx = vertx;
	}

	public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
		WebClient client = WebClient.create(vertx);
		client.get(8095, "account-vertx-service", "/account/customer/" + customerId).send(res2 -> {
			LOGGER.info("Response: {}", res2.result().bodyAsString());
			List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
			resultHandler.handle(Future.succeededFuture(accounts));
		});
		return this;
	}

}

AccountClient is invoked inside the implementation of the customer-vertx-service GET /customer/:id endpoint.

router.get("/customer/:id").produces("application/json").handler(rc -> {
	repository.findById(rc.request().getParam("id"), res -> {
		Customer customer = res.result();
		LOGGER.info("Found: {}", customer);
		new AccountClient(vertx).findCustomerAccounts(customer.getId(), res2 -> {
			customer.setAccounts(res2.result());
			rc.response().end(customer.toString());
		});
	});
});

Summary

It is no coincidence that OpenShift is considered the leading enterprise distribution of Kubernetes. It adds several helpful features to Kubernetes that simplify its adoption by developers and operations teams. With Minishift you can easily try out features like CI/CD for DevOps, multiple projects with collaboration, networking, and log aggregation from multiple pods on your local machine.

Microservices with Kubernetes and Docker

In one of my previous posts I described an example of a continuous delivery configuration for building microservices with Docker and Jenkins. It was a simple configuration where I decided to use only the Docker Pipeline Plugin for building and running containers with microservices. That solution had one big disadvantage – we had to link all the containers together to provide communication between the microservices deployed inside them. Today I'm going to present a smarter solution which helps us avoid that problem – Kubernetes.

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It was originally designed by Google. It has many features especially useful for applications running in production like service naming and discovery, load balancing, application health checking, horizontal auto-scaling or rolling updates. There are several important concepts around Kubernetes we should know before going into the sample.

Pod – the basic unit in Kubernetes. It can consist of one or more containers that are guaranteed to be co-located on the host machine and share the same resources. All containers deployed inside a pod can reach each other via localhost. Each pod has a unique IP address within the cluster.

Service – a set of pods that work together. By default a service is exposed inside the cluster, but it can also be exposed on an external IP address outside the cluster. We can expose it using one of four available types: ClusterIP, NodePort, LoadBalancer and ExternalName.

Replication Controller – a specific type of Kubernetes controller. It handles replication and scaling by running a specified number of copies of a pod across the cluster. It is also responsible for replacing pods if the underlying node fails.

Minikube

Configuring a highly available Kubernetes cluster is not an easy task. Fortunately, there is a tool that makes it easy to run Kubernetes locally – Minikube. It can run a single-node cluster inside a VM, which is really important for developers who want to try it out. Getting started is really easy. For example, on Windows you have to download minikube.exe and kubectl.exe and add them to the PATH environment variable. Then you can start it from the command line using the minikube start command and use almost all Kubernetes features by calling the kubectl command. An alternative to the command line is the Kubernetes Dashboard, which can be launched by calling the minikube dashboard command. We can create, update or delete deployments from the UI dashboard, and also list and view the configuration of all pods, services, ingresses, replication controllers, etc. Here's the Kubernetes Dashboard with the list of deployments for our sample.

kube1

Application

The microservices architecture for our sample is pretty similar to the one from my article about continuous delivery with Docker and Jenkins, mentioned at the beginning of this post. We also have account and customer microservices. The customer service interacts with the account service while searching for customer accounts. We do not use the gateway (Zuul) and discovery (Eureka) Spring Boot services, because such mechanisms are available on Kubernetes out of the box. Here's a picture illustrating the architecture of the presented solution. Each microservice's pod consists of two containers: the first with the microservice application and the second with a Mongo database. The account and customer microservices each have their own database where all data is stored. Each pod is exposed as a service and can be looked up by name on Kubernetes. We also configure a Kubernetes Ingress which acts as a gateway for our microservices.

kube_micro

The sample application source code is available on GitHub. It consists of two modules: account-service and customer-service. It is based on the Spring Boot framework, but doesn't use any Spring Cloud projects except the Feign client. Here's the Dockerfile for the account service. We use the small alpine variant of the openjdk image. Thanks to that our resulting image is about 120MB instead of about 650MB when using the standard openjdk base image.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222

To enable MongoDB support I add the spring-boot-starter-data-mongodb dependency to pom.xml. We also have to provide the connection data in application.yml and annotate the entity class with @Document. The last thing is to declare a repository interface extending MongoRepository, which already has the basic CRUD methods implemented. We add two custom find methods.

public interface AccountRepository extends MongoRepository<Account, String> {

    public Account findByNumber(String number);
    public List<Account> findByCustomerId(String customerId);

}
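
For completeness, here's how such an annotated entity class might look. This is only a sketch based on the repository methods above (findByNumber, findByCustomerId) – the field names and collection name are assumptions, not the exact class from the sample project:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Hypothetical sketch of the MongoDB entity used by AccountRepository.
@Document(collection = "account")
public class Account {

	@Id
	private String id;
	private String number;
	private String customerId;

	// getters and setters omitted for brevity

}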

In the customer service we are going to call an API method from the account service. Here's the declarative REST client (@FeignClient) declaration. All the pods with the account service are available under the account-service name and the default service port – 2222. These settings are the result of the service configuration on Kubernetes, which I will describe in the next section.

@FeignClient(name = "account-service", url = "http://account-service:2222")
public interface AccountClient {

	@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
	List<Account> getAccounts(@PathVariable("customerId") String customerId);

}
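
For the @FeignClient interface to be picked up, Feign scanning has to be enabled in the Spring Boot main class. A minimal sketch is shown below; the class name is illustrative, and the annotation package depends on the Spring Cloud version in use (org.springframework.cloud.netflix.feign in older releases, org.springframework.cloud.openfeign in newer ones):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

// Hypothetical main class of customer-service with Feign client scanning enabled.
@SpringBootApplication
@EnableFeignClients
public class CustomerApplication {

	public static void main(String[] args) {
		SpringApplication.run(CustomerApplication.class, args);
	}

}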

The Docker image of each microservice can be built with the command visible below. After the build you should push the image to the official Docker Hub or to your private registry. In the next section I'll describe how to use them on Kubernetes. Docker images of the described microservices are also available in my public Docker Hub repository as piomin/account-service and piomin/customer-service.

docker build -t piomin/account-service .
docker push piomin/account-service

Kubernetes deployment

You can create a deployment on Kubernetes using the kubectl run command, the Minikube dashboard, or configuration files applied with the kubectl create command. I'm going to show you how to create all resources from configuration files, because we need to create multi-container deployments in one step. Here's the deployment configuration file for account-service. We have to provide the deployment name, the image name and the exposed port. In the replicas property we set the requested number of pods.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: account-service
  labels:
    run: account-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: account-service
    spec:
      containers:
      - name: account-service
        image: piomin/account-service
        ports:
        - containerPort: 2222
          protocol: TCP
      - name: mongo
        image: library/mongo
        ports:
        - containerPort: 27017
          protocol: TCP

We create the new deployment by running the command below. The same command is used for creating services and the ingress – only the configuration file is different.

kubectl create -f deployment-account.json

Now, let’s take a look at the service configuration file. We have already created the deployment. As you could see in the dashboard, the image has been pulled from Docker Hub, and the pod and replica set have been created. Now, we would like to expose our microservice externally – that's why a service is needed. We also expose the Mongo database on its default port, to be able to connect to the database and create collections from a MongoDB client.

kind: Service
apiVersion: v1
metadata:
  name: account-service
spec:
  selector:
    run: account-service
  ports:
    - name: port1
      protocol: TCP
      port: 2222
      targetPort: 2222
    - name: port2
      protocol: TCP
      port: 27017
      targetPort: 27017
  type: NodePort

kube-2

After creating a similar configuration for the customer service, we have our microservices exposed. Inside Kubernetes they are visible on their default ports (2222 and 3333) under the service name. That's why inside the customer service REST client (@FeignClient) we declared the URL http://account-service:2222. No matter how many pods have been created, the service will always be available at that URL, and requests are load balanced between all pods by Kubernetes out of the box. If we would like to access a service outside Kubernetes, for example in a web browser, we need to call it with the NodePort shown below the container's default port – in this sample it is port 31638 for the account service and port 31171 for the customer service. If you run Minikube on Windows, your Kubernetes is probably available under the 192.168.99.100 address, so you could try to call the account service using the URL http://192.168.99.100:31638/accounts. Before such a test you need to create the collection in the Mongo database and the micro/micro user, which is configured for that service inside application.yml.

kube-3

Ok, we have our two microservices available under two different ports, which is not exactly what we need. We need some kind of gateway available under one IP that proxies our requests to the right service by matching the request path. Fortunately, such an option is also available on Kubernetes – Ingress. Here's the ingress configuration file. There are two rules defined: the first for account-service and the second for customer-service. Our gateway is available under the micro.all hostname and the default HTTP port.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: micro.all
    http:
      paths:
      - path: /account
        backend:
          serviceName: account-service
          servicePort: 2222
      - path: /customer
        backend:
          serviceName: customer-service
          servicePort: 3333

The last thing that needs to be done to make the gateway work is to add the following entry to the system hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows). Now, you can call http://micro.all/accounts or http://micro.all/customers/{id} from your web browser; the latter also calls the account service in the background.

[MINIKUBE_IP] micro.all

Conclusion

Kubernetes is a great tool for microservices clustering and orchestration. It is still a relatively new solution under active development. It can be used together with the Spring Boot stack or as an alternative to Spring Cloud Netflix OSS, which currently seems to be the most popular solution for microservices. It also has a UI dashboard where you can manage and monitor all resources. A production-grade configuration is certainly more complicated than a single-host development setup with Minikube, but I don't think that is a solid argument against Kubernetes.

Launch microservice in Docker container

Docker, Microservices and Continuous Delivery are increasingly popular topics among modern development teams. Today I'm going to create a simple microservice and show you how to run it in a Docker container using a Maven plugin or a Jenkins pipeline. Let's start with the application code, which is available at https://github.com/piomin/sample-docker-microservice.git. It has endpoints for searching all persons and for finding a single person by id. Here's the controller code:

package pl.piomin.microservices.person;

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Api {

	protected Logger logger = Logger.getLogger(Api.class.getName());

	private List<Person> persons;

	public Api() {
		persons = new ArrayList<>();
		persons.add(new Person(1, "Jan", "Kowalski", 22));
		persons.add(new Person(2, "Adam", "Malinowski", 33));
		persons.add(new Person(3, "Tomasz", "Janowski", 25));
		persons.add(new Person(4, "Alina", "Iksińska", 54));
	}

	@RequestMapping("/person")
	public List<Person> findAll() {
		logger.info("Api.findAll()");
		return persons;
	}

	@RequestMapping("/person/{id}")
	public Person findById(@PathVariable("id") Integer id) {
		logger.info(String.format("Api.findById(%d)", id));
		return persons.stream().filter(p -> (p.getId().intValue() == id)).findAny().get();
	}

}
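
The controller operates on a simple Person model. Its exact definition is in the repository; a minimal sketch matching the constructor calls and the getId() usage above (the field names are assumptions) could be:

package pl.piomin.microservices.person;

// Hypothetical sketch of the Person model used by the Api controller.
public class Person {

	private Integer id;
	private String firstName;
	private String lastName;
	private int age;

	public Person(Integer id, String firstName, String lastName, int age) {
		this.id = id;
		this.firstName = firstName;
		this.lastName = lastName;
		this.age = age;
	}

	public Integer getId() {
		return id;
	}

	// remaining getters and setters omitted for brevity

}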

We need to have Docker installed on our machine and a Docker Registry container running on port 5000. If you are interested in commercial support, there is also Docker Trusted Registry, which provides an image registry and some other features like LDAP/Active Directory integration and security certificates.

docker run -d --name registry -p 5000:5000 registry:latest

We use openjdk as the base image for our new microservice image defined in the Dockerfile. The application JAR file will be launched with the java command and exposed on port 2222.

FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD sample-docker-microservice-1.0-SNAPSHOT.jar person-service.jar
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 2222

We use the docker-maven-plugin to configure the image building process inside pom.xml. There is no need to use a Dockerfile with that plugin – it has equivalent configuration tags which could be used instead of Dockerfile entries. Our example is, however, based on a Dockerfile.

<plugin>
	<groupId>com.spotify</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<version>0.4.13</version>
	<configuration>
		<imageName>${docker.image.prefix}/${project.artifactId}</imageName>
		<imageTags>${project.version}</imageTags>
		<dockerDirectory>src/main/docker</dockerDirectory>
		<dockerHost>https://192.168.99.100:2376</dockerHost>
		<dockerCertPath>C:\Users\minkowp\.docker\machine\machines\default</dockerCertPath>
		<resources>
			<resource>
				<targetPath>/</targetPath>
				<directory>${project.build.directory}</directory>
				<include>${project.build.finalName}.jar</include>
			</resource>
		</resources>
	</configuration>
</plugin>

Finally, we can build our code using the Maven command.

mvn clean package docker:build

After running the Maven command the image is tagged and pushed to the local registry.

docker tag e106e5bf3d57 localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT
docker push localhost:5000/microservices/sample-docker-microservice:1.0-SNAPSHOT

The application image is now registered in the local Docker Registry. Optionally, we could push it to docker.io or to an enterprise Docker Trusted Registry. We can check it using the API available at http://192.168.99.100:5000/v2/_catalog. Here's the Docker command for running a container with the newly created image stored in the local registry. The service is available at http://192.168.99.100:2222/person/.

docker run -d --name sample1 -p 2222:2222 microservice/sample-docker-microservice:1.0-SNAPSHOT