Microservices with Micronaut, KrakenD and Consul

In this article, you will learn how to use the KrakenD API gateway with Consul DNS and Micronaut to build microservices. Micronaut is a modern JVM framework for building microservice and serverless applications. It provides built-in support for the most popular discovery servers, one of which is HashiCorp's Consul. We can also easily integrate Micronaut with Zipkin to implement distributed tracing. The only thing missing is an API gateway tool, especially if we compare it with Spring Boot, where we can run Spring Cloud Gateway. Is that a problem? Of course not, since we can include a third-party API gateway in our system.

We will use KrakenD. Why? Although it is not the most popular API gateway tool, it is very interesting. First of all, it is fast and lightweight. Also, we can easily integrate it with Zipkin and Consul, which is exactly our goal in this article.

Source Code

In this article, I will use the source code from my previous article Guide to Microservices with Micronaut and Consul. Since it was written two years ago, I had to update the version of the Micronaut framework. Fortunately, the only other thing I had to change was the groupId of the micronaut-openapi artifact. After that change, everything worked perfectly fine.

If you would like to try it out yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. Then just follow my instructions 🙂

Architecture

Firstly, let's take a look at the architecture of our sample system. We have three microservices: employee-service, department-service and organization-service. All of them are simple REST applications. The organization-service calls the API exposed by department-service, while department-service calls the API of employee-service. They use Consul discovery to locate the network addresses of the target microservices, and they send traces to Zipkin. Each application may be started in multiple instances. At the edge of our system there is an API gateway: KrakenD. KrakenD integrates with Consul discovery through DNS. It also sends traces to Zipkin. The architecture is visible in the picture below.

(Figure: krakend-consul-micronaut-architecture)
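To make the call chain concrete, here is a minimal sketch in Python: plain functions and hard-coded sample data stand in for the REST services, while HTTP, discovery and tracing are omitted.

```python
# Hypothetical sketch of the request flow: department-service calls
# employee-service for each department. All data is made up for illustration.

EMPLOYEES = [
    {"id": 1, "name": "John Smith", "departmentId": 1},
    {"id": 2, "name": "Anna Hamilton", "departmentId": 2},
]
DEPARTMENTS = [
    {"id": 1, "name": "Test1", "organizationId": 1},
    {"id": 2, "name": "Test2", "organizationId": 1},
]

def employees_by_department(department_id):
    # employee-service: GET /employees/department/{departmentId}
    return [e for e in EMPLOYEES if e["departmentId"] == department_id]

def departments_with_employees(organization_id):
    # department-service: for each matching department, call employee-service
    result = []
    for d in DEPARTMENTS:
        if d["organizationId"] == organization_id:
            result.append({**d, "employees": employees_by_department(d["id"])})
    return result

print(len(departments_with_employees(1)))  # 2
```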

Running Consul, Zipkin and microservices

In the first step, we are going to run Consul and Zipkin in Docker containers. The simplest way to start Consul is to run it in development mode. To do that, execute the following command. It is important to expose two ports: 8500 and 8600. The first is responsible for discovery, while the second serves DNS.

$ docker run -d --name=consul \
   -p 8500:8500 -p 8600:8600/udp \
   -e CONSUL_BIND_INTERFACE=eth0 consul

Then, we need to run Zipkin. Don’t forget to expose port 9411.

$ docker run -d --name=zipkin -p 9411:9411 openzipkin/zipkin

Finally, we can run each of our applications. They register themselves in Consul on startup and listen on randomly generated port numbers. Here's the common configuration for every Micronaut application.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false
consul:
  client:
    registration:
      enabled: true
tracing:
  zipkin:
    enabled: true
    http:
      url: http://localhost:9411
    sampler:
      probability: 1

Consul acts as a configuration server for the applications. We use Micronaut Config Client for fetching property sources on startup.

micronaut:
  application:
    name: employee-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "localhost:8500"
    config:
      format: YAML
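For example, with the default settings Micronaut looks for property sources under the /config prefix in Consul's KV store: config/application for shared properties and config/employee-service for application-specific ones. A hypothetical YAML value stored under config/employee-service might look like this (the property itself is just an illustration, not part of the sample code):

```yaml
# Hypothetical value of the Consul KV key config/employee-service
app:
  greeting: Hello from Consul
```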

Using Micronaut framework

In order to expose a REST API and integrate with Consul and Zipkin, we need to include the following dependencies.

<dependencies>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-http-server-netty</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-tracing</artifactId>
        </dependency>
        <dependency>
            <groupId>io.zipkin.brave</groupId>
            <artifactId>brave-instrumentation-http</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.zipkin.reporter2</groupId>
            <artifactId>zipkin-reporter</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.opentracing.brave</groupId>
            <artifactId>brave-opentracing</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-discovery-client</artifactId>
        </dependency>
</dependencies>

The tracing headers (spans) are propagated across applications. Here's the endpoint in department-service. It calls the GET /employees/department/{departmentId} endpoint exposed by employee-service.

@Get("/organization/{organizationId}/with-employees")
@ContinueSpan
public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
   LOGGER.info("Department find: organizationId={}", organizationId);
   List<Department> departments = repository.findByOrganization(organizationId);
   departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
   return departments;
}

In order to call the employee-service endpoint, we use the Micronaut declarative REST client.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {
   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);	
}

Here's the implementation of the GET /employees/department/{departmentId} endpoint inside employee-service. Micronaut propagates tracing spans across subsequent requests thanks to the @ContinueSpan annotation.

@Get("/department/{departmentId}")
@ContinueSpan
public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
    LOGGER.info("Employees find: departmentId={}", departmentId);
    return repository.findByDepartment(departmentId);
}

Configure KrakenD Gateway and Consul DNS

We can configure KrakenD using JSON notation. Firstly, we need to define the endpoints. The integration with Consul discovery is configured in the backend section. The host has to match the DNS name of the downstream service in Consul. We also set the target URL (url_pattern) and the service discovery type (sd). Let's take a look at the list of endpoints for department-service. We expose methods for searching by id (GET /department/{id}), adding a new department (POST /department), and searching for all departments with their lists of employees within a single organization (GET /department-with-employees/{organizationId}).

    {
      "endpoint": "/department/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/{id}",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/department-with-employees/{organizationId}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/organization/{organizationId}/with-employees",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/department",
      "method": "POST",
      "backend": [
        {
          "url_pattern": "/departments",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }

It is also worth mentioning that we cannot create conflicting routes with KrakenD. For example, I couldn't define the endpoint GET /department/organization/{organizationId}/with-employees, because it would conflict with the already existing endpoint GET /department/{id}. To avoid that, I created a new context path /department-with-employees for my endpoint.
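The clash can be illustrated with a short sketch (my own illustration of how a tree-based router sees the two paths, not KrakenD's actual routing code): at the second path segment, the parameter {id} and the literal organization compete for the same position.

```python
# Detect the kind of route conflict described above: a parameter segment
# and a literal segment registered at the same position of the path tree.

def segments(path):
    return path.strip("/").split("/")

def conflicts(a, b):
    for sa, sb in zip(segments(a), segments(b)):
        is_param_a = sa.startswith("{")
        is_param_b = sb.startswith("{")
        if is_param_a != is_param_b:
            return True   # parameter vs literal at the same position
        if not is_param_a and sa != sb:
            return False  # paths diverge on literals: no conflict
    return False

# {id} vs "organization" at the second segment -> conflict
print(conflicts("/department/{id}",
                "/department/organization/{organizationId}/with-employees"))  # True
# different first segment -> no conflict
print(conflicts("/department/{id}",
                "/department-with-employees/{organizationId}"))  # False
```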

Similarly, I created the following configuration for the employee-service endpoints.

    {
      "endpoint": "/employee/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/employees/{id}",
          "sd": "dns",
          "host": [
            "employee-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/employee",
      "method": "POST",
      "backend": [
        {
          "url_pattern": "/employees",
          "sd": "dns",
          "host": [
            "employee-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }

In order to integrate KrakenD with Consul, we need to configure local DNS properly on our machine. It was quite a challenging task for me, since I'm not very familiar with networking topics. By default, Consul listens on port 8600 for DNS queries in the consul domain, but standard DNS is served on port 53. Therefore, we need to configure DNS forwarding for Consul service discovery. There are several ways to do that, and you may read more about it here. I chose the dnsmasq tool. Following the guide, we need to create a file, e.g. /etc/dnsmasq.d/10-consul, with the following single line.

server=/consul/127.0.0.1#8600
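With this line, dnsmasq forwards every query for the consul domain to 127.0.0.1:8600, while everything else goes to the regular upstream resolvers. Assuming a typical Linux setup (the file location and the upstream address are just examples), the resolver configuration could then look like this:

```
# /etc/resolv.conf -- local dnsmasq first, then a hypothetical upstream resolver
nameserver 127.0.0.1
nameserver 192.168.1.1
```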

Finally, we need to start the dnsmasq service and add 127.0.0.1 to the list of nameservers. Here's my configuration of DNS servers.

Testing Consul DNS

Firstly, let’s run all our sample microservices. They are registered in Consul under the following names.

(Image: krakend-consul-services)

I ran two instances of employee-service. Of course, all the applications are listening on randomly generated ports.

(Image: krakend-consul-instances)

Finally, if you run the dig command with the DNS name of a service, e.g. dig department-service.service.consul SRV, you should receive a response listing the registered instances. That means we may proceed to the last part of our exercise!

Running KrakenD API Gateway

Before we run the KrakenD API gateway, let's configure one additional thing: the integration with Zipkin. To do that, we need to create the extra_config section. Enabling Zipkin only requires adding the zipkin exporter to the opencensus module. We need to set the URL (including port and path) where Zipkin accepts spans, and a service name for KrakenD spans. I have also enabled metrics. Here's the relevant part of the KrakenD configuration.

  "extra_config": {
    "github_com/devopsfaith/krakend-opencensus": {
      "sample_rate": 100,
      "reporting_period": 1,
      "exporters": {
        "zipkin": {
          "collector_url": "http://localhost:9411/api/v2/spans",
          "service_name": "api-gateway"
        }
      }
    },
    "github_com/devopsfaith/krakend-metrics": {
      "collection_time": "30s",
      "proxy_disabled": false,
      "listen_address": ":8090"
    }
  }

Finally, we can run KrakenD. The only parameter we need to pass is the location of the krakend.json configuration file. You may find the full version of that file in my GitHub repository inside the config directory.

$ krakend run -c krakend.json -d

Testing KrakenD with Consul and Micronaut

Once we have started all our microservices, Consul, Zipkin, and KrakenD, we may proceed to the tests. First, let's add some employees and departments by sending requests through the API gateway. KrakenD is listening on port 8080.

$ curl http://localhost:8080/employee -d '{"name":"John Smith","age":30,"position":"Architect","departmentId":1,"organizationId":1}' -H "Content-Type: application/json"
{"age":30,"departmentId":1,"id":1,"name":"John Smith","organizationId":1,"position":"Architect"}

$ curl http://localhost:8080/employee -d '{"name":"Paul Walker","age":22,"position":"Developer","departmentId":1,"organizationId":1}' -H "Content-Type: application/json"
{"age":22,"departmentId":1,"id":2,"name":"Paul Walker","organizationId":1,"position":"Developer"}

$ curl http://localhost:8080/employee -d '{"name":"Anna Hamilton","age":26,"position":"Developer","departmentId":2,"organizationId":1}' -H "Content-Type: application/json"
{"age":26,"departmentId":2,"id":3,"name":"Anna Hamilton","organizationId":1,"position":"Developer"}

$ curl http://localhost:8080/department -d '{"name":"Test1","organizationId":1}' -H "Content-Type:application/json"
{"id":1,"name":"Test1","organizationId":1}

$ curl http://localhost:8080/department -d '{"name":"Test2","organizationId":1}' -H "Content-Type:application/json"
{"id":2,"name":"Test2","organizationId":1}

Then let's call a slightly more complex endpoint: GET /department-with-employees/{organizationId}. As you probably remember, it is exposed by department-service and calls employee-service to fetch all employees assigned to a particular department.

$ curl http://localhost:8080/department-with-employees/1

However, we received a response with the HTTP 500 error code. We can find more details in the KrakenD logs.

(Image: krakend-consul-logs)

KrakenD is unable to parse the JSON array returned as a response by department-service. Therefore, we need to declare it explicitly with "is_collection": true so that KrakenD can convert it into an object for further manipulation. Here's the updated configuration for that endpoint.

    {
      "endpoint": "/department-with-employees/{organizationId}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/organization/{organizationId}/with-employees",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true,
          "is_collection": true
        }
      ]
    }

Now, let’s call the same endpoint once again. It works perfectly fine!

$ curl http://localhost:8080/department-with-employees/1   
{"collection":[{"id":1,"name":"Test1","organizationId":1},{"employees":[{"age":26,"id":3,"name":"Anna Hamilton","position":"Developer"}],"id":2,"name":"Test2","organizationId":1}]}
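As the response shows, the array has been wrapped under the collection key. The behavior enabled by "is_collection": true can be sketched as follows (my own illustration of the effect, not KrakenD's source code):

```python
import json

def wrap_collection(body):
    # KrakenD manipulates JSON objects; with "is_collection": true a
    # top-level array is wrapped under the "collection" key so that it
    # can be processed like any other object.
    data = json.loads(body)
    return {"collection": data} if isinstance(data, list) else data

print(wrap_collection('[{"id":1},{"id":2}]'))  # {'collection': [{'id': 1}, {'id': 2}]}
```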

The last thing we can do here is check out the traces collected by Zipkin. Thanks to Micronaut's support for span propagation (@ContinueSpan), all the subsequent requests are grouped together.

The picture below shows the Zipkin timeline for the latest request.

Conclusion

If you are looking for a gateway for your microservices, KrakenD is an excellent choice. Moreover, if you use Consul as a discovery server and Zipkin (or Jaeger) as a tracing tool, it is easy to get started with KrakenD. It also supports service discovery with Netflix Eureka, although that is a little more complicated to configure. Of course, you may also run KrakenD on Kubernetes (and integrate it with Kubernetes discovery), which is an absolute "must-have" for a modern API gateway.

2 COMMENTS

Javier

Hi

Good tutorial. I have just one question.
If you have your micros wrapped in a Docker image, when the service registers itself with Consul, it uses its internal port. However, the service is only accessible via the exposed container port.

Is there a way to force Consul to register services using the Docker-exposed port instead of the port declared by the Spring app?

Thanks in advance

    piotr.minkowski

    Hi,
    It seems you can override the IP address, but not the HTTP port:

    consul:
      client:
        registration:
          ip-addr:
          prefer-ip-address: true
          check:
            http: true
