More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes, OpenShift provides the S2I (Source-2-Image) mechanism, which may help reduce the time required to prepare application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach to building deployment configuration from source code. Dekorate (https://dekorate.io), a recently created open-source project, tries to solve that problem. The project looks very promising, which seems to be confirmed by Red Hat’s decision to include Dekorate in Red Hat OpenShift Application Runtimes as a “Tech Preview”. Continue reading “Deploying Spring Boot Application on OpenShift with Dekorate”
Quick Guide to Microservices with Quarkus on Openshift
You have had the opportunity to read many articles about building microservices with frameworks like Spring Boot or Micronaut on my blog. There is another very interesting framework dedicated to microservices architecture which is becoming increasingly popular – Quarkus. It is introduced as a next-generation Kubernetes/OpenShift-native Java framework. It is built on top of well-known Java standards like CDI, JAX-RS and Eclipse MicroProfile, which distinguishes it from Spring Boot. Continue reading “Quick Guide to Microservices with Quarkus on Openshift”
Continuous Delivery with OpenShift and Jenkins: A/B Testing
One of the reasons you could decide to use OpenShift instead of other container platforms (for example Kubernetes) is its out-of-the-box support for continuous delivery pipelines. Without proper tools the process of releasing software in your organization may be really time-consuming and painful. The speed of that process becomes especially important if you deliver software to production frequently. Currently, the most popular use case for it is a microservices-based architecture, where you have many small, independent applications.
Continue reading “Continuous Delivery with OpenShift and Jenkins: A/B Testing”
Running Java Microservices on OpenShift using Source-2-Image
One of the reasons you might prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, which builds reproducible Docker images from application source code. With S2I you don’t have to provide any Kubernetes YAML templates or build a Docker image yourself – OpenShift does it for you. Let’s see how it works. The best way to test it locally is with Minishift, but the first step is to prepare the sample applications’ source code.
1. Prepare application code
I have already described how to run your Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code here, so you will be able to compare the two approaches. The source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used for Kubernetes by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use a MongoDB backend. Here’s a diagram that illustrates the architecture.
Every microservice is a Spring Boot application, which uses Maven as a build tool. After including spring-boot-maven-plugin it is able to generate a single fat JAR with all dependencies, which is required by the Source-2-Image builder.
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
Every application includes the starters for Spring Web, Spring Actuator and Spring Data MongoDB for integration with the Mongo database. We also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for the applications that call REST endpoints exposed by other microservices.
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
  <dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
  </dependency>
  <dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
  </dependency>
</dependencies>
Every Spring Boot application exposes a REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.
@RestController
@RequestMapping("/employee")
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Autowired
    EmployeeRepository repository;

    @PostMapping("/")
    public Employee add(@RequestBody Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.save(employee);
    }

    @GetMapping("/{id}")
    public Employee findById(@PathVariable("id") String id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id).get();
    }

    @GetMapping("/")
    public Iterable<Employee> findAll() {
        LOGGER.info("Employee find");
        return repository.findAll();
    }

    @GetMapping("/department/{departmentId}")
    public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartmentId(departmentId);
    }

    @GetMapping("/organization/{organizationId}")
    public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
        LOGGER.info("Employee find: organizationId={}", organizationId);
        return repository.findByOrganizationId(organizationId);
    }

}
The application expects to have environment variables pointing to the database name, user and password.
spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
Inter-service communication is realized through the OpenFeign declarative REST client, which is included in the department and organization microservices.
@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

    @GetMapping("/employee/organization/{organizationId}")
    List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);

}
The address of the target service accessed by the Feign client is set inside application.yml. The communication goes through OpenShift/Kubernetes services, and the name of each service is injected through an environment variable.
spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080
2. Running Minishift
To run Minishift locally you just have to download it, copy minishift.exe (for Windows) to a directory on your PATH and start it with the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications: Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift we need to run some additional oc commands to enable Source-2-Image for Java apps. First, we grant the user admin the privileges required to access the project openshift. In this project OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let’s begin by enabling the admin-user addon.
$ minishift addons apply admin-user
Thanks to that addon we are able to log in to Minishift as a cluster admin. Now we can grant the cluster-admin role to the user admin.
$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin
After that, you can log in to the web console using the credentials admin/admin. You should be able to see the project openshift. That is not all: the image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command or just enable and apply the xpaas addon. I prefer the second option.
$ minishift addons apply xpaas
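If you prefer the first option instead, the manual import would look roughly like the command below. Treat it as a sketch – the registry path and tag are assumptions based on the 1.3 stream described later, so adjust them to the image you actually need.
$ oc import-image redhat-openjdk18-openshift:1.3 --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:1.3 --confirm -n openshift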
Now you can go to the Minishift web console (in my case available at https://192.168.99.100:8443), select the project openshift and navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.
The newest version of that image here is 1.3. Surprisingly, it is not the newest version available on OpenShift Container Platform, where you have version 1.5. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.
3. Deploying Java app using S2I
We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. It is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What’s important for us is that OpenShift generates a secret, by default available under the name mongodb.
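If you would like to check which keys that secret contains (we will inject them into our builds in a moment), you can inspect it from the command line:
$ oc describe secret mongodb
$ oc get secret mongodb -o yaml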
The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for Maven-based, standalone Java projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder when creating a new app, the type of application is auto-detected by OpenShift, and Java source code would be treated as a JEE application and deployed on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let’s create our first application on OpenShift. We begin with the employee microservice. Under normal circumstances each microservice would be located in a separate Git repository; in our sample all of them are placed in a single repository, so we have to provide the location of the current app by setting the --context-dir parameter. We also override the default branch with openshift, which has been created for the purposes of this article.
$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee-service
All our microservices connect to the Mongo database, so we also have to inject the connection settings and credentials into the application pod. This can be achieved by injecting the mongodb secret into the BuildConfig object.
$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_
BuildConfig is one of the OpenShift objects created after running the oc new-app command. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. A new build starts right after the application is created. First it downloads the source code from the Git repository, then builds it with Maven, assembles the build results into a Docker image, and finally pushes the image to the registry.
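You can follow this from the command line: the first command below lists the environment variables attached to the build configuration, while the second streams the logs of the latest build.
$ oc set env bc/employee --list
$ oc logs -f bc/employee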
Now we can create the next application – department. For simplicity, all three microservices connect to the same database, which is not recommended under normal circumstances. Apart from that, the only difference between the department and employee apps is the environment variable EMPLOYEE_SERVICE passed as a parameter to the oc new-app command.
$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee
As before, we also inject the mongodb secret into the BuildConfig object.
$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_
A build starts just after a new application is created, but we can also start one manually by executing the following command.
$ oc start-build department
Finally, we are deploying the last microservice. Here are the appropriate commands.
$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_
4. Deep look into created OpenShift objects
The list of builds may be displayed in the web console under the section Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available – one for each application. The same list can be displayed with the command oc get bc.
You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button as shown below.
We can always display the YAML configuration file with the BuildConfig definition, but it is also possible to view the same information in the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.
Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available at 172.30.1.1:5000. The list of available image streams can be found under the section Builds -> Images.
Every application is automatically exposed on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia) via services. You can also expose these services outside Minishift by creating an OpenShift Route with the oc expose command.
5. Testing the sample system
To proceed with the tests we should first expose our microservices outside Minishift. To do that just run the following commands.
$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization
After that we can access the applications at the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io as shown below.
Each microservice provides Swagger2 API documentation available on the page swagger-ui.html. Thanks to that we can easily test every single endpoint exposed by the service.
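You can also call the endpoints directly, for example with curl. The commands below are only an illustration – the route host follows the pattern shown above, so replace the project name and Minishift IP with your own values:
$ curl -X POST -H "Content-Type: application/json" -d '{"name":"Test User","age":30,"organizationId":1,"departmentId":1}' http://employee-myproject.192.168.99.100.nip.io/employee/
$ curl http://employee-myproject.192.168.99.100.nip.io/employee/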
It’s worth noticing that every application makes use of three approaches to inject environment variables into the pod:
- It stores the version number in the source code repository inside the file .s2i/environment. The S2I builder reads all the properties defined inside that file and sets them as environment variables for the builder pod, and then for the application pod. Our property name is VERSION, which is injected using Spring @Value and set for the Swagger API (the code is visible below).
- I have already set the names of dependent services as environment variables while executing the oc new-app command for the department and organization apps.
- I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.
@Value("${VERSION}") String version; public static void main(String[] args) { SpringApplication.run(DepartmentApplication.class, args); } @Bean public Docket swaggerApi() { return new Docket(DocumentationType.SWAGGER_2) .select() .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller")) .paths(PathSelectors.any()) .build() .apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build()); }
Conclusion
Today I showed you that deploying your applications on OpenShift may be a very simple thing. You don’t have to create any YAML descriptor files or build Docker images yourself to run your app – it is built directly from your source code. You can compare it with the deployment on Kubernetes described in one of my previous articles: Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.
Integration tests on OpenShift using Arquillian Cube and Istio
Building integration tests for applications deployed on Kubernetes/OpenShift platforms seems to be quite a big challenge. With Arquillian Cube, an Arquillian extension for managing Docker containers, it is not complicated. The Kubernetes extension, which is part of Arquillian Cube, helps you write and run integration tests for your Kubernetes/OpenShift application. It is responsible for creating and managing a temporary namespace for your tests and applying all the Kubernetes resources required to set up your environment; once everything is ready it just runs the defined integration tests.
One very good piece of news about Arquillian Cube is that it supports the Istio framework: you can apply Istio resources before executing tests. One of the most important features of Istio is the ability to control traffic behavior with rich routing rules, retries, delays, failovers, and fault injection. This allows you to test unexpected situations in the network communication between microservices, such as server errors or timeouts.
If you would like to run tests using Istio resources on Minishift, you should first install Istio on your platform. To do that you need to change some privileges for your OpenShift user. Let’s do that.
1. Enabling Istio on Minishift
Istio requires some high-level privileges to be able to run on OpenShift. To add those privileges to the current user we need to log in as a user with the cluster-admin role. First, we should enable the admin-user addon on Minishift by executing the following command.
$ minishift addons enable admin-user
After that you will be able to log in as the system:admin user, which has the cluster-admin role. With this user you can also grant the cluster-admin role to other users, for example admin. Let’s do that.
$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin
Now let’s create a new project dedicated to Istio and then add the required privileges.
$ oc new-project istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z default -n istio-system
$ oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-egressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-citadel-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-ingressgateway-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-cleanup-old-ca-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-post-install-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-mixer-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-pilot-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-sidecar-injector-service-account -n istio-system
$ oc adm policy add-scc-to-user anyuid -z istio-galley-service-account -n istio-system
$ oc adm policy add-scc-to-user privileged -z default -n myproject
Finally, we may proceed to the installation of the Istio components. I downloaded the newest version of Istio available at the time of writing – 1.0.1. The installation file is available under the install/kubernetes directory. You just have to apply it to your Minishift instance by calling the oc apply command.
$ oc apply -f install/kubernetes/istio-demo.yaml
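The installation spins up quite a few pods, so it may take a couple of minutes before everything is ready. A simple way to watch the progress is:
$ oc get pods -n istio-system -w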
2. Enabling Istio for Arquillian Cube
I have already described how to use Arquillian Cube to run tests on OpenShift in the article Testing microservices on OpenShift using Arquillian Cube. In comparison with the sample described in that article, we need to include the dependency responsible for enabling Istio features.
<dependency>
  <groupId>org.arquillian.cube</groupId>
  <artifactId>arquillian-cube-istio-kubernetes</artifactId>
  <version>1.17.1</version>
  <scope>test</scope>
</dependency>
Now we can use the @IstioResource annotation to apply Istio resources to the OpenShift cluster, or the IstioAssistant bean to access additional methods for adding and removing resources programmatically or for polling the availability of URLs.
Let’s take a look at the following JUnit test class using Arquillian Cube with Istio support. In addition to the standard test created for running on an OpenShift instance, I have added the Istio resource file customer-to-account-route.yaml. Then I have invoked the await method provided by IstioAssistant. The first test, test1CustomerRoute, creates a new customer, so it needs to wait until customer-route is deployed on OpenShift. The next test, test2AccountRoute, adds an account for the newly created customer, so it needs to wait until account-route is deployed on OpenShift. Finally, the test test3GetCustomerWithAccounts is run, which calls the method responsible for finding a customer by id together with the list of accounts. In that case customer-service calls an endpoint exposed by account-service. As you have probably noticed, the last line of that test method contains an assertion that the list of accounts is empty: Assert.assertTrue(c.getAccounts().isEmpty()). Why? Because we will simulate a timeout in the communication between customer-service and account-service using Istio rules.
@Category(RequiresOpenshift.class) @RequiresOpenshift @Templates(templates = { @Template(url = "classpath:account-deployment.yaml"), @Template(url = "classpath:deployment.yaml") }) @RunWith(ArquillianConditionalRunner.class) @IstioResource("classpath:customer-to-account-route.yaml") @FixMethodOrder(MethodSorters.NAME_ASCENDING) public class IstioRuleTest { private static final Logger LOGGER = LoggerFactory.getLogger(IstioRuleTest.class); private static String id; @ArquillianResource private IstioAssistant istioAssistant; @RouteURL(value = "customer-route", path = "/customer") private URL customerUrl; @RouteURL(value = "account-route", path = "/account") private URL accountUrl; @Test public void test1CustomerRoute() { LOGGER.info("URL: {}", customerUrl); istioAssistant.await(customerUrl, r -> r.isSuccessful()); LOGGER.info("URL ready. Proceeding to the test"); OkHttpClient httpClient = new OkHttpClient(); RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}"); Request request = new Request.Builder().url(customerUrl).post(body).build(); try { Response response = httpClient.newCall(request).execute(); ResponseBody b = response.body(); String json = b.string(); LOGGER.info("Test: response={}", json); Assert.assertNotNull(b); Assert.assertEquals(200, response.code()); Customer c = Json.decodeValue(json, Customer.class); this.id = c.getId(); } catch (IOException e) { e.printStackTrace(); } } @Test public void test2AccountRoute() { LOGGER.info("Route URL: {}", accountUrl); istioAssistant.await(accountUrl, r -> r.isSuccessful()); LOGGER.info("URL ready. Proceeding to the test"); OkHttpClient httpClient = new OkHttpClient(); RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"number\":\"01234567890\", \"balance\":10000, \"customerId\":\"" + this.id + "\"}"); Request request = new Request.Builder().url(accountUrl).post(body).build(); try { Response response = httpClient.newCall(request).execute(); ResponseBody b = response.body(); String json = b.string(); LOGGER.info("Test: response={}", json); Assert.assertNotNull(b); Assert.assertEquals(200, response.code()); } catch (IOException e) { e.printStackTrace(); } } @Test public void test3GetCustomerWithAccounts() { String url = customerUrl + "/" + id; LOGGER.info("Calling URL: {}", customerUrl); OkHttpClient httpClient = new OkHttpClient(); Request request = new Request.Builder().url(url).get().build(); try { Response response = httpClient.newCall(request).execute(); String json = response.body().string(); LOGGER.info("Test: response={}", json); Assert.assertNotNull(response.body()); Assert.assertEquals(200, response.code()); Customer c = Json.decodeValue(json, Customer.class); Assert.assertTrue(c.getAccounts().isEmpty()); } catch (IOException e) { e.printStackTrace(); } } }
3. Creating Istio rules
One of the interesting features provided by Istio is the ability to inject faults into route rules. We can specify one or more faults to inject while forwarding HTTP requests to the rule’s corresponding request destination. The faults can be either delays or aborts, and we can define the percentage of affected requests using the percent field for both types of fault. In the following Istio resource I have defined a 2-second delay for every single request sent to account-service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
  - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1
Besides the VirtualService we also need to define a DestinationRule for account-service. It is really simple – we just define the version label of the target service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1
Before running the test we should also modify the OpenShift deployment templates of our sample applications. We need to inject the Istio sidecar resources into the pod definitions using the istioctl kube-inject command as shown below.
$ istioctl kube-inject -f deployment.yaml -o customer-deployment-istio.yaml
$ istioctl kube-inject -f account-deployment.yaml -o account-deployment-istio.yaml
Finally, we may rewrite the generated files into OpenShift templates. Here’s a fragment of the OpenShift template containing the DeploymentConfig definition for account-service.
kind: Template apiVersion: v1 metadata: name: account-template objects: - kind: DeploymentConfig apiVersion: v1 metadata: name: account-service labels: app: account-service name: account-service version: v1 spec: template: metadata: annotations: sidecar.istio.io/status: '{"version":"364ad47b562167c46c2d316a42629e370940f3c05a9b99ccfe04d9f2bf5af84d","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}' name: account-service labels: app: account-service name: account-service version: v1 spec: containers: - env: - name: DATABASE_NAME valueFrom: secretKeyRef: key: database-name name: mongodb - name: DATABASE_USER valueFrom: secretKeyRef: key: database-user name: mongodb - name: DATABASE_PASSWORD valueFrom: secretKeyRef: key: database-password name: mongodb image: piomin/account-vertx-service name: account-vertx-service ports: - containerPort: 8095 resources: {} - args: - proxy - sidecar - --configPath - /etc/istio/proxy - --binaryPath - /usr/local/bin/envoy - --serviceCluster - account-service - --drainDuration - 45s - --parentShutdownDuration - 1m0s - --discoveryAddress - istio-pilot.istio-system:15007 - --discoveryRefreshDelay - 1s - --zipkinAddress - zipkin.istio-system:9411 - --connectTimeout - 10s - --statsdUdpAddress - istio-statsd-prom-bridge.istio-system:9125 - --proxyAdminPort - "15000" - --controlPlaneAuthPolicy - NONE env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ISTIO_META_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: ISTIO_META_INTERCEPTION_MODE value: REDIRECT image: gcr.io/istio-release/proxyv2:1.0.1 imagePullPolicy: IfNotPresent name: istio-proxy resources: requests: cpu: 10m securityContext: readOnlyRootFilesystem: true runAsUser: 1337 volumeMounts: - mountPath: /etc/istio/proxy name: istio-envoy - mountPath: /etc/certs/ name: istio-certs readOnly: true initContainers: - args: - -p - "15001" - -u - "1337" - -m - REDIRECT - -i - '*' - -x - "" - -b - 8095, - -d - "" image: gcr.io/istio-release/proxy_init:1.0.1 imagePullPolicy: IfNotPresent name: istio-init resources: {} securityContext: capabilities: add: - NET_ADMIN volumes: - emptyDir: medium: Memory name: istio-envoy - name: istio-certs secret: optional: true secretName: istio.default
4. Building applications
The sample applications are implemented using the Eclipse Vert.x framework. They use a Mongo database for storing data, and the connection settings are injected into the pods using Kubernetes Secrets.
public class MongoVerticle extends AbstractVerticle { private static final Logger LOGGER = LoggerFactory.getLogger(MongoVerticle.class); @Override public void start() throws Exception { ConfigStoreOptions envStore = new ConfigStoreOptions() .setType("env") .setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME"))); ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore); ConfigRetriever retriever = ConfigRetriever.create(vertx, options); retriever.getConfig(r -> { String user = r.result().getString("DATABASE_USER"); String password = r.result().getString("DATABASE_PASSWORD"); String db = r.result().getString("DATABASE_NAME"); JsonObject config = new JsonObject(); LOGGER.info("Connecting {} using {}/{}", db, user, password); config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db); final MongoClient client = MongoClient.createShared(vertx, config); final CustomerRepository service = new CustomerRepositoryImpl(client); ProxyHelper.registerService(CustomerRepository.class, vertx, service, "customer-service"); }); } }
MongoDB should be started on OpenShift before any applications that connect to it. To achieve this we point to the Mongo deployment resource in the Arquillian configuration file using the env.config.resource.name property.
The configuration of Arquillian Cube is visible below. We will use the existing namespace myproject, which has already been granted the required privileges (see Step 1). We also need to pass the authentication token of the user admin. You can obtain it with the command oc whoami -t after logging in to the OpenShift cluster.
<extension qualifier="openshift"> <property name="namespace.use.current">true</property> <property name="namespace.use.existing">myproject</property> <property name="kubernetes.master">https://192.168.99.100:8443</property> <property name="cube.auth.token">TYYccw6pfn7TXtH8bwhCyl2tppp5MBGq7UXenuZ0fZA</property> <property name="env.config.resource.name">mongo-deployment.yaml</property> </extension>
The communication between customer-service and account-service is realized with the Vert.x WebClient. We will set the request timeout for the client to 1 second. Because Istio injects a 2-second delay into the route, the communication is going to end with a timeout.
public class AccountClient { private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class); private Vertx vertx; public AccountClient(Vertx vertx) { this.vertx = vertx; } public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) { WebClient client = WebClient.create(vertx); client.get(8095, "account-service", "/account/customer/" + customerId).timeout(1000).send(res2 -> { if (res2.succeeded()) { LOGGER.info("Response: {}", res2.result().bodyAsString()); List accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList()); resultHandler.handle(Future.succeededFuture(accounts)); } else { resultHandler.handle(Future.succeededFuture(new ArrayList())); } }); return this; } }
The full code of sample applications is available on GitHub in the repository https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-istio-tests.
5. Running tests
You can run the tests during the Maven build or just from your IDE. The test1CustomerRoute test is executed first. It adds a new customer and saves the generated id for the two subsequent tests.
The next test is test2AccountRoute. It adds an account for the customer created during the previous test.
Finally, the test responsible for verifying the communication between the microservices is run. It verifies that the list of accounts is empty, which is the result of the timeout in the communication with account-service.
Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker
Here’s the next article in the “Quick Guide to…” series. This time we will discuss and run examples of Spring Boot microservices on Kubernetes. The structure of this article is quite similar to Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud, as they describe the same aspects of application development. I’m going to focus on showing you the differences and similarities in development between Spring Cloud and Kubernetes. The topics covered in this article are:
- Using Spring Boot 2.0 in cloud-native development
- Providing service discovery for all microservices using Spring Cloud Kubernetes project
- Injecting configuration settings into application pods using Kubernetes Config Maps and Secrets
- Building application images using Docker and deploying them on Kubernetes using YAML configuration files
- Using Spring Cloud Kubernetes together with Zuul proxy to expose a single Swagger API documentation for all microservices
Spring Cloud and Kubernetes may be treated as competing solutions when you build a microservices environment. Components like Eureka, Spring Cloud Config or Zuul provided by Spring Cloud may be replaced by built-in Kubernetes objects like services, config maps, secrets or ingresses. But even if you decide to use Kubernetes components instead of Spring Cloud, you can still take advantage of some interesting features provided throughout the whole Spring Cloud project.
One really interesting project that helps us in development is Spring Cloud Kubernetes (https://github.com/spring-cloud-incubator/spring-cloud-kubernetes). Although it is still in the incubation stage, it is definitely worth dedicating some time to. It integrates Spring Cloud with Kubernetes. I’ll show you how to use its implementation of the discovery client, inter-service communication with the Ribbon client and Zipkin discovery using Spring Cloud Kubernetes.
Before we proceed to the source code, let’s take a look at the following diagram. It illustrates the architecture of our sample system, which is quite similar to the architecture presented in the already mentioned article about microservices on Spring Cloud. There are three independent applications (employee-service, department-service, organization-service), which communicate with each other through a REST API. These Spring Boot microservices use some built-in mechanisms provided by Kubernetes: config maps and secrets for distributed configuration, etcd for service discovery, and ingresses for the API gateway.
Let’s proceed to the implementation. Currently, the newest stable version of Spring Cloud is Finchley.RELEASE. This version of spring-cloud-dependencies should be declared as a BOM for dependency management.
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>Finchley.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
Spring Cloud Kubernetes is not released as part of the Spring Cloud Release Train, so we need to define its version explicitly. Because we use Spring Boot 2.0, we have to include the newest SNAPSHOT version of the spring-cloud-kubernetes artifacts, which is 0.3.0.BUILD-SNAPSHOT.
The source code of sample applications presented in this article is available on GitHub in repository https://github.com/piomin/sample-spring-microservices-kubernetes.git.
Prerequisites
In order to be able to deploy and test our sample microservices we need to prepare a development environment. We can do that in the following steps:
- You need at least a single-node cluster instance of Kubernetes (Minikube) or OpenShift (Minishift) running on your local machine. You should start it and expose the embedded Docker client provided by both of them. Detailed instructions for Minishift may be found here: Quick guide to deploying Java apps on OpenShift. You can also use that description to run Minikube – just replace the word ‘minishift’ with ‘minikube’. In fact, it does not matter whether you choose Kubernetes or OpenShift – the next part of this tutorial is applicable to both of them.
- Spring Cloud Kubernetes requires access to the Kubernetes API in order to retrieve the list of addresses of the pods running for a single service. If you use Kubernetes you should just execute the following command:
$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
If you deploy your microservices on Minishift you should first enable the admin-user addon, then log in as a cluster admin and grant the required permissions.
$ minishift addons enable admin-user
$ oc login -u system:admin
$ oc policy add-role-to-user cluster-reader system:serviceaccount:myproject:default
- All our sample microservices use MongoDB as a backend store, so you should first run an instance of this database on your node. With Minishift it is quite simple, as you can use a predefined template just by selecting the Mongo service from the catalog. With Kubernetes the task is more difficult: you have to prepare the deployment configuration files yourself and apply them to the cluster. All the configuration files are available under the kubernetes directory inside the sample Git repository. To apply the following YAML definition to the cluster you should execute the command kubectl apply -f kubernetes\mongo-deployment.yaml. After that the Mongo database will be available under the name mongodb inside the Kubernetes cluster.
apiVersion: apps/v1 kind: Deployment metadata: name: mongodb labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo:latest ports: - containerPort: 27017 env: - name: MONGO_INITDB_DATABASE valueFrom: configMapKeyRef: name: mongodb key: database-name - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongodb key: database-user - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongodb key: database-password --- apiVersion: v1 kind: Service metadata: name: mongodb labels: app: mongodb spec: ports: - port: 27017 protocol: TCP selector: app: mongodb
1. Inject configuration with Config Maps and Secrets
When using Spring Cloud, the most obvious choice for realizing distributed configuration in your system is Spring Cloud Config. With Kubernetes you can use a ConfigMap, which holds key-value pairs of configuration data that can be consumed in pods. It is used for storing and sharing non-sensitive, unencrypted configuration information. To use sensitive information in your cluster, you must use Secrets. The usage of both of these Kubernetes objects can be nicely demonstrated with the example of MongoDB connection settings. Inside the Spring Boot application we can easily inject them using environment variables. Here’s a fragment of the application.yml file with the URI configuration.
spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}
While the username and password are sensitive fields, the database name is not, so we can place it inside a config map.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: microservices
Of course, username and password are defined as secrets.
apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: MTIzNDU2
  database-user: cGlvdHI=
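Note that the values stored in a Secret are only Base64-encoded, not encrypted. The values above can be produced (or verified) like this:
$ echo -n piotr | base64
cGlvdHI=
$ echo -n 123456 | base64
MTIzNDU2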
To apply the configuration to Kubernetes cluster we run the following commands.
$ kubectl apply -f kubernetes/mongodb-configmap.yaml
$ kubectl apply -f kubernetes/mongodb-secret.yaml
After that we should inject the configuration properties into the application’s pods. When defining the container configuration inside the Deployment YAML file we have to include references to the config map and the secret as environment variables, as shown below.
apiVersion: apps/v1 kind: Deployment metadata: name: employee labels: app: employee spec: replicas: 1 selector: matchLabels: app: employee template: metadata: labels: app: employee spec: containers: - name: employee image: piomin/employee:1.0 ports: - containerPort: 8080 env: - name: MONGO_DATABASE valueFrom: configMapKeyRef: name: mongodb key: database-name - name: MONGO_USERNAME valueFrom: secretKeyRef: name: mongodb key: database-user - name: MONGO_PASSWORD valueFrom: secretKeyRef: name: mongodb key: database-password
2. Building service discovery with Kubernetes
We usually run microservices on Kubernetes using Docker containers. One or more containers are grouped into pods, which are the smallest deployable units created and managed in Kubernetes. A good practice is to run only one container inside a single pod. If you would like to scale up your microservice, you just have to increase the number of running pods. All running pods that belong to a single microservice are logically grouped by a Kubernetes Service. This service may be visible outside the cluster and is able to load balance incoming requests between all running pods. The following service definition groups all pods labelled with the field app equal to employee.
apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: employee
A Service can be used for accessing an application from outside the Kubernetes cluster or for inter-service communication inside a cluster. However, the communication between microservices can be implemented more comfortably with Spring Cloud Kubernetes. First we need to include the following dependency in the project pom.xml.
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes</artifactId>
  <version>0.3.0.BUILD-SNAPSHOT</version>
</dependency>
Then we should enable the discovery client for the application – the same as we have always done for discovery with Spring Cloud Netflix Eureka. This allows you to query Kubernetes endpoints (services) by name. This discovery feature is also used by the Spring Cloud Kubernetes Ribbon and Zipkin projects to fetch, respectively, the list of pods defined for a microservice to be load balanced and the Zipkin servers available to send traces or spans to.
@SpringBootApplication
@EnableDiscoveryClient
@EnableMongoRepositories
@EnableSwagger2
public class EmployeeApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeApplication.class, args);
    }

    // ...
}
The last important thing in this section is to guarantee that the Spring application name is exactly the same as the Kubernetes service name for the application. For the application employee-service it is employee.
spring:
  application:
    name: employee
3. Building microservice using Docker and deploying on Kubernetes
There is nothing unusual in our sample microservices. We have included some standard Spring dependencies for building REST-based microservices, integrating with MongoDB and generating API documentation using Swagger2.
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>2.9.2</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-mongodb</artifactId> </dependency>
In order to integrate with MongoDB we should create an interface that extends the standard Spring Data CrudRepository.
public interface EmployeeRepository extends CrudRepository<Employee, String> {

    List<Employee> findByDepartmentId(Long departmentId);
    List<Employee> findByOrganizationId(Long organizationId);

}
The entity class should be annotated with Mongo’s @Document and the primary key field with @Id.
@Document(collection = "employee") public class Employee { @Id private String id; private Long organizationId; private Long departmentId; private String name; private int age; private String position; // ... }
The repository bean is injected into the controller class. Here’s the full implementation of our REST API inside employee-service.
@RestController
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Autowired
    EmployeeRepository repository;

    @PostMapping("/")
    public Employee add(@RequestBody Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.save(employee);
    }

    @GetMapping("/{id}")
    public Employee findById(@PathVariable("id") String id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id).get();
    }

    @GetMapping("/")
    public Iterable<Employee> findAll() {
        LOGGER.info("Employee find");
        return repository.findAll();
    }

    @GetMapping("/department/{departmentId}")
    public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartmentId(departmentId);
    }

    @GetMapping("/organization/{organizationId}")
    public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
        LOGGER.info("Employee find: organizationId={}", organizationId);
        return repository.findByOrganizationId(organizationId);
    }

}
In order to run our microservices on Kubernetes we should first build the whole Maven project with the mvn clean install command. Each microservice has a Dockerfile placed in its root directory. Here’s the Dockerfile definition for employee-service.
FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8080
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]
Let’s build Docker images for all three sample microservices.
$ cd employee-service
$ docker build -t piomin/employee:1.0 .
$ cd department-service
$ docker build -t piomin/department:1.0 .
$ cd organization-service
$ docker build -t piomin/organization:1.0 .
The last step is to deploy the Docker containers with applications on Kubernetes. To do that just execute kubectl apply on the YAML configuration files. The sample deployment file for employee-service has been demonstrated in Step 1. All the required deployment files are available inside the project repository in the kubernetes directory.
$ kubectl apply -f kubernetes\employee-deployment.yaml
$ kubectl apply -f kubernetes\department-deployment.yaml
$ kubectl apply -f kubernetes\organization-deployment.yaml
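Before moving on it is worth checking that all the pods have reached the Running state and that the corresponding services exist:
$ kubectl get pods
$ kubectl get svc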
4. Communication between microservices with Spring Cloud Kubernetes Ribbon
All the microservices are now deployed on Kubernetes, so it’s worth discussing some aspects related to inter-service communication. The application employee-service, in contrast to the other microservices, does not invoke any other microservices. Let’s take a look at the other microservices, which call the API exposed by employee-service and communicate with each other (organization-service calls the department-service API).
First we need to include some additional dependencies in the project. We use Spring Cloud Ribbon and OpenFeign. Alternatively, you can also use a Spring @LoadBalanced RestTemplate.
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-ribbon</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId> <version>0.3.0.BUILD-SNAPSHOT</version> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency>
Here’s the main class of department-service. It enables the Feign client using the @EnableFeignClients annotation. It works the same as with discovery based on Spring Cloud Netflix Eureka. OpenFeign uses Ribbon for client-side load balancing, and Spring Cloud Kubernetes Ribbon provides some beans that force Ribbon to communicate with the Kubernetes API through the Fabric8 KubernetesClient.
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableMongoRepositories
@EnableSwagger2
public class DepartmentApplication {

    public static void main(String[] args) {
        SpringApplication.run(DepartmentApplication.class, args);
    }

    // ...
}
Here’s the implementation of the Feign client for calling the method exposed by employee-service.
@FeignClient(name = "employee") public interface EmployeeClient { @GetMapping("/department/{departmentId}") List findByDepartment(@PathVariable("departmentId") String departmentId); }
Finally, we have to inject the Feign client bean into the REST controller. Now we may call the method defined inside EmployeeClient, which is equivalent to calling the REST endpoint.
@RestController
public class DepartmentController {

    private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);

    @Autowired
    DepartmentRepository repository;
    @Autowired
    EmployeeClient employeeClient;

    // ...

    @GetMapping("/organization/{organizationId}/with-employees")
    public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") Long organizationId) {
        LOGGER.info("Department find: organizationId={}", organizationId);
        List<Department> departments = repository.findByOrganizationId(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }

}
5. Building API gateway using Kubernetes Ingress
An Ingress is a collection of rules that allow incoming requests to reach the downstream services. In our microservices architecture the ingress plays the role of an API gateway. To create it we should first prepare a YAML descriptor file. The descriptor should contain the hostname under which the gateway will be available and the mapping rules to the downstream services.
apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gateway-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: backend: serviceName: default-http-backend servicePort: 80 rules: - host: microservices.info http: paths: - path: /employee backend: serviceName: employee servicePort: 8080 - path: /department backend: serviceName: department servicePort: 8080 - path: /organization backend: serviceName: organization servicePort: 8080
You have to execute the following command to apply the configuration visible above to the Kubernetes cluster.
$ kubectl apply -f kubernetes\ingress.yaml
For testing this solution locally we have to insert a mapping between the IP address and the hostname set in the ingress definition into the hosts file as shown below. After that we can call the services through the ingress using the defined hostname, like this: http://microservices.info/employee.
192.168.99.100 microservices.info
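For example, a quick smoke test of the exposed endpoints through the ingress could look like this (assuming the hosts mapping above is in place):
$ curl http://microservices.info/employee
$ curl http://microservices.info/department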
You can check the details of the created ingress by executing the command kubectl describe ing gateway-ingress.
6. Enabling API specification on gateway using Swagger2
OK, but what if we would like to expose a single Swagger documentation site for all the microservices deployed on Kubernetes? Well, here things get complicated… We could run a container with Swagger UI and map all the paths exposed by the ingress manually, but that is not really a good solution…
In that case we can use Spring Cloud Kubernetes Ribbon one more time – this time together with Spring Cloud Netflix Zuul. Zuul will act as a gateway only for serving the Swagger API documentation.
Here’s the list of dependencies used in my gateway-service project.
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-zuul</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes</artifactId> <version>0.3.0.BUILD-SNAPSHOT</version> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-ribbon</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId> <version>0.3.0.BUILD-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>2.9.2</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>2.9.2</version> </dependency>
The Kubernetes discovery client will detect all services exposed in the cluster, but we would like to display documentation only for our three microservices. That’s why I defined the following routes for Zuul.
zuul:
  routes:
    department:
      path: /department/**
    employee:
      path: /employee/**
    organization:
      path: /organization/**
Now we can use the ZuulProperties bean to get the route addresses from Kubernetes discovery and configure them as Swagger resources as shown below.
@Configuration
public class GatewayApi {

    @Autowired
    ZuulProperties properties;

    @Primary
    @Bean
    public SwaggerResourcesProvider swaggerResourcesProvider() {
        return () -> {
            List<SwaggerResource> resources = new ArrayList<>();
            properties.getRoutes().values().stream()
                .forEach(route -> resources.add(createResource(route.getId(), "2.0")));
            return resources;
        };
    }

    private SwaggerResource createResource(String location, String version) {
        SwaggerResource swaggerResource = new SwaggerResource();
        swaggerResource.setName(location);
        swaggerResource.setLocation("/" + location + "/v2/api-docs");
        swaggerResource.setSwaggerVersion(version);
        return swaggerResource;
    }
}
The application gateway-service should be deployed on the cluster the same way as the other applications. You can see the list of running services by executing the command kubectl get svc. The Swagger documentation is available at the address http://192.168.99.100:31237/swagger-ui.html.
Conclusion
I’m actually rooting for the Spring Cloud Kubernetes project, which is still at the incubation stage. Kubernetes’ popularity as a platform has been growing rapidly over the last months, but it still has some weaknesses. One of them is inter-service communication: Kubernetes doesn’t give us many out-of-the-box mechanisms that allow configuring more advanced rules. This is one reason for creating service mesh frameworks on Kubernetes like Istio or Linkerd. While these projects are still relatively new solutions, Spring Cloud is a stable, opinionated framework. Why not use it to provide service discovery, inter-service communication or load balancing? Thanks to Spring Cloud Kubernetes it is possible.
Testing microservices on OpenShift using Arquillian Cube
I came into touch with the Arquillian framework for the first time when I was building automated end-to-end tests for JavaEE-based applications. At that time testing applications deployed on JavaEE servers was not very comfortable. Arquillian came with a nice solution for that problem, providing useful mechanisms for testing EJBs deployed on an embedded application server.
Currently, Arquillian provides multiple modules dedicated to different technologies and use cases. One of these modules is Arquillian Cube. With this extension you can create integration/functional tests running against Docker containers or even more advanced orchestration platforms like Kubernetes or OpenShift.
In this article I’m going to show you how to use Arquillian Cube to build integration tests for applications running on the OpenShift platform. All the examples will be deployed locally on Minishift. Here’s the full list of topics covered in this article:
- Using Arquillian Cube for deploying, and running applications on Minishift
- Testing applications deployed on Minishift by calling their REST API exposed using OpenShift routes
- Testing inter-service communication between deployed applications based on Kubernetes services
Before reading this article it is worth considering two of my previous articles about Kubernetes and OpenShift:
- Running Vert.x Microservices on Kubernetes/OpenShift – describes how to run Vert.x microservices on Minishift, integrate them with Mongo database, and provide inter-service communication between them
- Quick guide to deploying Java apps on OpenShift – describes further steps of running Minishift and deploying applications on that platform
The following picture illustrates the architecture of the currently discussed solution. We will build and deploy two sample applications on Minishift. They integrate with a NoSQL database, which is also run as a service on the OpenShift platform.
Now, we may proceed to the development.
1. Including Arquillian Cube dependencies
Before including the Arquillian Cube dependencies we should define a dependency management section in our pom.xml. It should contain the BOM of the Arquillian framework and of its Cube extension.
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.arquillian.cube</groupId>
            <artifactId>arquillian-cube-bom</artifactId>
            <version>1.15.3</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>1.4.0.Final</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>
Here’s the list of libraries used in my sample project. The most important thing is to include the starter for the Arquillian Cube OpenShift extension, which contains all required dependencies. It is also worth including the arquillian-cube-requirement artifact if you would like to annotate the test class with @RunWith(ArquillianConditionalRunner.class), and openshift-client in case you would like to use the Fabric8 OpenShiftClient.
<dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <version>1.4.0.Final</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.arquillian.cube</groupId>
    <artifactId>arquillian-cube-requirement</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.arquillian.cube</groupId>
    <artifactId>arquillian-cube-openshift-starter</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>openshift-client</artifactId>
    <version>3.1.12</version>
    <scope>test</scope>
</dependency>
2. Running Minishift
I gave detailed instructions on how to run Minishift locally in my previous articles about OpenShift. Here’s the full list of commands that should be executed in order to start Minishift, reuse the Docker daemon managed by Minishift and create a test namespace (project).
$ minishift start --vm-driver=virtualbox --memory=2G
$ minishift docker-env
$ minishift oc-env
$ oc login -u developer -p developer
$ oc new-project sample-deployment
We also have to create a Mongo database service on OpenShift. The OpenShift platform provides an easy way of deploying built-in services via the web console available at https://192.168.99.100:8443. You can select the required service on the main dashboard and just confirm the installation using the default properties. Otherwise, you would have to provide a YAML template with the deployment configuration and apply it to Minishift using the oc command. A YAML file will also be required if you decide to recreate the namespace for every single test case (explained in Step 3). I won’t paste the content of the template with the configuration for creating the MongoDB service on Minishift here. This file is available in my GitHub repository as /openshift/mongo-deployment.yaml. To access that file you need to clone the repository sample-vertx-kubernetes and switch to the branch openshift (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift-tests). It contains definitions of secret, persistentVolumeClaim, deploymentConfig and service objects.
3. Configuring connection with Minishift for Arquillian
All the Arquillian configuration settings should be provided in the arquillian.xml file located in the src/test/resources directory. When running Arquillian tests on Minishift you generally have two approaches that may be applied. You can create a new namespace per test suite and then remove it after the test, or just use an existing one and then remove all the created components within the selected namespace. The first approach is the default for every test until you modify it inside the Arquillian configuration file using the namespace.use.existing and namespace.use.current properties.
<extension qualifier="openshift">
    <property name="namespace.use.current">true</property>
    <property name="namespace.use.existing">sample-deployment</property>
    <property name="kubernetes.master">https://192.168.99.100:8443</property>
    <property name="cube.auth.token">EMNHP8QIB4A_VU4kE_vQv8k9he_4AV3GTltrzd06yMU</property>
</extension>
You also have to set the Kubernetes master address and an API token. In order to obtain the token just run the following command.
$ oc whoami -t
EMNHP8QIB4A_VU4kE_vQv8k9he_4AV3GTltrzd06yMU
4. Building Arquillian JUnit test
Every JUnit test class should be annotated with @RequiresOpenshift. It should also have the runner set, in this case ArquillianConditionalRunner. The test method testCustomerRoute applies the configuration passed inside the file deployment.yaml, which is assigned to the method using the @Template annotation.
The important part of this test is the route’s URL declaration. We have to annotate it with the following annotations:
- @RouteURL – it searches for a route with the name defined using the value parameter and injects it into a URL object instance
- @AwaitRoute – if you do not declare this annotation the test will finish just after starting, because deployment on OpenShift is processed asynchronously. @AwaitRoute forces the test to wait until the route is available on Minishift. We can set the timeout of waiting for the route (in this case it is 2 minutes) and the route’s path. The route’s path is especially important here; without it our test won’t locate the route and will fail after the 2-minute timeout.
The test method is very simple. In fact, I only send a POST request with a JSON object to the endpoint assigned to the customer-route route and verify that the HTTP status code is 200. Because I had a problem with injecting the route’s URL (it doesn’t work for my sample with Minishift v3.9.0, while it works with Minishift v3.7.1) I needed to prepare it manually in the code. If it worked properly we could use the injected URL url instance for that.
@Category(RequiresOpenshift.class)
@RequiresOpenshift
@RunWith(ArquillianConditionalRunner.class)
public class CustomerServiceApiTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerServiceApiTest.class);

    @ArquillianResource
    OpenShiftAssistant assistant;
    @ArquillianResource
    OpenShiftClient client;

    @RouteURL(value = "customer-route")
    @AwaitRoute(timeoutUnit = TimeUnit.MINUTES, timeout = 2, path = "/customer")
    private URL url;

    @Test
    @Template(url = "classpath:deployment.yaml")
    public void testCustomerRoute() {
        OkHttpClient httpClient = new OkHttpClient();
        RequestBody body = RequestBody.create(MediaType.parse("application/json"), "{\"name\":\"John Smith\", \"age\":33}");
        Request request = new Request.Builder()
            .url("http://customer-route-sample-deployment.192.168.99.100.nip.io/customer")
            .post(body)
            .build();
        try {
            Response response = httpClient.newCall(request).execute();
            LOGGER.info("Test: response={}", response.body().string());
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}
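If the @RouteURL injection works in your environment (for me it did on Minishift v3.7.1), the hard-coded address could be replaced with the injected url field. The snippet below is only a hypothetical sketch of that variant, not the code from the repository; whether the injected URL already contains the /customer path depends on the route definition, so it is worth logging it before relying on it.

// Hypothetical variant of building the request, assuming the @RouteURL field is injected correctly.
// Check whether the injected URL already ends with the /customer path defined in the route
// before appending anything to it.
Request request = new Request.Builder()
    .url(url.toString())
    .post(body)
    .build();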
5. Preparing deployment configuration
Before running the test we have to prepare the template with the configuration, which is loaded by Arquillian Cube using the @Template annotation. We need to create a deploymentConfig, inject the MongoDB credentials stored in the secret object into it, and finally expose the service outside the container using a route object.
kind: Template
apiVersion: v1
metadata:
  name: customer-template
objects:
  - kind: ImageStream
    apiVersion: v1
    metadata:
      name: customer-image
    spec:
      dockerImageRepository: piomin/customer-vertx-service
  - kind: DeploymentConfig
    apiVersion: v1
    metadata:
      name: customer-service
    spec:
      template:
        metadata:
          labels:
            name: customer-service
        spec:
          containers:
          - name: customer-vertx-service
            image: piomin/customer-vertx-service
            ports:
            - containerPort: 8090
              protocol: TCP
            env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
      replicas: 1
      triggers:
      - type: ConfigChange
      - type: ImageChange
        imageChangeParams:
          automatic: true
          containerNames:
          - customer-vertx-service
          from:
            kind: ImageStreamTag
            name: customer-image:latest
      strategy:
        type: Rolling
      paused: false
      revisionHistoryLimit: 2
      minReadySeconds: 0
  - kind: Service
    apiVersion: v1
    metadata:
      name: customer-service
    spec:
      ports:
      - name: "web"
        port: 8090
        targetPort: 8090
      selector:
        name: customer-service
  - kind: Route
    apiVersion: v1
    metadata:
      name: customer-route
    spec:
      path: "/customer"
      to:
        kind: Service
        name: customer-service
6. Testing inter-service communication
In the sample project the communication with other microservices is realized by the Vert.x WebClient. It takes a Kubernetes service name and its container port as parameters. It is implemented inside customer-service by AccountClient, which is then invoked inside the Vert.x HTTP route implementation. Here’s the AccountClient implementation.
public class AccountClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);

    private Vertx vertx;

    public AccountClient(Vertx vertx) {
        this.vertx = vertx;
    }

    public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
        WebClient client = WebClient.create(vertx);
        client.get(8095, "account-service", "/account/customer/" + customerId).send(res2 -> {
            LOGGER.info("Response: {}", res2.result().bodyAsString());
            List accounts = res2.result().bodyAsJsonArray().stream()
                .map(it -> Json.decodeValue(it.toString(), Account.class))
                .collect(Collectors.toList());
            resultHandler.handle(Future.succeededFuture(accounts));
        });
        return this;
    }

}
The endpoint GET /account/customer/:customerId exposed by account-service is called within the implementation of the method GET /customer/:id exposed by customer-service. This time we create a new namespace instead of using the existing one. That’s why we have to apply the MongoDB deployment configuration before applying the configuration of the sample services. We also need to upload the configuration of account-service, which is provided inside the account-deployment.yaml file. The rest of the JUnit test is pretty similar to the test described in Step 4. It waits until customer-route is available on Minishift. The only differences are in the called URL and the dynamic injection of the namespace into the route’s URL.
@Category(RequiresOpenshift.class)
@RequiresOpenshift
@RunWith(ArquillianConditionalRunner.class)
@Templates(templates = {
    @Template(url = "classpath:mongo-deployment.yaml"),
    @Template(url = "classpath:deployment.yaml"),
    @Template(url = "classpath:account-deployment.yaml")
})
public class CustomerCommunicationTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomerCommunicationTest.class);

    @ArquillianResource
    OpenShiftAssistant assistant;

    String id;

    @RouteURL(value = "customer-route")
    @AwaitRoute(timeoutUnit = TimeUnit.MINUTES, timeout = 2, path = "/customer")
    private URL url;

    // ...

    @Test
    public void testGetCustomerWithAccounts() {
        LOGGER.info("Route URL: {}", url);
        String projectName = assistant.getCurrentProjectName();
        OkHttpClient httpClient = new OkHttpClient();
        Request request = new Request.Builder()
            .url("http://customer-route-" + projectName + ".192.168.99.100.nip.io/customer/" + id)
            .get()
            .build();
        try {
            Response response = httpClient.newCall(request).execute();
            LOGGER.info("Test: response={}", response.body().string());
            Assert.assertNotNull(response.body());
            Assert.assertEquals(200, response.code());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

}
You can run the test using your IDE or just by executing the command mvn clean install.
Conclusion
Arquillian Cube comes with a neat solution for integration testing on the Kubernetes and OpenShift platforms. It is not difficult to prepare and upload a configuration with a database and microservices and then deploy it on an OpenShift node. You can even test communication between microservices just by deploying the dependent application with an OpenShift template.
Quick guide to deploying Java apps on OpenShift
In this article I’m going to show you how to deploy your applications on OpenShift (Minishift), connect them with other services exposed there, and use some other interesting deployment features provided by OpenShift. OpenShift is built on top of Docker containers and the Kubernetes container cluster orchestrator. Currently, it is the most popular enterprise platform based on those two technologies, so it is definitely worth examining in more detail.
1. Running Minishift
We use Minishift to run a single-node OpenShift cluster on the local machine. The only prerequisite before installing Minishift is having a virtualization tool installed. I use Oracle VirtualBox as a hypervisor, so I should set the --vm-driver parameter to virtualbox in my start command.
$ minishift start --vm-driver=virtualbox --memory=3G
2. Running Docker
It turns out that you can easily reuse the Docker daemon managed by Minishift, in order to be able to run Docker commands directly from your command line, without any additional installations. To achieve this just run the following command after starting Minishift.
@FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i
3. Running OpenShift CLI
The last tool required before starting any practical exercise with Minishift is the CLI, which is available under the command oc. To enable it on your command line run the following commands.
$ minishift oc-env
$ SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.9.0\windows;%PATH%
$ REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
Alternatively you can use the OpenShift web console, which is available on port 8443. On my Windows machine it is by default available at 192.168.99.100.
4. Building Docker images of the sample applications
I prepared two sample applications that are used for presenting the OpenShift deployment process. These are simple Java, Vert.x applications that provide an HTTP API and store data in MongoDB. However, the technology is not very important now. We need to build Docker images for these applications. The source code is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git) in the branch openshift (https://github.com/piomin/sample-vertx-kubernetes/tree/openshift). Here’s the sample Dockerfile for account-vertx-service.
FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE account-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
ENV DATABASE_USER mongo
ENV DATABASE_PASSWORD mongo
ENV DATABASE_NAME db
EXPOSE 8095
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]
Go to the account-vertx-service directory and run the following command to build an image from the Dockerfile shown above.
$ docker build -t piomin/account-vertx-service .
The same step should be performed for customer-vertx-service. After that you have two images built, both with the tag latest, which can now be deployed and run on Minishift.
5. Preparing OpenShift deployment descriptor
When working with OpenShift, the first step of an application’s deployment is to create a YAML configuration file. This file contains basic information about the deployment, like the containers used for running the application (1), scaling (2), triggers that drive automated deployments in response to events (3), and the strategy for deploying your pods on the platform (4).
kind: "DeploymentConfig" apiVersion: "v1" metadata: name: "account-service" spec: template: metadata: labels: name: "account-service" spec: containers: # (1) - name: "account-vertx-service" image: "piomin/account-vertx-service:latest" ports: - containerPort: 8095 protocol: "TCP" replicas: 1 # (2) triggers: # (3) - type: "ConfigChange" - type: "ImageChange" imageChangeParams: automatic: true containerNames: - "account-vertx-service" from: kind: "ImageStreamTag" name: "account-vertx-service:latest" strategy: # (4) type: "Rolling" paused: false revisionHistoryLimit: 2
Deployment configurations can be managed with the oc command like any other resource. You can create a new configuration or update an existing one by using the oc apply command.
$ oc apply -f account-deployment.yaml
You may be a little surprised, but this command does not trigger any build and does not start the pods. In fact, you have only created a resource of type deploymentConfig, which describes the deployment process. You can start this process using some other oc commands, but first let’s take a closer look at the resources required by our application.
6. Injecting environment variables
As I have mentioned before, our sample applications use an external datasource. They need to open a connection to the existing MongoDB instance in order to store the data passed to the HTTP endpoints exposed by the application. Here’s the MongoVerticle class, which is responsible for establishing the client connection with MongoDB. It uses environment variables for setting the security credentials and database name.
public class MongoVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        ConfigStoreOptions envStore = new ConfigStoreOptions()
            .setType("env")
            .setConfig(new JsonObject().put("keys", new JsonArray().add("DATABASE_USER").add("DATABASE_PASSWORD").add("DATABASE_NAME")));
        ConfigRetrieverOptions options = new ConfigRetrieverOptions().addStore(envStore);
        ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
        retriever.getConfig(r -> {
            String user = r.result().getString("DATABASE_USER");
            String password = r.result().getString("DATABASE_PASSWORD");
            String db = r.result().getString("DATABASE_NAME");
            JsonObject config = new JsonObject();
            config.put("connection_string", "mongodb://" + user + ":" + password + "@mongodb/" + db);
            final MongoClient client = MongoClient.createShared(vertx, config);
            final AccountRepository service = new AccountRepositoryImpl(client);
            ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
        });
    }

}
MongoDB is available in OpenShift’s catalog of predefined Docker images. You can easily deploy it on your Minishift instance just by clicking the “MongoDB” icon in the “Catalog” tab. A username and password will be automatically generated if you do not provide them during deployment setup. All the properties are available as the deployment’s environment variables and are stored in secrets/mongodb, where mongodb is the name of the deployment.
Environment variables can be easily injected into any other deployment using the oc set command, so they end up inside the pod after the deployment process. The following command injects all secrets assigned to the mongodb deployment into the configuration of our sample application’s deployment.
$ oc set env --from=secrets/mongodb dc/account-service
7. Importing Docker images to OpenShift
A deployment configuration is ready, so in theory we could start the deployment process. However, let’s go back for a moment to the deployment config defined in Step 5. We defined two triggers there that cause a new replication controller to be created, which results in deploying a new version of the pod. The first of them is a configuration change trigger that fires whenever changes are detected in the pod template of the deployment configuration (ConfigChange). The second of them, the image change trigger (ImageChange), fires when a new version of the Docker image is pushed to the repository. To be able to watch whether an image in the repository has been changed, we have to define and create an image stream. Such an image stream does not contain any image data, but presents a single virtual view of related images, something similar to an image repository. Inside the deployment config file we referred to the image stream account-vertx-service, so the same name should be provided inside the image stream definition. In turn, when setting the spec.dockerImageRepository field we define the Docker pull specification for the image.
apiVersion: "v1" kind: "ImageStream" metadata: name: "account-vertx-service" spec: dockerImageRepository: "piomin/account-vertx-service"
Finally, we can create the resource on the OpenShift platform.
$ oc apply -f account-image.yaml
8. Running deployment
Once the deployment configuration has been prepared, and the Docker images have been successfully imported into the repository managed by the OpenShift instance, we may trigger the rollout using the following oc commands.
$ oc rollout latest dc/account-service
$ oc rollout latest dc/customer-service
If everything goes fine, new pods should be started for the defined deployments. You can easily check it using the OpenShift web console.
9. Updating image stream
We have already created two image streams related to the Docker repositories. Here’s a screenshot from the OpenShift web console that shows the list of available image streams.
To be able to push a new version of an image to the OpenShift internal Docker registry we should first perform docker login against this registry using the user’s authentication token. To obtain the token from OpenShift use the oc whoami -t command, and then pass it to your docker login command with the -p parameter.
$ oc whoami -t
Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM
$ docker login -u developer -p Sz9_TXJQ2nyl4fYogR6freb3b0DGlJ133DVZx7-vMFM https://172.30.1.1:5000
Now, if you make any change in your application and rebuild your Docker image with the latest tag, you have to push that image to the image stream on OpenShift. The address of the internal registry has been automatically generated by OpenShift, and you can check it in the image stream’s details. For me, it is 172.30.1.1:5000.
$ docker tag piomin/account-vertx-service 172.30.1.1:5000/sample-deployment/account-vertx-service:latest
$ docker push 172.30.1.1:5000/sample-deployment/account-vertx-service
After pushing a new version of the Docker image to the image stream, a rollout of the application is started automatically. Here’s a screenshot from the OpenShift web console that shows the history of account-service deployments.
Conclusion
I have shown you the consecutive steps of deploying your application on the OpenShift platform. Based on a sample Java application that connects to a database, I illustrated how to inject credentials into that application’s pod entirely transparently for a developer. I also performed an update of the application’s Docker image, in order to show how to trigger a new deployment on an image change.
Running Vert.x Microservices on Kubernetes/OpenShift
Automatic deployment, scaling, container orchestration and self-healing have been very popular topics in recent months. This is reflected in the rapidly growing popularity of tools like Docker, Kubernetes and OpenShift. It’s hard to find a developer who hasn’t heard about these technologies. But how many of you have set up and run all those tools locally?
Despite appearances, it is not a very hard thing to do. Both Kubernetes and OpenShift provide simplified, single-node versions of their platform that allow you to create and try a local cluster, even on Windows.
In this article I’m going to guide you through all the steps that result in deploying and running microservices that communicate with each other and use MongoDB as a data source.
Technologies
Eclipse Vert.x – a toolkit for building reactive applications (and more) on the JVM. It’s a polyglot, event-driven, non-blocking and fast framework, which makes it a perfect choice for creating lightweight, high-performance microservices.
Kubernetes – an open-source system for automating deployment, scaling, and management of containerized applications. Now even the Docker platform has decided to support Kubernetes, although they are promoting their own clustering solution – Docker Swarm. You may easily run it locally using Minikube. However, we won’t use it this time. You can read an interesting article about creating Spring Boot microservices and running them on Minikube here: Microservices with Kubernetes and Docker.
Red Hat OpenShift – an open source container application platform built on top of Docker containers and Kubernetes. It is also available online at https://www.openshift.com/. You may easily run it locally with Minishift.
Getting started with Minishift
Of course, you can read the tutorials available on the Red Hat website, but I’ll try to condense the installation and configuration instructions into a few words. Firstly, I would like to point out that all the instructions apply to Windows OS.
Minishift requires a hypervisor to start the virtual machine, so first you should download and install one. If you use a solution other than Hyper-V, like I do, you have to pass the driver name when starting Minishift. The command below launches it on Oracle VirtualBox and allocates 3GB of RAM for the VM.
$ minishift start --vm-driver=virtualbox --memory=3G
The executable minishift.exe should be included in the system path. You should also have the Docker client binary installed on your machine. The Docker daemon is in turn managed by Minishift, so you can reuse it for other use cases as well. All you need to do to take advantage of this functionality is to run the following command in your shell.
$ @FOR /f "tokens=* delims=^L" %i IN ('minishift docker-env') DO @call %i
The OpenShift platform may be managed using the CLI or the web console. To enable the CLI on Windows you should add it to the path and then run one command to configure your shell. The description of the required steps is displayed after running the following command.
$ minishift oc-env
SET PATH=C:\Users\minkowp\.minishift\cache\oc\v3.7.1\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
In order to use the web console just run the command $ minishift console, which automatically opens it in your web browser. For me, it is available at https://192.168.99.100:8443/console. To check your IP just execute $ minishift ip.
Sample applications
The source code of the sample applications is available on GitHub (https://github.com/piomin/sample-vertx-kubernetes.git). In fact, a similar application has been run locally and described in the article Asynchronous Microservices with Vert.x. That article can be treated as an introduction to building microservices with the Vert.x framework in general. The current application is even simpler, because it does not have to integrate with any external discovery server like Consul.
Now, let’s take a look at the code below. It declares a verticle that establishes a client connection to MongoDB and registers a repository object as a proxy service. Such a service may be easily accessed by another verticle. The MongoDB network address is managed by Minishift.
public class MongoVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        JsonObject config = new JsonObject();
        config.put("connection_string", "mongodb://micro:micro@mongodb/microdb");
        final MongoClient client = MongoClient.createShared(vertx, config);
        final AccountRepository service = new AccountRepositoryImpl(client);
        ProxyHelper.registerService(AccountRepository.class, vertx, service, "account-service");
    }

}
That verticle can be deployed in the application’s main method. It is also important to set the property vertx.disableFileCPResolving to true if you would like to run your application on Minishift. It forces Vert.x to resolve files from its classloader in addition to the file system.
public static void main(String[] args) throws Exception {
    System.setProperty("vertx.disableFileCPResolving", "true");
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new MongoVerticle());
    vertx.deployVerticle(new AccountServer());
}
The AccountServer verticle contains simple API methods that perform CRUD operations on MongoDB.
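To give a rough picture of its structure, here’s a minimal sketch of how such a verticle might look. The exact class layout is my assumption, while the repository proxy name and the routing calls are taken from the full account-vertx-service routing code shown later in this article.

public class AccountServer extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        // Obtain the repository proxy registered by MongoVerticle under the name "account-service"
        AccountRepository repository = AccountRepository.createProxy(vertx, "account-service");
        // Declare HTTP routes that perform CRUD operations on MongoDB through the repository
        Router router = Router.router(vertx);
        router.route("/account/*").handler(ResponseContentTypeHandler.create());
        router.route(HttpMethod.POST, "/account").handler(BodyHandler.create());
        // ... GET/POST/DELETE handlers omitted here, see the full routing code later in this article
        vertx.createHttpServer().requestHandler(router::accept).listen(8095);
    }

}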
Building Docker image
Assuming you have successfully installed and configured Minishift, and cloned my sample Maven project shared on GitHub, you may proceed to the build and deploy stage. The first step is to build the applications from source code by executing the mvn clean install command on the root project. It consists of two independent modules: account-vertx-service and customer-vertx-service. Each of these modules contains a Dockerfile with the image definition. Here’s the one created for customer-vertx-service. It is based on the openjdk:8-jre-alpine image. Alpine Linux is much smaller than most distribution base images, so our resulting image is around 100MB, instead of around 600MB when using the standard OpenJDK image. Because we are generating fat JAR files during the Maven build, we only have to run the application inside the container using the java -jar command.
FROM openjdk:8-jre-alpine
ENV VERTICLE_FILE customer-vertx-service-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /usr/verticles
EXPOSE 8090
COPY target/$VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $VERTICLE_FILE"]
Once we have successfully built the project, we should navigate to the main directory of each module. The sample command below builds the Docker image of customer-vertx-service.
$ docker build -t microservices/customer-vertx-service:1.0 .
In fact, there are several different approaches to building and deploying microservices on OpenShift. For example, we could use a Maven plugin or an OpenShift definition file. The way of deploying the application discussed here is obviously one of the simplest, and it assumes using the CLI and web console for configuring deployments and services.
Deploy application on Minishift
Before proceeding to the main part of this article, deploying and running the application on Minishift, we have to provide some pre-configuration. We have to begin by logging into OpenShift and creating a new project with the oc command. Here are the two required CLI commands. The name of our first OpenShift project is microservices.
$ oc login -u developer -p developer
$ oc new-project microservices
We might as well perform the same actions using the web console. After successfully logging in you will first see a dashboard with all the available services brokered by Minishift. Let’s initialize a container with MongoDB. All the provided container settings should be the same as those configured inside the application. After creation, the MongoDB service will be available to all other services under the mongodb name.
Creating a MongoDB container managed by Minishift is only a part of the success. The most important thing is to deploy containers with the two sample microservices, where each of them has access to the database. Here as well, we may leverage two methods of resource creation: the CLI or the web console. Here are the CLI commands for creating the deployments on OpenShift.
$ oc new-app --docker-image microservices/customer-vertx-service:1.0
$ oc new-app --docker-image microservices/account-vertx-service:1.0
The commands above create not only the deployments, but also pods, and expose each of them as a service. Now you may easily scale the number of running pods by executing the following commands.
$ oc scale --replicas=2 dc customer-vertx-service
$ oc scale --replicas=2 dc account-vertx-service
The next step is to expose your service outside the container to make it publicly visible. We can achieve that by creating a route. An OpenShift route is in fact the equivalent of a Kubernetes ingress. The OpenShift web console provides an interface for creating routes, available under the section Applications -> Routes. When defining a new route you should enter its name, the name of a service, and a path on the basis of which requests are proxied. If a hostname is not specified, it is automatically generated by OpenShift.
Now, let’s take a look at the web console dashboard. There are three applications deployed: mongodb-persistent, account-vertx-service and customer-vertx-service. Both Vert.x microservices are scaled up to two running instances (Kubernetes pods), and are exposed under an automatically generated hostname with the given context path, for example http://account-route-microservices.192.168.99.100.nip.io/account.
You may check the details of every deployment by expanding it on the list view.
The HTTP API is available outside the cluster and can be easily tested. Here’s the source code of the REST API implementation for account-vertx-service.
AccountRepository repository = AccountRepository.createProxy(vertx, "account-service");
Router router = Router.router(vertx);
router.route("/account/*").handler(ResponseContentTypeHandler.create());
router.route(HttpMethod.POST, "/account").handler(BodyHandler.create());
router.get("/account/:id").produces("application/json").handler(rc -> {
    repository.findById(rc.request().getParam("id"), res -> {
        Account account = res.result();
        LOGGER.info("Found: {}", account);
        rc.response().end(account.toString());
    });
});
router.get("/account/customer/:customer").produces("application/json").handler(rc -> {
    repository.findByCustomer(rc.request().getParam("customer"), res -> {
        List accounts = res.result();
        LOGGER.info("Found: {}", accounts);
        rc.response().end(Json.encodePrettily(accounts));
    });
});
router.get("/account").produces("application/json").handler(rc -> {
    repository.findAll(res -> {
        List accounts = res.result();
        LOGGER.info("Found all: {}", accounts);
        rc.response().end(Json.encodePrettily(accounts));
    });
});
router.post("/account").produces("application/json").handler(rc -> {
    Account a = Json.decodeValue(rc.getBodyAsString(), Account.class);
    repository.save(a, res -> {
        Account account = res.result();
        LOGGER.info("Created: {}", account);
        rc.response().end(account.toString());
    });
});
router.delete("/account/:id").handler(rc -> {
    repository.remove(rc.request().getParam("id"), res -> {
        LOGGER.info("Removed: {}", rc.request().getParam("id"));
        rc.response().setStatusCode(200);
    });
});
vertx.createHttpServer().requestHandler(router::accept).listen(8095);
Inter-service communication
All the microservices are deployed and exposed outside the container. The last thing we still have to do is provide communication between them. In our sample system customer-vertx-service calls an endpoint exposed by account-vertx-service. Thanks to the Kubernetes services mechanism we may easily call another service from the application’s container, for example using a simple HTTP client implementation. Let’s take a look at the list of services exposed by Kubernetes.
Here’s the client implementation responsible for communication with account-vertx-service. The Vert.x WebClient takes three parameters when calling the GET method: port, hostname and path. We should set the Kubernetes service name as the hostname parameter, and the container’s default port as the port.
public class AccountClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(AccountClient.class);

    private Vertx vertx;

    public AccountClient(Vertx vertx) {
        this.vertx = vertx;
    }

    public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List>> resultHandler) {
        WebClient client = WebClient.create(vertx);
        client.get(8095, "account-vertx-service", "/account/customer/" + customerId).send(res2 -> {
            LOGGER.info("Response: {}", res2.result().bodyAsString());
            List accounts = res2.result().bodyAsJsonArray().stream()
                .map(it -> Json.decodeValue(it.toString(), Account.class))
                .collect(Collectors.toList());
            resultHandler.handle(Future.succeededFuture(accounts));
        });
        return this;
    }

}
AccountClient is invoked inside customer-vertx-service’s GET /customer/:id endpoint implementation.
router.get("/customer/:id").produces("application/json").handler(rc -> { repository.findById(rc.request().getParam("id"), res -> { Customer customer = res.result(); LOGGER.info("Found: {}", customer); new AccountClient(vertx).findCustomerAccounts(customer.getId(), res2 -> { customer.setAccounts(res2.result()); rc.response().end(customer.toString()); }); }); });
Summary
It is no coincidence that OpenShift is considered the leading enterprise distribution of Kubernetes. It adds several helpful features to Kubernetes that simplify its adoption by developers and operations teams. With Minishift you can easily try features like CI/CD for DevOps, multiple projects with collaboration, networking, and log aggregation from multiple pods on your local machine.