JPA Data Access with Micronaut Data

When I was writing some articles comparing the Spring and Micronaut frameworks recently, I took note of many comments about the lack of built-in ORM and data repository support in Micronaut. Spring has provided this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close to completing work on the first version of their project with ORM support. The project, called Micronaut Predator (short for Precomputed Data Repositories), is still under active development, and currently we may access only the snapshot version. However, the authors introduce it as more efficient, with reduced memory consumption, than competing solutions like Spring Data or Grails GORM. In short, this is achieved thanks to Ahead of Time (AoT) compilation, which pre-computes queries for repository interfaces that are then executed by a thin, lightweight runtime layer, avoiding the use of reflection or runtime proxies. Continue reading “JPA Data Access with Micronaut Data”

Kotlin Microservices with Micronaut, Spring Cloud and JPA

Micronaut Framework provides support for Kotlin built upon the Kapt compiler plugin. It also implements the most popular cloud-native patterns, like distributed configuration, service discovery and client-side load balancing. These features allow you to include an application built on top of Micronaut in an existing microservices-based system. The most popular example of such an approach may be integration with the Spring Cloud ecosystem. If you have already used Spring Cloud, it is very likely you built your microservices-based architecture using the Eureka discovery server and Spring Cloud Config as a configuration server. Beginning with version 1.1, Micronaut supports both of these popular tools, which are part of the Spring Cloud project. That’s good news, because in version 1.0 the only supported distributed solution was Consul, and there was no possibility to use Eureka discovery together with the Consul property source (running them together ended with an exception). Continue reading “Kotlin Microservices with Micronaut, Spring Cloud and JPA”

Secure Spring Cloud Microservices with Vault and Nomad

One of the significant topics related to microservices security is managing and protecting sensitive data like tokens, passwords or certificates used by your applications. As a developer you probably often implement software that connects to external databases, message brokers or just other applications. How do you store the credentials used by your application? To be honest, most of the software I have seen in my life just stored sensitive data as plain text in configuration files. Thanks to that, I was always able to retrieve the credentials to every database I needed at a given time just by looking at the application source code. Of course, we can always encrypt sensitive data, but if we work with many microservices, each having a separate database, it may not be a very comfortable solution.

Today I’m going to show you how to integrate your Spring Boot application with HashiCorp’s Vault in order to store your sensitive data properly. The first good news is that you don’t have to create any keys or certificates for encryption and decryption, because Vault will do it for you. In a few places in this article I’ll refer to my previous article about HashiCorp’s solutions, Deploying Spring Cloud Microservices on HashiCorp’s Nomad. Now, as then, I deploy my sample applications on Nomad to take advantage of the built-in integration between those two very interesting HashiCorp tools. We will also use another HashiCorp solution for service discovery in inter-service communication – Consul. It’s also worth mentioning that Spring Cloud provides a dedicated project for integration with Vault – Spring Cloud Vault.

Architecture

The sample presented in this article consists of two applications deployed on HashiCorp’s Nomad: callme-service and caller-service. Microservice caller-service calls an endpoint exposed by callme-service. Inter-service communication is performed using the name of the target application registered in the Consul server. Microservice callme-service stores the history of all interactions triggered by caller-service in a database. The credentials to the database are stored in Vault. Nomad is integrated with Vault and stores the root token, which is not visible to the applications. The architecture of the described solution is shown in the picture below.

vault-1

The current sample is pretty similar to the sample presented in my article Deploying Spring Cloud Microservices on HashiCorp’s Nomad. It is also available in the same repository on GitHub, sample-nomad-java-service, but in a different branch, vault. The current sample adds integration with PostgreSQL and the Vault server for managing database credentials.

1. Running Vault

We will run Vault inside a Docker container in development mode. A server in development mode does not require any further setup; it is ready to use just after startup. It provides in-memory encrypted storage and an unsecured (HTTP) connection, which is not a problem for demo purposes. We can override the default server IP address and listening port by setting the environment property VAULT_DEV_LISTEN_ADDRESS, but we won’t do that. After startup our instance of Vault is available on port 8200. We can use the admin web console, which in my case is available at http://192.168.99.100:8200. The current version of Vault is 1.0.0.

$ docker run --cap-add=IPC_LOCK -d --name vault -p 8200:8200 vault

It is possible to log in using different methods, but the most suitable way for us is through a token. To do that we have to display the container logs using the command docker logs vault, and then copy the Root Token as shown below.

vault-1

Now you can log in to the Vault web console.

vault-2

2. Integration with Postgres database

In Vault we can create a secrets engine that connects to other services and generates dynamic credentials on demand. Each secrets engine is enabled under a specific path. There are dedicated engines for various databases, for example PostgreSQL. Before activating such an engine we should run an instance of the Postgres database. This time we will also use a Docker container. It is possible to set the login and password for the database using environment variables.

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=postgres123456 -e POSTGRES_USER=postgres postgres

After starting the database, we may proceed to the engine configuration in the Vault web console. First, let’s create our first secrets engine. We may choose between several different types of engines. The right choice for now is Databases.

vault-3
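Alternatively, the same engine may be enabled from the CLI with a single command (a sketch assuming the default database/ mount path):

$ vault secrets enable database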

You can apply a new configuration to Vault using the vault CLI command or through the HTTP API. The Vault web console provides a terminal for running CLI commands, but it can be problematic in some cases. For example, I had problems with escaping strings in some SQL commands, and therefore I had to add them using the HTTP API. No matter which method you use, the next steps are the same. Following the Vault documentation, we first need to configure the plugin for the PostgreSQL database and then provide the connection settings and credentials.

$ vault write database/config/postgres plugin_name=postgresql-database-plugin allowed_roles="default" connection_url="postgresql://{{username}}:{{password}}@192.168.99.100:5432?sslmode=disable" username="postgres" password="postgres123456"

Alternatively, you can perform the same action using the HTTP API. To authenticate against Vault we need to add the X-Vault-Token header containing the root token. I have disabled SSL for the connection with Postgres by setting sslmode=disable. There is only one role allowed to use this plugin: default. Now, let’s configure that role.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"plugin_name": "postgresql-database-plugin","allowed_roles": "default","connection_url": "postgresql://{{username}}:{{password}}@localhost:5432?sslmode=disable","username": "postgres","password": "postgres123456"}' http://192.168.99.100:8200/v1/database/config/postgres

The role can be created either with the CLI or with the HTTP API. The name of the role should be the same as the name passed in the allowed_roles field in the previous step. We also have to set the target database name and the SQL statements that create a user with the required privileges.

$ vault write database/roles/default db_name=postgres creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" default_ttl="1h" max_ttl="24h"

Alternatively you can call the following HTTP API endpoint.

$ curl --header "X-Vault-Token: s.44GiacPqbV78fNbmoWK4mdYq" --request POST --data '{"db_name":"postgres", "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";"]}' http://192.168.99.100:8200/v1/database/roles/default

And that’s all. Now we can test our configuration using the command vault read database/creds/default, where default is the name of the role, as shown below. You can log in to the database using the returned credentials. By default, they are valid for one hour.

vault-5
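In a terminal, the returned secret has roughly the following shape (a sketch; the generated values are replaced with placeholders here):

$ vault read database/creds/default
Key                Value
---                -----
lease_id           database/creds/default/<generated-lease-id>
lease_duration     1h
lease_renewable    true
password           <generated-password>
username           <generated-username>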

3. Enabling Spring Cloud Vault

We have successfully configured a secrets engine that is responsible for creating users in Postgres. Now we can proceed to the development and integrate our application with Vault. Fortunately, there is the Spring Cloud Vault project, which provides out-of-the-box integration with Vault database secrets engines. The only thing we have to do is include Spring Cloud Vault in our project and provide some configuration settings. Let’s start by setting the Spring Cloud Release Train version. We use the newest stable version, Finchley.SR2.

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Finchley.SR2</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

We have to include two dependencies in our pom.xml. The starter spring-cloud-starter-vault-config is responsible for loading configuration from Vault, while spring-cloud-vault-config-databases is responsible for integration with the database secrets engines.

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>

The sample application also connects to a Postgres database, so we will include the following dependencies as well.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<version>42.2.5</version>
</dependency>

The only remaining thing is to configure the integration with Vault via Spring Cloud Vault. The following configuration settings should be placed in bootstrap.yml (not application.yml). Because we run our application on Nomad, we use the port number dynamically set by Nomad, available in the environment property NOMAD_HOST_PORT_http, and the secret token from Vault, available in the environment property VAULT_TOKEN.

server:
  port: ${NOMAD_HOST_PORT_http:8091}

spring:
  application:
    name: callme-service
  cloud:
    vault:
      uri: http://192.168.99.100:8200
      token: ${VAULT_TOKEN}
      postgresql:
        enabled: true
        role: default
        backend: database
  datasource:
    url: jdbc:postgresql://192.168.99.100:5432/postgres

The important part of the configuration shown above is under the property spring.cloud.vault.postgresql. Following the Spring Cloud documentation: “Username and password are stored in spring.datasource.username and spring.datasource.password so using Spring Boot will pick up the generated credentials for your DataSource without further configuration”. Spring Cloud Vault connects to Vault, and then uses the role default (previously created in Vault) to generate new database credentials. Those credentials are injected into the spring.datasource properties, and the application connects to the database using them. Everything works fine, and no static database password appears anywhere in the application code or configuration.
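To illustrate this, here is a minimal sketch of how callme-service could persist the history of interactions (the names CallmeEntry and CallmeEntryRepository are invented for this example; the real model in the sample repository may differ). Note that there is no credential handling anywhere in the code:

import java.util.Date;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

import org.springframework.data.repository.CrudRepository;

@Entity
public class CallmeEntry {

	@Id
	@GeneratedValue
	private Long id;
	private String caller; // name of the calling service
	@Temporal(TemporalType.TIMESTAMP)
	private Date createdAt; // timestamp of the interaction

	// getters and setters omitted for brevity

}

// Spring Data JPA generates the implementation; the underlying DataSource
// is already configured with the credentials generated by Vault
interface CallmeEntryRepository extends CrudRepository<CallmeEntry, Long> {
}

Now, let’s try to run our applications on Nomad.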

4. Deploying apps on Nomad

Before starting the Nomad node we should also run Consul using its Docker container. Here’s the Docker command that starts a single-node Consul instance.

$ docker run -d --name consul -p 8500:8500 consul

After that we can configure the connection settings for Consul and Vault in the Nomad configuration. I have created the file nomad.conf. Nomad authenticates itself against Vault using the root token. The connection with Consul is not secured. Sometimes it is also required to set the network interface name and the total CPU on the machine for the Nomad client. Most clients are able to determine these values automatically, but that did not work for me.

client {
  network_interface = "Połączenie lokalne 4"
  cpu_total_compute = 10400
}

consul {
  address = "192.168.99.100:8500"
}

vault {
  enabled = true
  address = "http://192.168.99.100:8200"
  token = "s.6jhQ1WdcYrxpZmpa0RNd0LMw"
}

Let’s run Nomad in development mode, passing the configuration file location.

$ nomad agent -dev -config=nomad.conf

If everything works fine, you should see logs similar to the ones below on startup.

vault-6

Once we have successfully started the Nomad agent integrated with Consul and Vault, we can proceed to the application deployment. First, build the whole project with the mvn clean install command. The next step is to prepare Nomad’s job descriptor file. For more details about the Nomad deployment process and its descriptor file you can refer to my previous article (mentioned in the preface of this article). The descriptor file is available in the application’s GitHub repository under the path callme-service/job.nomad for callme-service, and caller-service/job.nomad for caller-service.

job "callme-service" {
	datacenters = ["dc1"]
	type = "service"
	group "callme" {
		count = 2
		task "api" {
			driver = "java"
			config {
				jar_path    = "C:\\Users\\minkowp\\git\\sample-nomad-java-services-idea\\callme-service\\target\\callme-service-1.0.0-SNAPSHOT.jar"
				jvm_options = ["-Xmx256m", "-Xms128m"]
			}
			resources {
				cpu    = 500 # MHz
				memory = 300 # MB
				network {
					port "http" {}
				}
			}
			service {
				name = "callme-service"
				port = "http"
			}
			vault {
				policies = ["nomad"]
			}
		}
		restart {
			attempts = 1
		}
	}
}

You will have to change the value of the jar_path property to the path of your application binaries. Before applying this deployment to Nomad we have to add some additional configuration in Vault. When adding the integration with Vault we have to pass the names of the policies used for checking permissions. I set the policy with the name nomad, which now has to be created in Vault. Our application requires permission to read the paths /secret/* and /database/*, as shown below.

vault-7
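For reference, here is a minimal sketch of such a policy written in Vault’s HCL (assuming that read access is sufficient for both paths):

# policy "nomad" - grants the applications read access to the required paths
path "secret/*" {
  capabilities = ["read"]
}

path "database/*" {
  capabilities = ["read"]
}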

Finally, we can deploy our application callme-service on Nomad by executing the following command.

$ nomad job run job.nomad

A similar descriptor file is available for caller-service, so we can deploy it as well. All the microservices have been registered in Consul, as shown below.

vault-8

Here is the list of registered instances of caller-service. As you can see in the picture below, it is available under port 25816.

vault-9

You can also take a look at the Nomad jobs view.

vault-10

Up-to-date cache with EclipseLink and Oracle

One of the most useful features provided by ORM libraries is the second-level cache, usually called L2. The L2 object cache reduces database access for entities and their relationships. It is enabled by default in the most popular JPA implementations, like Hibernate or EclipseLink. That won’t be a problem unless a table in the database is modified directly by third-party applications, or by another instance of the same application in a clustered environment. One of the available solutions to this problem is an in-memory data grid, which stores all data in memory and is distributed across many nodes inside a cluster. Tools like Hazelcast or Apache Ignite have been described several times on my blog. If you are interested in one of those tools, I recommend you read one of my previous articles about them: Hazelcast Hot Cache with Striim.

However, we won’t discuss them in this article. Today, I would like to talk about the Continuous Query Notification (CQN) feature provided by Oracle Database. It solves the problem of updating or invalidating the cache when the data changes in the database. Oracle JDBC drivers have provided support for it since 11g Release 1. This functionality is based on receiving invalidation events from the JDBC drivers. Fortunately, EclipseLink builds on that feature in a solution called EclipseLink Database Change Notification (DCN). In this article I’m going to show you how to implement it using Spring Data JPA together with the EclipseLink library.

How it works

The most useful functionality provided by Oracle Database Continuous Query Notification is the ability to raise database events when rows in a table are modified. It enables client applications to register queries with the database and receive notifications in response to DML or DDL changes on the objects associated with those queries. To detect modifications, EclipseLink DCN uses the Oracle ROWID to intercept changes in the table. The ROWID is included in all queries for a DCN-enabled class. EclipseLink also retrieves the ROWID of a saved entity after an insert operation, and maintains a cache index on that ROWID. It also selects the database transaction ID once per transaction to avoid invalidating the cache while the transaction is still being processed. A rough sketch of such a registration at the plain JDBC level is shown after the list below.

When the database sends a notification, it usually contains the following information:

  • The names of the modified objects, for example the name of a changed table
  • The type of change; the possible values are INSERT, UPDATE, DELETE, ALTER TABLE, or DROP TABLE
  • Oracle’s ROWID of the changed record
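EclipseLink performs this registration for you, but it may be instructive to see what it looks like at the plain JDBC level. Here is a rough sketch using the oracle.jdbc.dcn API (the table name is taken from the sample model described later, and error handling is omitted):

import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

public class DcnRegistrationSketch {

	public static void register(OracleConnection connection) throws Exception {
		Properties options = new Properties();
		// include ROWIDs of the changed records in the notifications
		options.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
		DatabaseChangeRegistration dcr = connection.registerDatabaseChangeNotification(options);
		// the listener is called whenever a registered table changes
		dcr.addListener(event -> System.out.println("Change received: " + event));
		// every table queried by a statement associated with the registration
		// becomes registered for change notification
		try (Statement statement = connection.createStatement()) {
			((OracleStatement) statement).setDatabaseChangeRegistration(dcr);
			try (ResultSet rs = statement.executeQuery("SELECT id FROM jpa_order")) {
				while (rs.next()) { /* consume the result set to complete registration */ }
			}
		}
	}

}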

Running Oracle database locally

Before starting work on our sample application we need to have an Oracle database installed. Fortunately, there are some Docker images with Oracle Standard Edition 12c. The command visible below starts the database and exposes it on the default port 1521. It is also possible to use the web console, available under port 9080.

$ docker run -d --name oracle -p 9080:8080 -p 1521:1521 sath89/oracle-12c

We need the sysdba role in order to be able to grant the CHANGE NOTIFICATION privilege to our application’s database user. The default password for the user system is oracle.

GRANT CHANGE NOTIFICATION TO PIOMIN;

You may use any Oracle client, like Oracle SQL Developer, to connect to the database, or just log in to the web console. Since I run Docker on Windows, it is available on my laptop at http://192.168.99.100:9080/em. Of course, it is Oracle, so you need to settle in for a long haul and wait until it starts. You can observe the progress of the installation by running the command docker logs -f oracle. When you finally see a “100% complete” log entry, you may grant the required privileges to an existing user or create a new one with the set of needed privileges, and proceed to the next step.

Sample application

The sample application source code is available on GitHub at https://github.com/piomin/sample-eclipselink-jpa.git. It is a Spring Boot application that uses Spring Data JPA as the data access layer implementation. Because the JPA provider used in this project is EclipseLink, we should remember to exclude the Hibernate libraries from the starters spring-boot-starter-data-jpa and spring-boot-starter-web. Besides the standard EclipseLink library for JPA, we also have to include the EclipseLink implementation for Oracle database (org.eclipse.persistence.oracle) and the Oracle JDBC driver. A sketch of the Hibernate exclusion is shown after the dependency list below.

<dependency>
	<groupId>org.eclipse.persistence</groupId>
	<artifactId>org.eclipse.persistence.jpa</artifactId>
	<version>2.7.1</version>
</dependency>
<dependency>
	<groupId>org.eclipse.persistence</groupId>
	<artifactId>org.eclipse.persistence.oracle</artifactId>
	<version>2.7.1</version>
</dependency>
<dependency>
	<groupId>com.oracle</groupId>
	<artifactId>ojdbc7</artifactId>
	<version>12.1.0.1</version>
</dependency>

The next step is to provide the connection settings for the Oracle database launched as a Docker container. Do not try to do it through application.yml properties, because Spring Boot by default uses HikariCP for connection pooling, which in turn causes a conflict with the Oracle datasource during application bootstrap. The following datasource declaration works successfully.

@Bean
public DataSource dataSource() {
	final DriverManagerDataSource dataSource = new DriverManagerDataSource();
	dataSource.setDriverClassName("oracle.jdbc.driver.OracleDriver");
	dataSource.setUrl("jdbc:oracle:thin:@192.168.99.100:1521:xe");
	dataSource.setUsername("piomin");
	dataSource.setPassword("Piot_123");
	return dataSource;
}

EclipseLink with Database Change Notification

EclipseLink needs some specific configuration settings to work successfully with Spring Boot and Spring Data JPA. These settings may be provided inside a @Configuration class that extends the JpaBaseConfiguration class. First, we should set EclipseLinkJpaVendorAdapter as the default JPA vendor adapter. Then, we may configure some additional JPA settings, like a detailed logging level or the automatic creation of database objects during application startup. However, the most important part of the source code fragment visible below is the Oracle Continuous Query Notification settings.

EclipseLink CQN support is enabled by the OracleChangeNotificationListener listener, which integrates with Oracle JDBC in order to receive database change notifications. The full class name of the listener should be passed as the value of the eclipselink.cache.database-event-listener property. EclipseLink by default enables the L2 cache for all entities, and accordingly all tables in the persistence unit are registered for change notification. You may exclude some of them by using the databaseChangeNotificationType attribute of the @Cache annotation on the selected entity (a sketch of such an exclusion follows the configuration class below).

@Configuration
@EnableAutoConfiguration
public class JpaConfiguration extends JpaBaseConfiguration {

	protected JpaConfiguration(DataSource dataSource, JpaProperties properties,
			ObjectProvider<JtaTransactionManager> jtaTransactionManager,
			ObjectProvider<TransactionManagerCustomizers> transactionManagerCustomizers) {
		super(dataSource, properties, jtaTransactionManager, transactionManagerCustomizers);
	}

	@Override
	protected AbstractJpaVendorAdapter createJpaVendorAdapter() {
		return new EclipseLinkJpaVendorAdapter();
	}

	@Override
	protected Map<String, Object> getVendorProperties() {
	    Map<String, Object> map = new HashMap<>();
	    map.put(PersistenceUnitProperties.WEAVING, InstrumentationLoadTimeWeaver.isInstrumentationAvailable() ? "true" : "static");
	    map.put(PersistenceUnitProperties.DDL_GENERATION, "create-or-extend-tables");
	    map.put(PersistenceUnitProperties.LOGGING_LEVEL, SessionLog.FINEST_LABEL);
	    map.put(PersistenceUnitProperties.DATABASE_EVENT_LISTENER, "org.eclipse.persistence.platform.database.oracle.dcn.OracleChangeNotificationListener");
	    return map;
	}

}
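As mentioned above, a selected entity may be excluded from change notification with the @Cache annotation. Here is a minimal sketch (the AuditLog entity is invented for this example):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.DatabaseChangeNotificationType;

@Entity
// do not register this entity's table for Oracle change notification
@Cache(databaseChangeNotificationType = DatabaseChangeNotificationType.NONE)
public class AuditLog {

	@Id
	private Long id;

}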

It is worth mentioning that EclipseLink’s CQN integration has some important limitations:

  • Changes to an object’s secondary tables will not trigger its invalidation unless a version is used and updated in the primary table
  • Changes to an object’s OneToMany, ManyToMany, and ElementCollection relationships will not trigger its invalidation unless a version is used and updated in the primary table

The conclusion from these limitations is obvious: we should enable optimistic locking by including an @Version field in our entities. The column mapped with @Version in the primary table will always be updated, and the object will always be invalidated. There are three entities implemented. The entity Order is in a many-to-one relationship with the Product and Customer entities. All these classes have @Version enabled.

@Entity
@Table(name = "JPA_ORDER")
public class Order {

	@Id
	@SequenceGenerator(sequenceName = "SEQ_ORDER", allocationSize = 1, initialValue = 1, name = "orderSequence")
	@GeneratedValue(generator = "orderSequence", strategy = GenerationType.SEQUENCE)
	private Long id;
	@ManyToOne
	private Customer customer;
	@ManyToOne
	private Product product;
	@Enumerated
	private OrderStatus status;
	private int count;

	@Version
	private long version;

	public Long getId() {
		return id;
	}

	public void setId(Long id) {
		this.id = id;
	}

	public Customer getCustomer() {
		return customer;
	}

	public void setCustomer(Customer customer) {
		this.customer = customer;
	}

	public Product getProduct() {
		return product;
	}

	public void setProduct(Product product) {
		this.product = product;
	}

	public OrderStatus getStatus() {
		return status;
	}

	public void setStatus(OrderStatus status) {
		this.status = status;
	}

	public int getCount() {
		return count;
	}

	public void setCount(int count) {
		this.count = count;
	}

	public long getVersion() {
		return version;
	}

	public void setVersion(long version) {
		this.version = version;
	}

	@Override
	public String toString() {
		return "Order [id=" + id + ", product=" + product + ", status=" + status + ", count=" + count + "]";
	}

}

Testing

After launching the application you should see the following logs, generated at the Finest level.

[EL Finest]: connection: 2018-03-23 15:45:50.591--ServerSession(465621833)--Thread(Thread[main,5,main])--Registering table [JPA_PRODUCT] for database change event notification.
[EL Finest]: connection: 2018-03-23 15:45:50.608--ServerSession(465621833)--Thread(Thread[main,5,main])--Registering table [JPA_CUSTOMER] for database change event notification.
[EL Finest]: connection: 2018-03-23 15:45:50.616--ServerSession(465621833)--Thread(Thread[main,5,main])--Registering table [JPA_ORDER] for database change event notification.

The registrations are stored in the table user_change_notification_regs, which is available to your application’s user (PIOMIN).

SELECT regid, table_name FROM user_change_notification_regs;
     REGID TABLE_NAME
---------- ---------------------------------------------------------------
       326 PIOMIN.JPA_PRODUCT
       326 PIOMIN.JPA_CUSTOMER
       326 PIOMIN.JPA_ORDER

Our sample application exposes Swagger documentation of the API, which may be accessed at http://localhost:8090/swagger-ui.html. You can create or find some entities using it. If you try to find the same entity several times, you will see that only the first invocation generates an SQL query in the logs, while all the others are served from the cache. Now, try to change that record using any Oracle client, like Oracle SQL Developer, and verify that the cache has been successfully refreshed.
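For example, an update issued directly from an SQL client (the row id here is hypothetical) should result in the invalidation of the cached Order entity. Incrementing the version column keeps the row consistent with optimistic locking:

UPDATE jpa_order SET status = 1, version = version + 1 WHERE id = 1;
COMMIT;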

eclipse-link-1

Summary

When I first heard about Oracle Database Change Notification support in the EclipseLink JPA vendor, my expectations were really high. It is a very interesting solution, which guarantees automatic cache refresh after changes performed on database tables by third-party applications bypassing your cache. However, I had some problems with this solution during my tests. In some cases it just didn’t work, and finding the cause of those errors was really troublesome. It would be nice if such a solution were also available for databases other than Oracle and for JPA vendors other than EclipseLink, like Hibernate.