Mapping Entity relationships with JPA annotations

We will look at some practical examples of how to wire up Entity relationships using standard JPA annotations.

Generating Primary Key Id from Sequence

Consider the below SQL:
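The original DDL is not reproduced here; a sketch consistent with the GENRE table and SEQ_GENRE_ID sequence mentioned below (the NAME column is illustrative):

```sql
CREATE TABLE GENRE (
    ID BIGINT PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL
);

CREATE SEQUENCE SEQ_GENRE_ID START WITH 1 INCREMENT BY 1;
```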

I would like to auto-generate the ID of GENRE from the SEQ_GENRE_ID.

This is how my Entity looks:
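A sketch of such an entity (the name field is illustrative):

```java
import javax.persistence.*;

@Entity
@Table(name = "GENRE")
public class Genre {

    @Id
    // the generator attribute refers to the @SequenceGenerator by name
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "genreIdGenerator")
    @SequenceGenerator(name = "genreIdGenerator", sequenceName = "SEQ_GENRE_ID", allocationSize = 1)
    private Long id;

    @Column(name = "NAME")
    private String name;

    // getters and setters omitted for brevity
}
```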

Here, the @SequenceGenerator contains the name of the sequence used for ID generation. This can be omitted for databases like MySQL, which support auto-increment at the table level itself. The @GeneratedValue specifies the strategy used to generate the value.

Simple One to One Relationship

Consider the below tables:
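The table definitions are not reproduced here; based on the mapping that follows, they might look like this (the non-key columns are illustrative):

```sql
CREATE TABLE SECTION (
    ID BIGINT PRIMARY KEY,
    TITLE VARCHAR(200)
);

CREATE TABLE CHAPTER (
    ID BIGINT PRIMARY KEY,
    TITLE VARCHAR(200),
    CONTENT_ID BIGINT REFERENCES SECTION (ID)
);
```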



The simple One-to-One can be wired up by using the below 2 annotations:

@OneToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "content_id")
private Section contents;

The @JoinColumn specifies the column in the CHAPTER table that is the FOREIGN KEY to the SECTION table.

One to Many Relationship using a Mapping table

Consider the below entity:

In terms of SQL, we can have a main table called BOOK and then a mapping table called BOOK_GENRE. This mapping table would contain the IDs of BOOK and GENRE tables.

This relationship can be represented by:

@OneToMany(fetch = FetchType.EAGER)

@JoinTable(name = "BOOK_GENRE", joinColumns = @JoinColumn(name = "book_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn(name = "genre_id", referencedColumnName = "id"))

The @JoinTable takes in the name of the Mapping table. It has the below 2 attributes:

  1. joinColumns: You need to provide the FOREIGN KEY from the owning side of the relationship, in this case, BOOK
  2. inverseJoinColumns: You need to provide the FOREIGN KEY of the non-owning side of the relationship, in this case, GENRE
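Putting it together, the field on the owning entity might look like this (entity and field names assumed):

```java
@OneToMany(fetch = FetchType.EAGER)
@JoinTable(name = "BOOK_GENRE",
        joinColumns = @JoinColumn(name = "book_id", referencedColumnName = "id"),
        inverseJoinColumns = @JoinColumn(name = "genre_id", referencedColumnName = "id"))
private List<Genre> genres;
```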

Saving Entity relationships

By default, none of the Entity relations are inserted or updated. You need to explicitly specify the cascade attribute in the OneToOne or OneToMany annotation.

@OneToOne(fetch = FetchType.EAGER, cascade = { CascadeType.PERSIST, CascadeType.MERGE })
@JoinColumn(name = "main_chapter_id")
private Chapter mainChapter;

If you do not want to save the Entity, do not specify the cascade attribute:

@OneToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "author_id")
private Author author;


A working example can be found here:

How Spring-JPA sucks big time

Ok, so all I am trying to do is save a new Entity into the DB and then retrieve it using its ID.

Saving my Entity

The entity Book contains an Author, some Chapters and some Genres as shown below:
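The original entity is not reproduced here; a sketch along those lines (field, column and mapping-table names assumed; only the chapters carry cascade, matching the saving discussion earlier):

```java
import java.util.List;
import javax.persistence.*;

@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    @OneToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "author_id")
    private Author author;

    // chapters are saved along with the book
    @OneToMany(fetch = FetchType.EAGER, cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    @JoinTable(name = "BOOK_CHAPTER",
            joinColumns = @JoinColumn(name = "book_id"),
            inverseJoinColumns = @JoinColumn(name = "chapter_id"))
    private List<Chapter> chapters;

    // genres are assumed to exist already, so no cascade
    @OneToMany(fetch = FetchType.EAGER)
    @JoinTable(name = "BOOK_GENRE",
            joinColumns = @JoinColumn(name = "book_id"),
            inverseJoinColumns = @JoinColumn(name = "genre_id"))
    private List<Genre> genres;
}
```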

While saving, the assumption is that the Author and the Genres are already present in the DB. While saving a new Book, I pass all the attributes of the Book and the Chapters. However, since the Author and the Genres are already present in the DB, I am passing their IDs only and not all of their attributes; pretty much like a Foreign Key. I am illustrating this with the below JSON:
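A payload along these lines (field names and values are illustrative):

```json
{
  "title": "A Study in Scarlet",
  "author": { "id": 1 },
  "chapters": [
    { "title": "Mr. Sherlock Holmes", "content": "..." },
    { "title": "The Science of Deduction", "content": "..." }
  ],
  "genres": [ { "id": 3 }, { "id": 7 } ]
}
```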

Note that for author and genres, we are only passing their IDs and no other attributes.

Fetching the Entity I just saved

Now, after saving the new Book, when I fetch it, my expectation is that Spring-JPA fetches the entire object graph faithfully, without missing any attributes. In reality, it never even bothers to hit the DB with a query, but returns the Book from the Session Cache itself. So, when I fetch my newly saved Book, only the IDs are fetched for Author and Genre, and no other attributes.

Implementation details with Spring JPA

Repository Layer

The interface BookDao defines the contract.

The implementation looks like:
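The original code is not shown here; one possible shape of the contract and its implementation (names and transaction placement assumed from the discussion):

```java
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

public interface BookDao {
    Book saveOrUpdate(Book book);
    Book getBook(Long bookId);
}

@Repository
@Transactional // transactions start at the repository layer, as noted below
class BookDaoImpl implements BookDao {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public Book saveOrUpdate(Book book) {
        return entityManager.merge(book);
    }

    @Override
    public Book getBook(Long bookId) {
        return entityManager.find(Book.class, bookId);
    }
}
```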

Service Layer

Note that the Transactions are started in the Repository layer. So, when I do a saveOrUpdate() in my service, there are 2 transactions happening. Despite that, the Book is returned from the Session Cache and I get an incomplete object graph. Spring JPA does not give me much leverage to clear, evict or refresh the Session Cache after the insert happens.

Implementing with standard JPA

This can be handled better with plain vanilla JPA. This is how the Repository looks:

The below line is particularly important in the method getBook(Long bookId):
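A sketch of what that method might look like; my assumption is that the key line is entityManager.clear(), which empties the persistence context so that the subsequent find() actually hits the DB:

```java
public Book getBook(Long bookId) {
    // assumed key line: drop everything from the persistence context so that
    // the Book is re-read from the DB instead of the session cache
    entityManager.clear();
    return entityManager.find(Book.class, bookId);
}
```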


Without the above line, we will still get an incomplete object graph.


A working example of this can be found here:

Maven plugin for docker: embedding a Spring Boot application

We will take an example of how to use a Maven plugin for docker to embed a Spring Boot application. The Maven plugin that I am using is:

These are the basic operations that I do with this plugin:

  1. Given a Dockerfile, I create a docker image by saying mvn dockerfile:build. This builds the docker image from the jar file in my target folder
  2. After building the docker image, I would like to push it to my docker hub repository by using the command mvn dockerfile:push.

This is how I define the docker plugin in my pom.xml:
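The goals dockerfile:build and dockerfile:push suggest Spotify's dockerfile-maven-plugin; a configuration sketch (version and repository name are assumptions):

```xml
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.4.13</version>
    <configuration>
        <!-- the docker hub repository to push to -->
        <repository>my-docker-id/my-spring-boot-app</repository>
        <tag>${project.version}</tag>
    </configuration>
</plugin>
```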

Note that I have defined my docker repository details in the repository tag.

This is how the Dockerfile looks:
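The original Dockerfile is not reproduced here; a minimal sketch for a Spring Boot jar (jar name and base image are assumptions):

```dockerfile
FROM openjdk:8-jre-alpine
COPY target/my-spring-boot-app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```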

Providing encrypted credentials to Maven

In addition to the above, I also need to provide my credentials for the docker registry. And, of course, the password should be encrypted. Maven supports encryption. I have largely followed the below link:

Step 1: Create a master password

Create a master password by typing:
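The Maven command for this is:

```shell
mvn --encrypt-master-password
```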

Say, it gives you the below output:


Create a file: ~/.m2/settings-security.xml with the following content:
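The file wraps the encrypted master password from the previous step (placeholder shown here):

```xml
<settingsSecurity>
  <master>{encrypted-master-password-goes-here}</master>
</settingsSecurity>
```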

Step 2: Create an encrypted password

Create the encrypted password for your user. Type:
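```shell
mvn --encrypt-password
```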

Say, the output is:


In the ~/.m2/settings.xml file, add the below lines under the servers tag.
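A sketch of the server entry; the id must match the registry referenced by the docker plugin, and the username and password are placeholders:

```xml
<server>
  <id>docker.io</id>
  <username>my-docker-id</username>
  <password>{encrypted-password-goes-here}</password>
</server>
```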

Now, you should be all set. Use the below command to build the docker image and then push it:
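Assuming the dockerfile plugin goals mentioned earlier, something like:

```shell
mvn clean package dockerfile:build dockerfile:push
```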


A working example of this can be found here:

Accessing docker on a tcp port for non-root users

Well, it seems that running docker for non-root users is trivial. You can just add your user to the docker group as mentioned below:
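```shell
# add the current user to the docker group, then log out and back in
sudo usermod -aG docker $USER
```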

However, it might not be the best idea, as there are security implications:

In the default docker installation, dockerd listens on a Unix socket: /var/run/docker.sock, which, in some Linux distros like CentOS and RHEL, can only be accessed by the root user or users in the sudo group. This becomes an issue, for example, when we try to run docker through a Maven plugin.

The solution is to enable the docker daemon to listen on a tcp socket. This can be done by:
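On a systemd-based distro, edit the docker unit file (the exact path may vary by distro):

```shell
sudo vi /lib/systemd/system/docker.service
```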

Edit the below line as shown:

ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://localhost:2375

This tells the docker daemon to listen on port 2375 for tcp connections. Next, reload the configuration and restart dockerd:
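```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```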

To test whether it is working, do:
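For example, hit the version endpoint of the Remote API:

```shell
curl http://localhost:2375/version
```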

Now, you should be able to run docker as a non-root user, if you do:
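```shell
docker -H tcp://localhost:2375 ps
```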

Better still, you can define the below variable:

export DOCKER_HOST=tcp://localhost:2375

With that, the below command should work fine:
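```shell
docker ps
```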

Note that now, we can run the below Maven plugin without any issue:


Enabling Docker Remote API on Ubuntu 16.04

Quick Tip – How to enable Docker Remote API?


JPA: Creating EntityManager without persistence.xml

For JPA to work, we need to define a persistence.xml inside META-INF. I always found it pretty cumbersome and difficult to maintain, especially so for my integration tests. Wouldn't it be cool if we could do everything from Java code?

Here you go:

Using only Hibernate
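One way to do this with Hibernate alone is a sketch like the following; it assumes Hibernate 5.2+, where SessionFactory extends EntityManagerFactory, and an in-memory H2 database:

```java
import java.util.Properties;
import javax.persistence.EntityManagerFactory;
import org.hibernate.cfg.Configuration;

public class HibernateOnlyEntityManagerFactory {

    public static EntityManagerFactory create() {
        Configuration configuration = new Configuration();
        // register entities through code instead of persistence.xml
        configuration.addAnnotatedClass(Genre.class);

        Properties properties = new Properties();
        properties.put("hibernate.connection.driver_class", "org.h2.Driver");
        properties.put("hibernate.connection.url", "jdbc:h2:mem:test");
        properties.put("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
        properties.put("hibernate.hbm2ddl.auto", "create");
        configuration.addProperties(properties);

        // in Hibernate 5.2+, SessionFactory extends EntityManagerFactory
        return configuration.buildSessionFactory();
    }
}
```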

Using Spring-JPA
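With Spring, LocalContainerEntityManagerFactoryBean lets you scan packages for entities without any persistence.xml (class and package names are illustrative):

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaVendorAdapter;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class JpaConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setDataSource(dataSource);
        // entities are discovered by scanning, no persistence.xml needed
        factory.setPackagesToScan("com.example.entity");
        factory.setJpaVendorAdapter(jpaVendorAdapter());
        return factory;
    }

    @Bean
    public JpaVendorAdapter jpaVendorAdapter() {
        HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();
        adapter.setGenerateDdl(true);
        return adapter;
    }
}
```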

Note that you can add your entities through code. In order for this to compile, you would need to include the below dependencies in maven:

The complete sources can be found here:

This particular Java file is:

Apache httpclient: Reading SSL with self-signed certificates

When we try to access SSL sites secured with self-signed certificates using apache httpclient, we get the below exception:

Exception in thread "main" PKIX path building failed: unable to find valid certification path to requested target

We will attempt to work around this problem. First, we will run a docker image that has a Tomcat8 with a self-signed certificate (refer to

Check that the link is accessible: https://localhost:9090/docs/security-howto.html

The below code will ignore the self-signed certificate security issue and allow us to access this site:
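One way to do this with Apache HttpClient 4.x (my sketch, not necessarily the original code) is to build an SSLContext that trusts self-signed certificates and to disable hostname verification; only do this for testing:

```java
import javax.net.ssl.SSLContext;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContexts;
import org.apache.http.util.EntityUtils;

public class SelfSignedSslClient {

    public static void main(String[] args) throws Exception {
        // trust self-signed certificates
        SSLContext sslContext = SSLContexts.custom()
                .loadTrustMaterial(null, new TrustSelfSignedStrategy())
                .build();
        // skip hostname verification as well; never do this in production
        SSLConnectionSocketFactory socketFactory =
                new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);

        try (CloseableHttpClient client = HttpClients.custom()
                .setSSLSocketFactory(socketFactory)
                .build();
             CloseableHttpResponse response =
                     client.execute(new HttpGet("https://localhost:9090/docs/security-howto.html"))) {
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```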

Here are the Maven dependencies:
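Something like (the version is an assumption):

```xml
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.13</version>
</dependency>
```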

The sources for this example can be found here:



Tomcat 8: SSL configuration with self-signed certificate

Download and unpack a Tomcat8 distribution. Let's say the location is /usr/local/tomcat8.

First, we will create a self-signed certificate using the java keytool. This is the command:

keytool -genkey -noprompt -trustcacerts -keyalg RSA -alias tomcat -dname "CN=Palash Ray, OU=Demo, O=Swayam, L=Bangalore, ST=Karnataka, C=IN" -keypass changeme -keystore /usr/local/tomcat8/keystore/my_keystore -storepass changeme

This will create the keypair at the location /usr/local/tomcat8/keystore/my_keystore.

Now, go to the /usr/local/tomcat8/conf directory. In the server.xml, look for commented lines:

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" …

Uncomment that and replace it with:
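A connector along these lines, pointing at the keystore created above; the attribute values mirror the keytool command, so tweak as needed:

```xml
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/usr/local/tomcat8/keystore/my_keystore"
           keystorePass="changeme"
           clientAuth="false" sslProtocol="TLS" />
```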


You should be all set now. Save the server.xml and start tomcat. Go to: https://localhost:8443

This can be embedded into a docker image. This is how the Dockerfile would look:
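A sketch of such a Dockerfile, assuming the official tomcat:8 base image and that the keystore and the modified server.xml sit next to it:

```dockerfile
FROM tomcat:8
COPY my_keystore /usr/local/tomcat/keystore/my_keystore
COPY server.xml /usr/local/tomcat/conf/server.xml
EXPOSE 8443
```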

The sources can be found here:

The docker image can be found here:

You can run the image by using:

docker pull paawak/self-signed-tomcat8

docker run -d -p 9090:8443 paawak/self-signed-tomcat8




Creating a Java 8 Stream from unbounded data using Spliterator

Problem Statement

I have a large XML file. I would like to read it, then group-by and aggregate the rows in it using Java 8. A DOM parser with JAXB will not be able to handle this, as it's a really large file. I would like to create a Stream from the unbounded data contained in the XML file.


I read the XML by streaming with Stax. Since I do not load the entire file into memory, I am good. I go a step further and use JAXB to unmarshal small portions of this file, each of which I will call a row. I use a Spliterator backed by a BlockingQueue to create a Stream out of it. Once I have the stream, I apply the famous grouping-by function and aggregate the rows.


The sample XML looks like this:
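The sample is not reproduced here; a sketch of its shape (element and field names other than T are illustrative):

```xml
<data>
  <T>
    <category>books</category>
    <amount>10</amount>
  </T>
  <T>
    <category>music</category>
    <amount>5</amount>
  </T>
</data>
```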

There would be thousands of elements "T". I have modeled my POJO on the element "T". I use Stax to read the xml. When I read one element "T", I use JAXB to unmarshal it to a Java object and then add it to the Stream.


I have modeled the POJO as below:
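The original POJO is not shown here; a sketch mapping the element "T" (field names are illustrative):

```java
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "T")
@XmlAccessorType(XmlAccessType.FIELD)
public class Row {

    private String category;
    private long amount;

    // a no-arg constructor and getters/setters are required by JAXB;
    // omitted here for brevity
}
```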

The Stax Parser

The heart of this is the Stax parser:
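The original parser is not reproduced here; a simplified, self-contained sketch of the idea that pushes rows onto a BlockingQueue. For brevity it extracts each text-only <T> element with plain Stax rather than JAXB, and drops the CountDownLatch:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StaxRowReader {

    // Reads each <T> element from the XML and puts its text onto the queue.
    public static void parse(String xml, BlockingQueue<String> queue) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        StringBuilder current = null;
        while (reader.hasNext()) {
            int event = reader.next();
            if (event == XMLStreamConstants.START_ELEMENT && "T".equals(reader.getLocalName())) {
                current = new StringBuilder();
            } else if (event == XMLStreamConstants.CHARACTERS && current != null) {
                current.append(reader.getText().trim());
            } else if (event == XMLStreamConstants.END_ELEMENT && "T".equals(reader.getLocalName())) {
                queue.put(current.toString()); // hand the row over to the consumer
                current = null;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        parse("<data><T>alpha</T><T>beta</T></data>", queue);
        System.out.println(queue.size()); // prints 2
    }
}
```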


I use the CountDownLatch only because I need my JUnit to be alive till the document is read fully. It would not be needed in an actual server environment. Note the usage of the BlockingQueue.

Spliterator implementation
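The original implementation is not shown here; a minimal sketch of a Spliterator backed by a BlockingQueue, using a poison-pill sentinel to signal the end of the unbounded data (names are my own):

```java
import java.util.Spliterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class QueueSpliterator<T> implements Spliterator<T> {

    private final BlockingQueue<T> queue;
    private final T poisonPill;

    public QueueSpliterator(BlockingQueue<T> queue, T poisonPill) {
        this.queue = queue;
        this.poisonPill = poisonPill;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            T next = queue.take();   // blocks until the producer offers a row
            if (next == poisonPill) {
                return false;        // identity check: producer signalled end of stream
            }
            action.accept(next);
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    @Override
    public Spliterator<T> trySplit() {
        return null;                 // no parallel splitting
    }

    @Override
    public long estimateSize() {
        return Long.MAX_VALUE;       // unbounded data
    }

    @Override
    public int characteristics() {
        return ORDERED | NONNULL;
    }

    public static <T> Stream<T> stream(BlockingQueue<T> queue, T poisonPill) {
        return StreamSupport.stream(new QueueSpliterator<>(queue, poisonPill), false);
    }
}
```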


The grouping logic

This part is very simple. We actually stream a GZip file by using a GZIPInputStream:
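The grouping itself is the standard Collectors.groupingBy (the GZip streaming part is not reproduced here); a minimal sketch with hypothetical (category, value) rows:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingDemo {

    public static void main(String[] args) {
        // hypothetical rows: (category, value) pairs
        List<String[]> rows = Arrays.asList(
                new String[] { "books", "10" },
                new String[] { "music", "5" },
                new String[] { "books", "7" });

        // group by the first field and sum the second
        Map<String, Long> totals = rows.stream()
                .collect(Collectors.groupingBy(r -> r[0],
                        Collectors.summingLong(r -> Long.parseLong(r[1]))));

        System.out.println(totals.get("books")); // prints 17
    }
}
```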



I found some large xmls from the below location:


Building a REST Server with Spring MVC

We would like to build a REST server with Spring MVC. It should be very simple to support various formats like JSON and XML for the same request, just by changing the Accept header. For example, I have the below url:


It should return either json or xml or some other format, depending on whether the Accept header is set to application/json or application/xml respectively.

Let's see how to achieve that.

Configuration of Spring MVC

We will use pure Java configuration:
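A sketch of such a configuration class (package names are illustrative; WebMvcConfigurerAdapter is from Spring 4.x):

```java
import java.util.List;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.http.converter.xml.Jaxb2RootElementHttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "com.example.web")
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        // POJO -> JSON
        converters.add(new MappingJackson2HttpMessageConverter());
        // POJO -> XML, driven by @XmlRootElement on the POJO
        converters.add(new Jaxb2RootElementHttpMessageConverter());
    }
}
```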

Note the use of WebMvcConfigurerAdapter. It comes in handy when you want to work with Spring MVC. Especially noteworthy is the configureMessageConverters() method: you would use it to configure a REST service. It defines how Spring handles the @ResponseBody or @RestController annotation to translate a POJO into the response type: json, xml, etc. Here, we are using MappingJackson2HttpMessageConverter to convert our POJOs to JSON and Jaxb2RootElementHttpMessageConverter to convert them to XML.


Note the use of @XmlRootElement. This is absolutely necessary, as we are using Jaxb2RootElementHttpMessageConverter to convert our POJOs to xml. If you omit this, you will get an "Error 406 Not Acceptable" error, the underlying cause being:

org.springframework.web.HttpMediaTypeNotAcceptableException: Could not find acceptable representation
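A minimal POJO carrying the annotation might look like this (fields are illustrative):

```java
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Book {

    private Long id;
    private String title;

    // a no-arg constructor and getters/setters are required by JAXB
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}
```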


The controller is very simple, and returns the POJO. It is up to the HttpMessageConverter to make sense of it and convert it to either json or xml.

This makes perfect sense, as the controller can just return the model, and the conversion can be a configuration detail.
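A controller sketch along those lines (paths and names assumed):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BookController {

    @GetMapping("/book/{id}")
    public Book getBook(@PathVariable Long id) {
        // return the model; the registered HttpMessageConverters take care of
        // rendering it as json or xml based on the Accept header
        Book book = new Book();
        book.setId(id);
        book.setTitle("Some Book");
        return book;
    }
}
```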

Alternate ways of specifying the desired response type

Spring gives us the flexibility of doing away with the Accept header to specify the type of response. If we want json output, we can simply say:


For xml, we can similarly say:



The sources can be found here:

Spring Java Config

After Spring came out with annotation-based Java configuration, I found it very handy. Get rid of the xml Spring configs: the Java configs are refactoring-safe, more readable and less verbose. Here are some of the examples that I used:

Configuration of Jdbc Connection Pool
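The original config is not shown here; a sketch using commons-dbcp2 (any pooling DataSource would do, and the connection details are placeholders):

```java
import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:test");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        // pool sizing
        dataSource.setInitialSize(5);
        dataSource.setMaxTotal(20);
        return dataSource;
    }
}
```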

Configuration of Spring MVC

Note the use of WebMvcConfigurerAdapter, the same approach covered in the REST server section above: the configureMessageConverters() method registers MappingJackson2HttpMessageConverter to convert our POJOs to JSON and Jaxb2RootElementHttpMessageConverter to convert them to XML.

Excluding a specific class from the config

Sometimes it so happens that we would like to selectively disable a couple of classes from the annotation config. This is how it is done:
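One way is the excludeFilters attribute of @ComponentScan (the excluded class names are illustrative):

```java
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.ComponentScan.Filter;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.FilterType;

@Configuration
@ComponentScan(basePackages = "com.example",
        excludeFilters = @Filter(type = FilterType.ASSIGNABLE_TYPE,
                classes = { LegacyConfig.class, NoisyService.class }))
public class AppConfig {
}
```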



The sources can be found here: