Launch Spring-Boot JAR From Different Main Class

I found this very useful Stack Overflow answer:

https://stackoverflow.com/a/36552613/857994

It shows you how to start a spring-boot JAR using a different main class than the one in its manifest.  It’s a quirky solution, but it worked great.  Here’s a slightly cleaner copy of the command (mostly for my own future reference):

java -cp presto-ws-3.2.2.jar -Dloader.main=com.company.PrestoQueryRunner org.springframework.boot.loader.PropertiesLauncher
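This works because every Spring Boot fat JAR packages the loader classes at its root, and PropertiesLauncher resolves the main class from the loader.main property instead of the Start-Class manifest entry.  If you’d rather not pass a system property, the Spring Boot docs say PropertiesLauncher will also pick up loader.main from a loader.properties file in the launch directory; a sketch using the same JAR and class as above:

# loader.properties, in the directory you launch from
loader.main=com.company.PrestoQueryRunner

java -cp presto-ws-3.2.2.jar org.springframework.boot.loader.PropertiesLauncher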

Debugging Spring Boot Multi Stage Docker Build – JAR Extract Not Working

Containerized Spring Boot – Multi Stage Build

I was following this good tutorial on deploying spring-boot apps to Kubernetes:

https://spring.io/guides/gs/spring-boot-kubernetes/

Their Dockerfile looks like this:

FROM openjdk:8-jdk-alpine AS builder
WORKDIR target/dependency
ARG APPJAR=target/*.jar
COPY ${APPJAR} app.jar
RUN jar -xf ./app.jar

FROM openjdk:8-jre-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY --from=builder ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=builder ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=builder ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.example.demo.DemoApplication"]

So, it is using a multi-stage docker build.  The reasoning for this is described well in this other article: https://spring.io/blog/2018/11/08/spring-boot-in-a-container.  Basically:

A Spring Boot fat jar naturally has “layers” because of the way that the jar itself is packaged. If we unpack it first it will already be divided into external and internal dependencies. To do this in one step in the docker build, we need to unpack the jar first. For example (sticking with Maven, but the Gradle version is pretty similar):
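The example that article gives is the manual unpack step (the Dockerfile above does the same thing with its “RUN jar -xf ./app.jar” line); roughly, assuming the standard Maven target/ layout:

mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)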

There are now 3 layers, with all the application resources in the later 2 layers. If the application dependencies don’t change, then the first layer (from BOOT-INF/lib) will not change, so the build will be faster, and so will the startup of the container at runtime as long as the base layers are already cached.

Build Failing

In my case, I was getting an error like this:

Step 9/12 : COPY --from=0 ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY failed: stat /var/lib/docker/overlay2/72b76f414c0b527f00df9b17f1e87afb4109fa6448a1466655aeaa5e9e358e27/merged/target/dependency/BOOT-INF/lib: no such file or directory

This was a little confusing.  At first I assumed that the overlay file system path didn’t exist, and when I tried to go there (to the 72… folder), it was in fact not present.  This was a red herring though.

The actual failure is copying the BOOT-INF/ content.  So, on a hunch, I assumed the 72… folder path was ephemeral: it did exist at some point during the build, but it never had any BOOT-INF content in it.

So, the next step was to debug the intermediate image from the multi-stage build.  In my Dockerfile, the first line is:

FROM openjdk:8-jdk-alpine AS builder

So, the name of the intermediate build stage is “builder”.  We can stop the build at that stage to interrogate it by doing this:

docker build . --target=builder
Sending build context to Docker daemon   17.2MB
Step 1/5 : FROM openjdk:8-jdk-alpine AS builder
 ---> a3562aa0b991
Step 2/5 : WORKDIR target/dependency
 ---> Using cache
 ---> 1dbeb7fad4c0
Step 3/5 : ARG APPJAR=target/*.jar
 ---> Using cache
 ---> 67ba4a4a1863
Step 4/5 : COPY ${APPJAR} app.jar
 ---> Using cache
 ---> d9da25b3bc23
Step 5/5 : RUN jar -xf ./app.jar
 ---> Using cache
 ---> b513f7190731
Successfully built b513f7190731

And then interactively running in the built image (b513f7190731):

docker run -it b513f7190731

In my case, I could clearly see that basically no files existed in the /target/dependency folder.  There was only the JAR itself; the “jar -xf ./app.jar” command had never actually extracted it.
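For future reference, here is the quick way to poke around that stage: the alpine base image drops you into a shell, and the Dockerfile’s WORKDIR means you land right in target/dependency (the image ID is the one from the build output above):

docker run -it --rm b513f7190731 /bin/sh
ls -la    # in my case: only app.jar, with no BOOT-INF/ or META-INF/ extracted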

Debugging the JAR Extraction

I tried running the jar extraction command manually, and it gave no error and did absolutely nothing.  After that, I went to make sure the JAR was valid.  Its size looked fine, but when I opened it with “vi”, I could see that the file started with a bash script rather than a normal JAR header.

This is because my spring-boot project had the spring-boot-maven-plugin in its POM with executable = true; this embeds a bash launch script at the front of the JAR so that it can run as a Linux init.d service.
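For context, the offending configuration looks something like the following (the executable flag is the relevant part; everything else in my real POM is elided):

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
    </configuration>
</plugin>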

So, my first move was to remove the plugin altogether.  This did not help: after that, the JAR did extract, but it had no BOOT-INF folder.  The plugin is what repackages the build into a fat JAR in the first place, so removing it meant Spring Boot was no longer bundling the dependencies at all.  So, I had to put the plugin back in like this:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
</plugin>

This is the same plugin with executable = true removed.  It builds the fat JAR without the embedded bash script, which allows the “jar -xf ./app.jar” command to properly extract the contents, including the BOOT-INF dependencies.

This was not a problem in the tutorial because it relies on projects generated by the Spring Initializr rather than existing projects you made yourself, which may have different use cases baked in.

Summary

I hope this helped you learn about putting spring-boot in docker, and that it shed some light on how to debug a multi-stage docker build!  I knew a fair bit about docker and spring-boot going into this, but combining them revealed some process differences I had taken for granted over the years.  So, it was a good experience.

Spring Boot @Autowired Abstract/Base Class

Properly using (and not overusing) abstract classes is vital to having a cleanly organized and DRY application.  Developers tend to get confused when trying to inject services into abstract or base classes in Spring Boot though.

Setter injection actually works fine in base classes; the commonly noted caveat is that you should use the final keyword on the setter to prevent subclasses from overriding it.

public abstract class AbstractSchemaService {
    ...
    private StringShorteningService shortener;

    @Autowired
    public final void setStringShorteningService(StringShorteningService shortener) {
        this.shortener = shortener;
    }
    ...
}
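To round out the example, a concrete subclass needs no wiring code of its own; Spring calls the inherited @Autowired setter while constructing the concrete bean.  Here is a minimal sketch with a hypothetical subclass name:

import org.springframework.stereotype.Service;

@Service
public class PostgresSchemaService extends AbstractSchemaService {
    // No injection code needed here: Spring invokes the inherited
    // @Autowired setStringShorteningService(...) when creating this bean.
}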

Spring – Time Out REST HTTP Calls With RestTemplate

No Timeouts By Default!

Spring’s RestTemplate is an extremely convenient way to make REST calls to web services.  But most people don’t realize initially that these calls have no timeout by default.  This means no connection timeout and no read timeout.  So, potentially, a call that should take 1 second can freeze your app for a very long time if the back end is behaving badly.
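To make that concrete: a plain “new RestTemplate()” uses SimpleClientHttpRequestFactory, whose connect and read timeouts both default to -1, which falls through to the JDK default of no timeout at all.  The quickest (if less clean) fix is to set them directly on the factory; a sketch with example timeout values:

import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
factory.setConnectTimeout(2000); // ms allowed to establish the connection
factory.setReadTimeout(5000);    // ms allowed to wait for data once connected
RestTemplate template = new RestTemplate(factory);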

Setting a Timeout

There are a lot of ways of doing this, but the best one I’ve seen recently (from a Stack Overflow post) is to create the RestTemplate in an @Configuration class and then inject it into your services.  That way, you know the RestTemplate you are using everywhere was configured properly with your desired timeouts.

Here is a full example.

package com.company.cloudops.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;
import java.time.Duration;

@Configuration
public class AppConfig {

    @Value("${rest.template.timeout}") private int restTemplateTimeoutMs;

    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofMillis(restTemplateTimeoutMs))
                .setReadTimeout(Duration.ofMillis(restTemplateTimeoutMs))
                .build();
    }
}

To use this RestTemplate in another Spring bean class, just pull it in with:

@Autowired private RestTemplate template;
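The @Value annotation in the config class reads the timeout from configuration, so the last piece is a properties entry; a hypothetical application.properties line (5000 is just an example value):

rest.template.timeout=5000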

Connection Pooling With Spring 2.0 Hikari – Verify Idle Timeouts are Working

Use Case

I’ve been working on an odd API project where each user needs their own connection to various back-end databases/data-sources.  This is a break from the norm: in general, you set up a connection pool of, say, 10 connections that everyone shares, and you’re golden.

If you have 500 users throughout the day, though, and each one gets their own connections, that would be a disaster.  So, in my case, keeping the pool small and making sure the idle timeout actually works is pretty vital.  So, I started playing around to see how I could verify that old connections are really being removed.

My Configuration

I had started with an Apache Commons DBCP BasicDataSource (old habits die hard).  But when I enabled debug logging, I didn’t see connections being dropped, or any info on them being logged at all for that matter.  Before bothering with trace logging, I started reading about Hikari, which is the connection pool Spring Boot 2 uses by default… and it looked pretty awesome!  See some good performance and usage info right here.

Anyway!  I switched to Hikari quickly, which was easy since it’s already included in Spring Boot 2.x (which I habitually use for everything these days).

Here’s my Spring config class/code.  It is set up to allow a minimum of 0 idle connections, to time out idle connections after 60 seconds, and to cap the pool at 4 connections.  Connections are tested with “select 1”, which is pretty safe on most databases.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import javax.sql.DataSource;

@Configuration
public class Config {

    //Configuration for our general audit data source.
    private @Value("${audit.ds.url}") String auditDsUrl;
    private @Value("${audit.ds.user}") String auditDsUser;
    private @Value("${audit.ds.password}") String auditDsPassword;

    @Bean
    public DataSource auditDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(auditDsUrl);
        config.setUsername(auditDsUser);
        config.setPassword(auditDsPassword);
        config.setMaximumPoolSize(4);
        config.setMinimumIdle(0);
        config.setIdleTimeout(60000);
        config.setConnectionTestQuery("select 1");
        config.setPoolName("Audit Pool");
        config.setValidationTimeout(10000);
        return new HikariDataSource(config);
    }

    @Bean
    public NamedParameterJdbcTemplate auditJdbcTemplate() {
        return new NamedParameterJdbcTemplate(auditDataSource());
    }
}
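As a quick usage sketch (the service name, table, and query here are hypothetical), any bean can then inject the template and run named-parameter queries against the audit data source:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class AuditService {

    @Autowired private NamedParameterJdbcTemplate auditJdbcTemplate;

    public int countSuccessfulAudits() {
        // Named parameters (:status) are bound through a MapSqlParameterSource.
        return auditJdbcTemplate.queryForObject(
                "select count(*) from audit_log where status = :status",
                new MapSqlParameterSource("status", "OK"),
                Integer.class);
    }
}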

Verifying it Works

After sending a query to my API, which uses a basic JDBC template to execute the query, I saw the logs do this (note that I removed the date/time/class/etc. for brevity):

Audit Pool - Before cleanup stats (total=0, active=0, idle=0, waiting=0)
Audit Pool - After cleanup stats (total=0, active=0, idle=0, waiting=0)
Audit Pool - Before cleanup stats (total=1, active=0, idle=1, waiting=0)
Audit Pool - After cleanup stats (total=1, active=0, idle=1, waiting=0)
Audit Pool - Before cleanup stats (total=1, active=0, idle=1, waiting=0)
Audit Pool - After cleanup stats (total=1, active=0, idle=1, waiting=0)
Audit Pool - After cleanup stats (total=0, active=0, idle=0, waiting=0)
Audit Pool - Closing connection PG...: (connection has passed idleTimeout)

So, we can see that the pool went from 0 total connections to 1.  The connection shows as idle almost immediately because the query was short and finished before the next stats log line.  Then, after a minute, the connection gets closed and the total goes back to 0.

So, we’re correctly timing out idle connections using our settings.  Also, we’re getting our pool name (Audit Pool) in the logs, which is awesome too!
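As a side note, if you only need one data source, you don’t need a custom @Bean at all; Spring Boot 2 auto-configures a Hikari pool, and the equivalent settings can live in configuration.  A sketch mirroring the values above (assuming the standard spring.datasource.hikari.* property binding):

spring.datasource.hikari.maximum-pool-size=4
spring.datasource.hikari.minimum-idle=0
spring.datasource.hikari.idle-timeout=60000
spring.datasource.hikari.connection-test-query=select 1
spring.datasource.hikari.pool-name=Audit Pool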