Lookup Tiller Version in Kubernetes Using Kubectl

This is just a simple command to help you find the Tiller version running in Kubernetes.  I made it when trying to make sure my laptop’s Helm install matched the cluster’s Tiller install:

TILLER_POD=`kubectl get pods -n kube-system | grep tiller | awk '{print $1}'`
kubectl exec -n kube-system $TILLER_POD -- /tiller -version
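
If your local kubeconfig already points at the cluster, Helm itself can report both sides of this comparison.  A minimal sketch, assuming Helm 2 (since Tiller is in play):

helm version --short   # prints both the Client and the Server (Tiller) versions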

Debugging Spring Boot Multi-Stage Docker Build – JAR Extract Not Working

Containerized Spring Boot – Multi-Stage Build

I was following this good tutorial on deploying Spring Boot apps to Kubernetes:

https://spring.io/guides/gs/spring-boot-kubernetes/

Their Dockerfile looks like this:

FROM openjdk:8-jdk-alpine AS builder
WORKDIR target/dependency
ARG APPJAR=target/*.jar
COPY ${APPJAR} app.jar
RUN jar -xf ./app.jar

FROM openjdk:8-jre-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY --from=builder ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=builder ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=builder ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.example.demo.DemoApplication"]

So, it is using a multi-stage Docker build.  The reasoning for this seems to be described well in this other article: https://spring.io/blog/2018/11/08/spring-boot-in-a-container.  Basically:

A Spring Boot fat jar naturally has “layers” because of the way that the jar itself is packaged.  If we unpack it first, it will already be divided into external and internal dependencies.  The builder stage of the Dockerfile above does exactly that unpacking as part of the build (sticking with Maven, but the Gradle version is pretty similar).

There are now 3 layers, with all the application resources in the later 2 layers. If the application dependencies don’t change, then the first layer (from BOOT-INF/lib) will not change, so the build will be faster, and so will the startup of the container at runtime as long as the base layers are already cached.
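
To actually produce the JAR and run this build, the overall flow is roughly the following.  This is just a sketch; it assumes the Maven wrapper is present and uses an arbitrary image tag of "demo":

./mvnw clean package        # build the fat JAR into target/
docker build -t demo .      # run the multi-stage Dockerfile above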

Build Failing

In my case, I was getting an error like this:

Step 9/12 : COPY --from=0 ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY failed: stat /var/lib/docker/overlay2/72b76f414c0b527f00df9b17f1e87afb4109fa6448a1466655aeaa5e9e358e27/merged/target/dependency/BOOT-INF/lib: no such file or directory

This was a little confusing.  At first, I assumed that the overlay file system path didn’t exist.  When I tried to go there (to the 72… folder), it in fact was not present.  This was a red herring though.

It is failing to copy BOOT-INF/ content.  So, on a hunch, I assumed the 72… folder path was ephemeral.  If that is true, then it did exist at some point, and it simply did not have any BOOT-INF content in it.

So, the next step was to debug the intermediate image for the multi-stage build.  In my Dockerfile, the first line is:

FROM openjdk:8-jdk-alpine AS builder

So, the name of the intermediate build stage is “builder”.  We can stop the build at that stage to interrogate it by doing this:

docker build . --target=builder
Sending build context to Docker daemon   17.2MB
Step 1/5 : FROM openjdk:8-jdk-alpine AS builder
 ---> a3562aa0b991
Step 2/5 : WORKDIR target/dependency
 ---> Using cache
 ---> 1dbeb7fad4c0
Step 3/5 : ARG APPJAR=target/*.jar
 ---> Using cache
 ---> 67ba4a4a1863
Step 4/5 : COPY ${APPJAR} app.jar
 ---> Using cache
 ---> d9da25b3bc23
Step 5/5 : RUN jar -xf ./app.jar
 ---> Using cache
 ---> b513f7190731
Successfully built b513f7190731

And then interactively running in the built image (b513f7190731):

docker run -it b513f7190731

In my case, I could clearly see that basically no files existed in the /target/dependency folder.  There was only a JAR that was never extracted by the “jar -xf ./app.jar” command.

Debugging the JAR Extraction

I tried to manually run the jar extraction command myself, and it gave no error and did absolutely nothing.  After that, I went to make sure the JAR was valid.  Its size looked fine, but when I opened it with “vi”, I could see that the header was a bash script.
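
A quick way to see this without opening the file in vi is to look at the first few bytes: a normal JAR (which is just a ZIP) starts with "PK", while an executable Spring Boot JAR starts with an embedded shell script.  A small sketch, run next to the copied app.jar:

head -c 4 app.jar    # prints "PK.." for a plain JAR, "#!/b" if a launch script is embedded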

This is because my Spring Boot project had the spring-boot-maven-plugin in its POM with executable = true; this embeds a bash script at the front of the JAR to allow it to run as a Linux init.d service.
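
For reference, the problematic part of my POM looked something like this (an illustrative snippet rather than my exact file; the executable flag is what embeds the launch script):

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <!-- this setting embeds the init.d launch script at the front of the JAR -->
        <executable>true</executable>
    </configuration>
</plugin>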

So, my first move was to remove the plugin altogether.  This did not help; after that, the JAR did extract, but it had no BOOT-INF folder.  Because I had removed the plugin, Spring Boot was no longer building the fat JAR with its dependencies.  So, I had to put the plugin back in like this:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
</plugin>

This is the plugin with the executable = true setting removed.  It builds the fat JAR without the embedded bash script, which allows the “jar -xf ./app.jar” command to properly extract the contents, including the BOOT-INF dependencies.
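
Before re-running the Docker build, you can sanity-check the rebuilt JAR from the project root; if the fat JAR is packaged correctly, you should see BOOT-INF entries listed.  For example:

jar -tf target/*.jar | grep BOOT-INF | head    # list a few BOOT-INF entries from the fat JAR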

This was not a problem in the tutorial because it relies on projects generated from Initializr rather than existing projects you made yourself, which may have been built with different use cases in mind.

Summary

I hope this helped you learn about putting Spring Boot in Docker, and that it shed some light on how to debug a multi-stage Docker build!  I knew a fair bit about Docker and Spring Boot going into this, but combining them revealed some process differences I had taken for granted over the years.  So, it was a good experience.

Kubernetes Run Bash / Get Logs for Pod in Namespace

Here are some useful commands for managing pods within a given namespace.

List Pods in Namespace “qaas”

This will show you all the available pods in the given namespace.

$> kubectl get pods -n qaas
NAME                                   READY   STATUS    RESTARTS   AGE
dapper-kudu-pgadmin-694bdf8c7f-ljcc4   1/1     Running   0          9h
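
If the namespace has many pods, you can narrow the listing down with a simple grep, for example:

kubectl get pods -n qaas | grep pgadmin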

Get Logs for Pod

After finding the pod name, you can view its logs centrally from Kubernetes, which is very cool.

$> kubectl logs dapper-kudu-pgadmin-694bdf8c7f-ljcc4 -n qaas
::ffff:10.237.183.230 - - [01/Nov/2019:13:18:26 +0000] "GET / HTTP...
::ffff:10.237.183.230 - - [01/Nov/2019:13:18:26 +0000] "GET /login?...
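
If you want to stream the logs live, or only see the most recent lines, kubectl also supports the -f (follow) and --tail flags:

kubectl logs -f --tail=100 dapper-kudu-pgadmin-694bdf8c7f-ljcc4 -n qaas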

Exec Into Pod And Run a Shell

This will let you get into the pod with a shell (like BASH) so that you can look around and see what’s going on.

$> kubectl exec -it dapper-kudu-pgadmin-694bdf8c7f-ljcc4 /bin/sh -n qaas
/pgadmin4 #

Minikube – Setup Kubernetes UI

If you have not yet installed Minikube, refer to this blog first: https://coding-stream-of-consciousness.com/2018/11/08/installing-minikube-on-windows-10/.

Assuming you have Minikube running, open another command prompt (I used an administrator one).  Then run:

kubectl proxy

Once that is up, you can go to this link: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/node?namespace=default and the Kubernetes UI should be available.

The Kubernetes documentation may recommend a different link that does not work. I believe this is probably because Minikube is out of date compared to the current Kubernetes releases; but maybe it is just a documentation error.

For the record in case it changes in the future, the link from the Kubernetes documentation is: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.
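
As an aside, Minikube also has a built-in command that tries to open the dashboard for you; depending on your Minikube version it may or may not work, but it is worth a try before falling back to the proxy URL above:

minikube dashboard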

Installing Minikube on Windows 10

Installing Minikube on Windows 10 was not as straightforward as I had expected.  I tried it at home and gave up originally (instead using https://labs.play-with-k8s.com/).

I wanted to get it set up properly for a new job though, so I worked through the issues.  In case your issues are different from mine, the troubleshooting links referenced throughout this post are extremely enlightening.

Much of the online documentation I found was outdated.  The actual GitHub repository was fine though, so I recommend using the instructions there: https://github.com/kubernetes/minikube.  Mine are based on those and the other links referenced in this post.

Before We Start – If You’re Already in a Bad State

If you’ve already started installing Minikube and it failed, or if you have been hacking at this with my instructions and are stuck, you may end up needing to delete the C:\Users\<user-name>\.minikube and C:\Users\<user-name>\.kube directories to clean things up.  I had to do this as I messed up the virtual switch set-up originally.  Don’t do this unless you actually get stuck though.

You can’t do this folder clean-up if the Hyper-V minikube VM is still running (you can see that in the Hyper-V Manager).  If you try to delete the directories while it’s running, it will partially delete files and then you’re in a state where you can’t even stop the VM on purpose!  To fix that, you need to kill the processes related to the VM and set it to not automatically start up in the Hyper-V Manager.  Then you can remove the VM and try again.

Some detailed information on how to kill a hung Hyper-V VM is here: http://woshub.com/how-to-stop-a-hung-virtual-machine-on-hyper-v-2016/. This was painful to fix.

The Minikube start command will hang if you’re in these bad states, so you really do need to delete those directories to get it set up cleanly.
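
For reference, a minimal clean-up sketch from an administrative command prompt might look like this (assuming default install locations, and only if you really are stuck; note that minikube delete itself can hang if the VM is stuck, in which case you need the Hyper-V clean-up described above first):

minikube delete
rmdir /s /q "%USERPROFILE%\.minikube"
rmdir /s /q "%USERPROFILE%\.kube"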

Warning: Shut Down Issues

I found after I had this all working that (1) my PC was having trouble restarting (it would never stop), and (2) if I tried to do “minikube stop”, it would hang forever.  It turns out the current version of Minikube has issues noted here: https://github.com/kubernetes/minikube/issues/2914.  So, to shut it down you need to do “minikube ssh” and then “sudo poweroff”.  Even trying to manage it from Hyper-V will not work properly.  They recommend downgrading to v0.27; the version I saw the issue with is v0.30.  I haven’t tried the downgrade yet.  So, these instructions will get it working with v0.30, but you will potentially have this shut-down issue.
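
For clarity, that shut-down workaround looks like this:

minikube ssh
sudo poweroff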

For now, I’m personally just going to stay on v0.30, but I configured Hyper-V to NOT automatically start the minikube VM when my PC boots.  If I use this too often and they don’t fix this issue, I may downgrade at a later date though.

Installation Steps

Here is the full set of steps I had to use to get Minikube correctly running on my Windows 10 desktop:

  • Enable the “Hyper-V” Windows feature (I also enabled Containers and Windows Hypervisor Platform; they probably aren’t needed though).
  • Go into Hyper-V manager (search your start menu) and:
    • Click Virtual Switch Manager
    • Create a new virtual switch of type “External”.
    • Name it “External”.
    • Set it to use the “External network” and pick the NIC you commonly use.
    • Press OK.
  • Get an administrative command prompt open.
  • Install https://chocolatey.org/ if you don’t already use it; it’s like yum or apt-get in Linux.
  • choco install minikube
  • choco install kubernetes-cli
  • minikube start --vm-driver "hyperv" --hyperv-virtual-switch "External" --v=7
    • Note that we’re telling it to use Hyper-V and the switch we created.
    • We’re also setting a very verbose log level.

If you have issues after this, you may want to clean up and try again with the information I provided before the steps.

Validation

Let’s try and launch something to make sure it’s working (with instructions copied from the Git Hub link).

  • Launch the “hello-minikube” sample image as a pod.
    • kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
  • Expose it as a service.
    • kubectl expose deployment hello-minikube --type=NodePort
  • Verify the pod is up and running (wait if it is still starting).
    • kubectl get pod
  • Ask Kubernetes for the service URL:
    • minikube service hello-minikube --url
  • Hit that URL in your browser or with curl; you should get back a bunch of text with CLIENT VALUES at the top of it (see the example after this list).
  • Hopefully all that is working.  So, let’s remove the service and deployment we created:
    • kubectl delete service hello-minikube
    • kubectl delete deployment hello-minikube
  • Now you have Minikube running, and it’s a clean copy.
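
As a concrete example of hitting the service, assuming a bash-style shell such as Git Bash is available:

curl $(minikube service hello-minikube --url)   # the response should start with CLIENT VALUES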

I hope that helps you get started!

User Interface

Refer to this blog for the quick user-interface setup instructions: https://coding-stream-of-consciousness.com/2018/11/08/minikube-set-up-kubernetes-ui/.

Doing Upgrades

You will want to update Minikube and the Kubernetes CLI after a while.  You can check their versions with the status command, and then you can update them easily with Chocolatey.

  • minikube status
  • choco upgrade kubernetes-cli
  • choco upgrade minikube
  • minikube update-context