Fn / Function Key Lock on Dell Laptops (Running Ubuntu which is Irrelevant)

This isn’t much of a post, but I suspect some people will find it useful =).

If your laptop is acting as if the Fn key is always held down – e.g. the F1-F12 keys perform their special actions (change volume, or whatever) instead of their normal purpose – then try pressing Fn + Esc. Depending on your model, this may toggle off the “Function Key Lock”.

On my laptop there is actually a picture of a lock with the letters “Fn” within it to indicate that it locks the function keys on.  But I didn’t notice that in advance and had to google more than I would have liked to figure it out.

Linux – Neatly Manage SSH Sessions Like MRemoteNG on Windows

SSH technically works the same anywhere, but on Windows, without special tooling, using SSH can be cumbersome as the CLI is not very… elegant.  Generally, on Windows, people use tools like PuTTY or mRemoteNG (also PuTTY-based) to make things neater.

If you’re on Linux, you can use SSH out of the box very cleanly by leveraging the config file to create named hosts.

For example, this verbose command:

ssh -i ~/.ssh/prod-key.pem centos@10.20.30.40

Can be replaced by the following command – and the nice “pgadmin-prod” name even auto-completes.  So, if you have 100 servers for different things, you can still find them easily.

ssh pgadmin-prod

This works if you add the following block into your ~/.ssh/config file.

Host pgadmin-prod
User centos
HostName 10.20.30.40
IdentityFile ~/.ssh/prod-key.pem

Where:

  • Host = the user-friendly name I want to use for this connection.
  • User = the user I normally have to log in as.
  • HostName = the IP address of the target host.
  • IdentityFile = the path to the private key used for the connection.

You can have as many named hosts as you like and, again, their names will auto complete after the ssh command – so they’re very easy to use.
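A handy way to double-check your named hosts is ssh’s -G flag, which prints the fully resolved config for a host without actually connecting. Here’s a sketch using a throwaway config file with two hypothetical hosts (the second name and IP are made up for illustration):

```shell
# Write a throwaway config with two hypothetical named hosts
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host pgadmin-prod
    User centos
    HostName 10.20.30.40
    IdentityFile ~/.ssh/prod-key.pem

Host pgadmin-qa
    User centos
    HostName 10.20.30.41
    IdentityFile ~/.ssh/qa-key.pem
EOF

# ssh -G prints the resolved config for a host without connecting,
# which is handy for confirming an alias does what you expect.
ssh -G -F "$cfg" pgadmin-qa | grep -iE '^(user|hostname) '
rm -f "$cfg"
```

Without -F, ssh -G reads your normal ~/.ssh/config, so `ssh -G pgadmin-prod` is a quick sanity check after editing it.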

This gets even more effective if you combine it with a screen/session management app like tmux or screen.  So, I recommend you look into those too if you don’t use them already (tmux is a little more modern if you’re new to both).

I hope this helps you be a little more efficient, and thanks for reading.

Kubernetes – Run Bash / Get Logs for Pod in Namespace

Here are some useful commands for managing pods within a given namespace.

List Pods in Namespace “qaas”

This will show you all the available pods in the given namespace.

$> kubectl get pods -n qaas
NAME                                   READY   STATUS    RESTARTS   AGE
dapper-kudu-pgadmin-694bdf8c7f-ljcc4   1/1     Running   0          9h
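If you need just the pod names for scripting, kubectl can emit them directly (e.g. kubectl get pods -n qaas -o name), but even the plain output is easy to parse. A minimal sketch, using output captured in the shape shown above:

```shell
# Sample "kubectl get pods" output captured to a variable
# (in real use: out=$(kubectl get pods -n qaas)).
out='NAME                                   READY   STATUS    RESTARTS   AGE
dapper-kudu-pgadmin-694bdf8c7f-ljcc4   1/1     Running   0          9h'

# Skip the header row and print just the pod names (first column)
echo "$out" | awk 'NR > 1 { print $1 }'
```

This is handy for feeding the pod name into the logs/exec commands below without copy-pasting it.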

Get Logs for Pod

After finding the pod name, you can view its logs centrally from Kubernetes, which is very cool.

$> kubectl logs dapper-kudu-pgadmin-694bdf8c7f-ljcc4 -n qaas
::ffff:10.237.183.230 - - [01/Nov/2019:13:18:26 +0000] "GET / HTTP...
::ffff:10.237.183.230 - - [01/Nov/2019:13:18:26 +0000] "GET /login?...
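Since the logs come back as plain text, you can post-process them with normal shell tools. As a small sketch on lines shaped like the ones above (the ::ffff: prefix is just an IPv4 address mapped into IPv6 form), here’s how you might pull out the unique client IPs; the sample lines are made up for illustration:

```shell
# Two sample access-log lines in the shape shown above
logs='::ffff:10.237.183.230 - - [01/Nov/2019:13:18:26 +0000] "GET / HTTP/1.1"
::ffff:10.237.183.231 - - [01/Nov/2019:13:18:26 +0000] "GET /login HTTP/1.1"'

# Strip the IPv6-mapping prefix from the first field, then dedupe
echo "$logs" | awk '{ sub(/^::ffff:/, "", $1); print $1 }' | sort -u
```

In real use you’d pipe `kubectl logs <pod> -n <namespace>` straight into the awk command.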

Exec Into Pod And Run a Shell

This will let you get into the pod with a shell (like bash or sh) so that you can look around and see what’s going on.

$> kubectl exec -it dapper-kudu-pgadmin-694bdf8c7f-ljcc4 -n qaas -- /bin/sh
/pgadmin4 #

Spring Boot @Autowired Abstract/Base Class

Properly using (and not overusing) abstract classes is vital to having a cleanly organized and DRY application.  Developers tend to get confused when trying to inject services into abstract or base classes in Spring Boot though.

Setter injection actually works fine in base classes; but as noted here, you should make sure to use the final keyword on the setter to prevent it from being overridden.

public abstract class AbstractSchemaService {
    ...
    private StringShorteningService shortener;

    @Autowired
    public final void setStringShorteningService(StringShorteningService shortener) {
        this.shortener = shortener;
    }
    ...
}

Artifactory + Maven Deploy +/- Jenkins – 502 Failure

This is just a quick post…

If you are running a maven (mvn) deploy to artifactory, be it locally or through Jenkins/etc, and you get an error like this:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.8.2:deploy (default-deploy) on project sample-ws: Failed to deploy artifacts: Could not transfer artifact com.company.cs.sample:sample-ws:jar:3.2.2 from/to central (https://adlm.company.com/artifactory/sample-maven): Failed to transfer file https://adlm.company.com/artifactory/sample-maven/com/company/cs/sample/sample-ws/3.2.2/sample-ws-3.2.2.jar with status code 502 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

You need to edit your ~/.m2/settings.xml and add in your artifactory-user/artifactory-token (preferably your own credentials on your desktop, and a service account’s on your server).  You can generate an API token in your user settings in the Artifactory UI.

<settings>
  <servers>
    <server>
      <id>central</id>
      <username>your-user</username>
      <password>your-token</password>
    </server>
    <server>
      <id>snapshots</id>
      <username>your-user</username>
      <password>your-token</password>
    </server>
  </servers>
</settings>
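A mangled settings file (say, a missing closing tag) makes Maven fail in confusing ways, so it’s worth a quick well-formedness check after editing. Here’s a sketch, assuming python3 is available; it parses a copy of the snippet in a temp file rather than touching your real settings:

```shell
# Write the snippet to a temp file and parse it, listing the server ids
f=$(mktemp)
cat > "$f" <<'EOF'
<settings>
  <servers>
    <server>
      <id>central</id>
      <username>your-user</username>
      <password>your-token</password>
    </server>
    <server>
      <id>snapshots</id>
      <username>your-user</username>
      <password>your-token</password>
    </server>
  </servers>
</settings>
EOF

python3 - "$f" <<'EOF'
import sys
import xml.etree.ElementTree as ET

tree = ET.parse(sys.argv[1])          # raises ParseError if malformed
ids = [s.findtext("id") for s in tree.getroot().iter("server")]
print("server ids:", ids)
EOF
rm -f "$f"
```

Point the script at your real settings file once you’ve edited it, and it should list both server ids without complaint.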

JupyterHub – NGINX TLS Termination – Ubuntu

Due to corporate security requirements, I recently had to ensure I had TLS both between clients and my load balancer, and from the load balancer to the back-end application (JupyterHub).

This was a little problematic because I was using a real certificate at the load balancer and had intentionally terminated TLS there for cost reasons.  So, I added a self-signed certificate between the load balancer and the back-end.

If you use this GitHub gist right here for your nginx config, and you modify the certificate paths to point to files you generate from this DigitalOcean tutorial, it works out just fine.  Then you just have to point your load balancer at port 443 on your JupyterHub host(s) and everything works out great.

Here’s an excerpt of the relevant parts of the DigitalOcean tutorial. Once you make the files, you can just update the gist yourself to use them. The Diffie-Hellman group line is not in the gist, so add that yourself based on the DigitalOcean one if you are so inclined.

sudo mkdir -p /etc/nginx/ssl && cd /etc/nginx/ssl
sudo chmod 700 /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout nginx-selfsigned.key -out nginx-selfsigned.crt

Generally fill out all the details for the cert normally, but pay extra attention to the common name. This should match your DNS name (e.g. env.yoursite.com). If you deploy to multiple environments and this is an internal app, you may consider *.yoursite.com to avoid needing one cert per environment.
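If you’re scripting this, you can skip the interactive prompts entirely with -subj and set the common name directly. A sketch, writing to a temp directory and using the hypothetical name env.yoursite.com:

```shell
dir=$(mktemp -d)

# Same command as above, but non-interactive: -subj sets the common
# name directly (env.yoursite.com is a hypothetical DNS name)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
    -subj "/CN=env.yoursite.com" \
    -keyout "$dir/nginx-selfsigned.key" \
    -out "$dir/nginx-selfsigned.crt" 2>/dev/null

# Confirm the common name actually landed in the certificate
openssl x509 -noout -subject -in "$dir/nginx-selfsigned.crt"
```

The last command should print a subject line containing CN = env.yoursite.com (the exact formatting varies slightly between OpenSSL versions).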

Once you’ve done that, also run the following to create a “strong Diffie-Hellman group”. Refer to DigitalOcean’s link for this one; I honestly didn’t have the time to look into why this is needed yet.

sudo openssl dhparam -out dhparam.pem 2048

Install PGAdmin Server With Docker

You can get PGAdmin 4 running in server mode with docker very easily.  Using this command will set up the server, set it to always restart in response to reboots or errors, and it will ensure that its data (users, config) is persisted between container runs.

docker pull dpage/pgadmin4

docker run -p 80:80 \
    --name pgadmin \
    --restart always \
    -v "/opt/pgadmin4/data:/var/lib/pgadmin" \
    -v "/opt/pgadmin4/config/servers.json:/servers.json" \
    -e "PGADMIN_DEFAULT_EMAIL=user@domain.com" \
    -e "PGADMIN_DEFAULT_PASSWORD=SuperSecret" \
    -d dpage/pgadmin4

You can run this command afterward to see the details and confirm the restart policy as well if you like:

docker inspect pgadmin