Helm 3 / GitLab Uninstall If Exists

Helm 3 unfortunately does not seem to have a good way to “uninstall if exists.” So, we had to find a way around that to make sure we could reliably wipe out a previous deployment in CI/CD (in the rare cases where we have to change a deployment version).

As we use GitLab, we found this trick in the docs:

If any of the script commands return an exit code different from zero, the job will fail and further commands won’t be executed. This behavior can be avoided by storing the exit code in a variable:

job:
  script:
    - false || exit_code=$?
    - if [ $exit_code -ne 0 ]; then echo "Previous command failed"; fi;

Using this, you can do:

helm uninstall -n your-namespace some-deployment-0-0-5 || exit_code=$?

And, while you’ll get a note that the release wasn’t found on any run after it’s already gone, the pipeline will continue on fine.
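For context, here is roughly what the whole job looks like in .gitlab-ci.yml; the job name and stage are placeholders, and exit_code is pre-set to 0 so the check doesn’t blow up when the uninstall actually succeeds:

uninstall-old-release:      # placeholder job name
  stage: deploy             # placeholder stage
  script:
    - exit_code=0
    # helm returns a non-zero code if the release doesn't exist; capture it so the job keeps going
    - helm uninstall -n your-namespace some-deployment-0-0-5 || exit_code=$?
    - if [ $exit_code -ne 0 ]; then echo "Release not found (probably already gone); continuing."; fi
    # ...the install of the new deployment version would follow here...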

IntelliJ Maven Not Resolving Dependencies / Not Applying Excludes

I recently had a ton of issues with dependencies in IntelliJ with Maven, on multiple consecutive occasions. This is pretty odd, as I’ve used IntelliJ and Maven together for probably around 10 years (I even have the top YouTube videos on that combination!).

I’m on IntelliJ 2020.1 currently, and I found a few things through painful trial and error here. I hope they help you.

  1. Apparently at some point they removed the “Automatically Import” option that used to pop up when you created/imported a Maven project. This used to make things resolve automatically as you changed your POM. Now you need to make sure you build to pull in dependencies, and for good measure I also click the “Reimport” button in the Maven tool window (the little icon in the top row).
  2. There is now an “offline” mode. So, you may keep failing and failing to install because you can’t resolve dependencies on, say, Maven Central. This is super confusing, as you may actually be online, looking at Maven Central, and seeing the dependencies right there. If this happens, check/disable offline mode!
  3. You may add excludes to a complex dependency (like the Hive metastore in my case) to remove transitive dependencies that break your app/framework (like Spring Boot); see the sketch after this list. You should make sure to change the POM, clean, and then re-import the Maven project again to ensure they’re really gone. I kept seeing them in the external dependencies list in the object browser, and my builds kept failing, until I did that last step of re-importing.
  4. If all else fails: clean, Invalidate Caches / Restart (in the File menu), install, and re-import. It seems to be a good catch-all for when you’re completely lost.
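As an example of point 3, here is what the exclusion itself looks like in the POM. The hive-metastore coordinates are real, but the excluded artifact below is only an illustration; swap in whichever transitive dependency is actually breaking your framework:

<!-- pom.xml: strip a transitive dependency that conflicts with the app framework -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-metastore</artifactId>
  <version>3.1.2</version>
  <exclusions>
    <exclusion>
      <!-- illustrative: a logging binding that commonly clashes with Spring Boot -->
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>

After changing this, do the clean / re-import steps from point 3, or the excluded jar can still linger in the external dependencies list.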

This all seems pretty crazy to me, but I’ve gone through it a few times now and it seems right. I hope it helps you save some of the time I wasted!

Presto Resource Groups Practical Notes

I recently had to start using resource groups in Presto. I’ll expand this over time with example configurations and such, but for now, I’m just taking some notes on things that are not necessarily obvious.

Concurrency Limit vs Connection Pool Size

Being a Java guy, I always visualize any database work as if it’s being done from a connection pool. Without any resource groups, I was able to run hundreds of parallel queries against Presto, which requires hundreds of connections in a Java connection pool.

When we added resource groups with concurrency limits, I was curious – if I have a connection pool of 100 and launch 100 queries in Java, and I have a hard concurrency limit for that user/group of 25, what happens?

Presto will let you launch the 100 parallel connections/queries from Java, and it will queue 75 of those queries/connections, assuming your queue size in the resource group is > 75. If your queue size were 50, though, you would have 25 running queries, 50 queued queries, and 25 queries failing with a note about resources being exceeded on the cluster, like this:

Caused by: java.sql.SQLException: Query failed (#20200704_001046_01778_pw9xr): Too many queued queries for "global.users.john.humphreys"
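For reference, here is a sketch of the resource-groups.json (file-based resource group manager) that would produce that behavior. The group names match the error above, but the memory limits and the parent-level numbers are made up for illustration:

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "80%",
      "hardConcurrencyLimit": 100,
      "maxQueued": 400,
      "subGroups": [
        {
          "name": "users",
          "softMemoryLimit": "80%",
          "hardConcurrencyLimit": 100,
          "maxQueued": 400,
          "subGroups": [
            {
              "name": "${USER}",
              "softMemoryLimit": "20%",
              "hardConcurrencyLimit": 25,
              "maxQueued": 75
            }
          ]
        }
      ]
    }
  ],
  "selectors": [
    {
      "user": ".*",
      "group": "global.users.${USER}"
    }
  ]
}

With maxQueued at 75, the 100 parallel queries from the Java pool end up as 25 running and 75 queued; drop it to 50 and the last 25 fail with the “Too many queued queries” error shown above.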

CPU Limits – Practical Effects

You can put soft and hard limits on CPU. They are a little hard to calculate, though; you have to think in terms of total cores in the cluster and the period in which the limits are checked. E.g., if your period is 30 minutes, and you have 10 worker servers with 32 cores per server, then there are 30 * 10 * 32 = 9,600 core-minutes available on your cluster in that period. So, you can assign a user/group, say, 3,200 minutes to give them 1/3 of the cluster time.
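As a sketch of how that maps to the config: cpuQuotaPeriod is a top-level setting, and softCpuLimit / hardCpuLimit are durations of total CPU time per group per period. The group name, memory limit, and concurrency numbers below are just illustrative:

{
  "cpuQuotaPeriod": "30m",
  "rootGroups": [
    {
      "name": "some-team",
      "softMemoryLimit": "50%",
      "hardConcurrencyLimit": 25,
      "maxQueued": 100,
      "softCpuLimit": "2400m",
      "hardCpuLimit": "3200m"
    }
  ],
  "selectors": [
    {
      "user": "some-team-user",
      "group": "some-team"
    }
  ]
}

Here “3200m” is the 3,200 core-minutes (1/3 of the 9,600 available) from the example above. Roughly speaking, going over the soft limit starts penalizing the group’s allowed concurrency, and going over the hard limit stops new queries until the accumulated usage falls back under it, which ties into the penalty behavior described next.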

This will *not* prevent them from using 100% CPU on the cluster for an hour, though. If they start 25 parallel queries (keeping our 25 limit from earlier), and all of those queries run for > 1 hour and use all CPUs, Presto does *not* have advanced enough logic to restrict/penalize the already-running queries before they finish.

New queries after that will be severely penalized, though. E.g., I tested huge queries with a 5-minute period, giving a user 10% of the cluster via CPU limits. As the queries used the whole cluster for much more than 5 minutes, that user was not allowed to run queries for over an hour! So, a user can get penalized for many times the original period.

This last part made it hard for me to use the limits. It would mean one harsh query by a production user could cause them to basically not be able to run their app for many hours.

Also, since CPU limits are a hard number rather than percentage-based (like memory), it means that if you auto-scale your Presto cluster (custom, Starburst, or EMR), users cannot take advantage of the extra capacity while still being restricted from using the whole cluster.

All in all, I found the CPU limits not amazingly useful as a whole, but they may be useful for keeping ad-hoc users from using much of the cluster. E.g. allow applications to do what they need to, but stop random users from doing damage with concrete limits.

Also note – you can specify the CPU limits and the period in any time units you like. So, if it helps, use hours or whatever keeps the numbers smaller – but don’t forget to do the math based on node count & core count for the actual limits. The period, obviously, is not related to the node or core count, so don’t confuse that part.

Sub Groups

Sub-groups show up in most examples, and I would point out that they are probably the most powerful thing you should make use of. They let you, say, group all ad-hoc users together and say that all ad-hoc users combined can’t use more than 40% of the cluster memory, but any one ad-hoc user can use up to 20%. That way you can protect the overall cluster while still ensuring at least 2 users can use their max memory amount in parallel (very useful).

Sub-groups can be dynamically named based on the user, and you can have multiple of them. E.g., we put all our application users in one group and sub-group, and our ad-hoc users in another with far fewer resources. App user names start with “app.”, so this is really easy to pull off with the pattern (regex) support in selectors; a sketch follows.
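Here is a sketch of that layout; the exact percentages and limits are made up, but the shape is the idea: app users matched by an “app.” prefix pattern go into one tree, everyone else falls through to a smaller ad-hoc tree, and each tree has per-user sub-groups named via ${USER}:

{
  "rootGroups": [
    {
      "name": "apps",
      "softMemoryLimit": "60%",
      "hardConcurrencyLimit": 100,
      "maxQueued": 500,
      "subGroups": [
        {
          "name": "${USER}",
          "softMemoryLimit": "30%",
          "hardConcurrencyLimit": 50,
          "maxQueued": 200
        }
      ]
    },
    {
      "name": "adhoc",
      "softMemoryLimit": "40%",
      "hardConcurrencyLimit": 20,
      "maxQueued": 50,
      "subGroups": [
        {
          "name": "${USER}",
          "softMemoryLimit": "20%",
          "hardConcurrencyLimit": 5,
          "maxQueued": 20
        }
      ]
    }
  ],
  "selectors": [
    {
      "user": "app\\..*",
      "group": "apps.${USER}"
    },
    {
      "user": ".*",
      "group": "adhoc.${USER}"
    }
  ]
}

Selectors are evaluated in order and the first match wins, so the catch-all ad-hoc selector has to come last.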