Presto Resource Groups Practical Notes

I recently had to start using resource groups in Presto. I’ll expand this over time with example configurations and such, but for now, I’m just taking some notes on things that are not necessarily obvious.

Concurrency Limit vs Connection Pool Size

Being a Java guy, I always visualize any database work as if it’s being done from a connection pool. Without any resource groups, I was able to run hundreds of parallel queries against Presto, which requires hundreds of connections in a Java connection pool.

When we added resource groups with concurrency limits, I was curious – if I have a connection pool of 100 and launch 100 queries in Java, and I have a hard concurrency limit for that user/group of 25, what happens?

Presto will let you launch the 100 parallel connections/queries from Java, and it will queue 75 of those queries, assuming your queue size in the resource group is > 75. If your queue size were only 50, though, you would have 25 running queries, 50 queued queries, and the remaining 25 queries would fail with an error about too many queued queries, like this:

Caused by: java.sql.SQLException: Query failed (#20200704_001046_01778_pw9xr): Too many queued queries for "global.users.john.humphreys"
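
For reference, here is a minimal sketch of what that scenario might look like in etc/resource-groups.json (the file-based resource group manager). I am reusing the global.users.${USER} layout from the error above; apart from the 25/50 limits, the numbers are placeholders, and property names can vary slightly between Presto versions:

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 200,
      "maxQueued": 1000,
      "subGroups": [
        {
          "name": "users",
          "softMemoryLimit": "100%",
          "hardConcurrencyLimit": 200,
          "maxQueued": 1000,
          "subGroups": [
            {
              "name": "${USER}",
              "softMemoryLimit": "20%",
              "hardConcurrencyLimit": 25,
              "maxQueued": 50
            }
          ]
        }
      ]
    }
  ],
  "selectors": [
    { "group": "global.users.${USER}" }
  ]
}

With this, a single user gets 25 running queries plus 50 queued, and anything launched beyond that is rejected with the error above.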

CPU Limits – Practical Effects

You can put soft and hard limits on CPU. They are a little hard to calculate though; you have to think in terms of total cores in the cluster and the period over which the limits are checked. E.g. if your period is 30 minutes, and you have 10 worker servers, and you have 32 cores per server, then there are 30 * 10 * 32 = 9,600 core-minutes of CPU time available on your cluster in that period. So, you can assign a user/group, say, 3,200 minutes to give them 1/3 of the cluster time.

This will *not* prevent them from using 100% CPU on the cluster for an hour though. If they start 25 parallel queries (keeping our 25 limit from earlier), and all of those queries run for > 1 hour and use all the CPUs, Presto does *not* have advanced enough logic to restrict/penalize those already-running queries until they are done.

New queries after that will be severely penalized though. E.g. I tested huge queries with a 5-minute period, giving a user CPU limits equal to 10% of the cluster. As the queries used the whole cluster for much more than 5 minutes, that user was not allowed to run queries for over an hour afterward! So, a user can be penalized for many times the original period.

This last part made it hard for me to use the limits. It would mean one heavy query by a production user could leave them basically unable to run their app for many hours.

Also, since CPU limits are absolute numbers rather than percentage-based (like memory), it means that if you auto-scale your Presto cluster (custom, Starburst, or EMR), users cannot take advantage of the added capacity while still being restricted from using the whole cluster.

All in all, I found the CPU limits not amazingly useful as a whole, but they may be useful for keeping ad-hoc users from using much of the cluster. E.g. allow applications to do what they need to, but stop random users from doing damage with concrete limits.

Also note – you can specify CPU limits and the period in any units you like. So, if it helps you, use hours or whatever to keep the numbers smaller – but don’t forget to do the node-count and core-count math when setting the actual limits. The period itself is not related to the number of nodes or cores, so don’t confuse that part.
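
As a sketch of how this is expressed with the earlier numbers (30-minute quota period, ~3,200 core-minutes for 1/3 of the cluster): cpuQuotaPeriod is a top-level property and the CPU limits go on the group. The soft limit here is just an assumed lower threshold where penalties start, and the memory/concurrency numbers are placeholders:

{
  "cpuQuotaPeriod": "30m",
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 100,
      "maxQueued": 1000,
      "softCpuLimit": "2400m",
      "hardCpuLimit": "3200m"
    }
  ],
  "selectors": [
    { "group": "global" }
  ]
}

Since 3,200 minutes is roughly 53 hours, you could write the limit in hours instead if that reads better to you.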

Sub Groups

Sub-groups are in most examples, and I would point out that they are probably the most powerful thing to make use of. They let you, for example, group all ad-hoc users together and state that all ad-hoc users combined can’t use more than 40% of the cluster memory, while any one ad-hoc user can use up to 20%. That way you can protect the overall cluster while still ensuring at least 2 users can make use of their max memory amount in parallel (very useful).
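
That shape looks roughly like this in the group tree; the concurrency and queue numbers are placeholders, and only the memory percentages are the point here:

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 200,
      "maxQueued": 2000,
      "subGroups": [
        {
          "name": "adhoc",
          "softMemoryLimit": "40%",
          "hardConcurrencyLimit": 50,
          "maxQueued": 200,
          "subGroups": [
            {
              "name": "${USER}",
              "softMemoryLimit": "20%",
              "hardConcurrencyLimit": 5,
              "maxQueued": 20
            }
          ]
        }
      ]
    }
  ]
}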

Sub-groups can be dynamically named based on the user, and you can have multiple of them. E.g. we put all our application users in one group/sub-group, and our ad-hoc users in another one with far fewer resources. App users’ names start with “app.”, so this is really easy to pull off with the pattern support in selectors.
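
The routing itself is done with selectors. Here is a sketch that assumes an apps branch (global.apps.${USER}) is defined alongside the adhoc branch above, and that application user names really do all start with "app.":

{
  "selectors": [
    { "user": "app\\..*", "group": "global.apps.${USER}" },
    { "group": "global.adhoc.${USER}" }
  ]
}

Selectors are evaluated in order, so anyone not matching the app pattern falls through to the ad-hoc group.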

Presto / Hive – Find Parquet Files Touched/Referenced by a Query/Predicate

We had a use case where we needed to find out which Parquet files were touched by a query/predicate. This was so that we could rewrite certain files in a special way to remove specific records. In this case, Presto was not mastering the data itself.

We found this awesome post -> https://stackoverflow.com/a/44011639/857994 on stack overflow which shows this pseudo-column:

select "$path" from table

This correctly shows you the Parquet file a row came from, which is awesome! I also found this MR which shows work has been merged to add $file_size and $file_modified_time properties, which is even cooler. So, newer versions of presto-sql have even more power here.
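
So if you need the list of files that a particular predicate touches (as in our rewrite use case), a query along these lines does it; the table name and predicate are just placeholders:

-- list the distinct parquet files that contain rows matching the predicate
select distinct "$path" as parquet_file
from my_schema.my_table
where some_id = 12345;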


Presto Internal HTTPS / TLS Graceful Shutdown

Getting graceful shutdown to work on a TLS-secured Presto cluster can take a few tries. The example script below should do it for you easily.

It mines the private key and cert out of your p12 file and then calls curl with them against the HTTPS endpoint.

Additional Notes:

  • You need to use your nodes' proper DNS names that match the cert (e.g. *.app.company.com).
  • You need to specify the https protocol.
  • I use port 8321 on Presto, which is not standard, so you may want to update that.
# Import JKS file to p12.
keytool -importkeystore -srckeystore mycert.jks -srcstorepass SomePassword -srcalias myapp.mycompany.com -destalias myapp.mycompany.com -destkeystore mycert.p12 -deststoretype PKCS12 -deststorepass SomePassword

# Get key and cert out of p12.
openssl pkcs12 -in mycert.p12 -out mycert.key.pem -nocerts -nodes
openssl pkcs12 -in mycert.p12 -out mycert.crt.pem -clcerts -nokeys

# Add key and cert to curl call for graceful shutdown endpoint.
curl -E ./mycert.crt.pem --key ./mycert.key.pem -v -XPUT --data '"SHUTTING_DOWN"' -H "Content-type: application/json" https://ip-10-254-98-5.myapp.mycompany.com:8321/v1/info/state --insecure

Surface Pro 3 – Fix Touch Screen Dead Spots (Microsoft)

I’ve had a Surface Pro 3 for a long time and it has generally been amazing. But, twice now, I have had to deal with an issue where a few parts of the screen totally stop responding.

The occurrences were a year or more apart, and I had forgotten the solution by the second time, so I’m posting it here!

A blog post pointed me to this download (from sony.com, so it’s legit) -> https://www.sony.com/electronics/support/downloads/W0009338

You run that Sony app, and it calibrates in a CLI window for a minute; then open up your Windows 10 touch screen calibrator (you can find it by searching for “touch screen” in the search bar by the Windows logo). Run that, and you should be good!