Presto Resource Groups Practical Notes

I recently had to start using resource groups in Presto. I’ll expand this over time with example configurations and such, but for now, I’m just taking some notes on things that are not necessarily obvious.

Concurrency Limit vs Connection Pool Size

Being a Java guy, I always visualize any database work as if it’s being done from a connection pool. Without any resource groups, I was able to run hundreds of parallel queries against Presto, which requires hundreds of connections in a Java connection pool.

When we added resource groups with concurrency limits, I was curious: if I have a connection pool of 100 and launch 100 queries in Java, and that user/group has a hard concurrency limit of 25, what happens?

Presto will let you launch the 100 parallel connections/queries from Java, and it will queue 75 of them, assuming the queue size in the resource group is greater than 75. If the queue size were 50, though, you would have 25 running queries, 50 queued queries, and 25 queries failing with an error about exceeding cluster resources, like this:

Caused by: java.sql.SQLException: Query failed (#20200704_001046_01778_pw9xr): Too many queued queries for "global.users.john.humphreys"
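
For reference, that queue/concurrency behavior is driven by the hardConcurrencyLimit and maxQueued settings on the group. Below is a minimal sketch of the relevant piece of the file-based resource group JSON config, assuming a layout like the "global.users.<user>" group in the error above; the memory limits and the parent-group numbers are illustrative, not real values.

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "80%",
      "hardConcurrencyLimit": 100,
      "maxQueued": 1000,
      "subGroups": [
        {
          "name": "users",
          "softMemoryLimit": "80%",
          "hardConcurrencyLimit": 100,
          "maxQueued": 1000,
          "subGroups": [
            {
              "name": "${USER}",
              "softMemoryLimit": "20%",
              "hardConcurrencyLimit": 25,
              "maxQueued": 50
            }
          ]
        }
      ]
    }
  ],
  "selectors": [
    { "user": ".*", "group": "global.users.${USER}" }
  ]
}

With maxQueued at 50 on the per-user group, queries 26 through 75 queue up and anything beyond that is rejected with the "Too many queued queries" error shown above.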

CPU Limits – Practical Effects

You can put soft and hard limits on CPU. They are a little hard to calculate, though; you have to think in terms of total cores in the cluster and the period over which the limits are checked. E.g. if your period is 30 minutes, and you have 10 worker servers with 32 cores each, then there are 30 * 10 * 32 = 9,600 core-minutes available on your cluster in that period. So, you could assign a user/group, say, 3,200 core-minutes to give them 1/3 of the cluster time.
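
Concretely, in the file-based config that would look something like the sketch below: cpuQuotaPeriod is a top-level setting, and the per-group softCpuLimit / hardCpuLimit are durations of CPU time within that period. The group name, memory limit, and soft CPU value here are just illustrative.

{
  "rootGroups": [
    {
      "name": "analytics",
      "softMemoryLimit": "30%",
      "hardConcurrencyLimit": 25,
      "maxQueued": 100,
      "softCpuLimit": "2400m",
      "hardCpuLimit": "3200m"
    }
  ],
  "cpuQuotaPeriod": "30m"
}

As I understand it, exceeding the soft limit gradually reduces the group's allowed concurrency, while hitting the hard limit stops new queries from starting until usage falls back under the quota.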

This will *not* prevent them from using 100% of the cluster's CPU for an hour, though. If they start 25 parallel queries (keeping our 25-query limit from earlier), and all of those queries run for more than an hour and use all the CPUs, Presto does *not* have logic advanced enough to restrict or penalize those queries while they are running.

New queries after that will be severely penalized, though. E.g. I tested huge queries with a 5-minute period, giving a user 10% of the cluster via CPU limits. Because the queries used the whole cluster for much more than 5 minutes, that user was not allowed to run queries for over an hour afterward! So, a user can be penalized for many times the original period.

This last part made the limits hard for me to use. It means one heavy query from a production user could leave them basically unable to run their app for many hours.

Also, since CPU limits are an absolute number rather than percentage-based (as memory limits are), users on an auto-scaled Presto cluster (custom, Starburst, or EMR) cannot take advantage of the added capacity while still being restricted from using the whole cluster.

All in all, I did not find the CPU limits amazingly useful as a whole, but they may help keep ad-hoc users from consuming too much of the cluster: allow applications to do what they need to, but stop random users from doing damage with concrete limits.

Also note that you can specify the CPU limits and the quota period in any units you like. So, if it helps, use hours or whatever keeps the numbers smaller, but don’t forget to do the node-count and core-count math for the actual limits. The period itself is not related to the number of nodes or cores, so don’t confuse that part.

Sub Groups

Sub-groups appear in most examples, and I would point out that they are probably the most powerful feature to make use of. They let you, say, group all ad-hoc users together and say that all ad-hoc users combined can’t use more than 40% of the cluster memory, but any one ad-hoc user can use up to 20%. That way you can protect the overall cluster while still ensuring at least 2 users can use their maximum memory amount in parallel (very useful).

Sub-groups can be named dynamically based on the user, and you can nest multiple levels. E.g. we put all our application users in one group and sub-group, and our ad-hoc users in another one with far fewer resources. Our app users’ names start with "app.", so this is really easy to pull off with the selector pattern support.
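
A rough sketch of that kind of layout, again with the file-based config: the group names, percentages, and the regex are illustrative, not our real values, and selectors are matched in order, so the app pattern has to come before the catch-all.

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 200,
      "maxQueued": 2000,
      "subGroups": [
        {
          "name": "apps",
          "softMemoryLimit": "60%",
          "hardConcurrencyLimit": 100,
          "maxQueued": 1000,
          "subGroups": [
            { "name": "${USER}", "softMemoryLimit": "30%", "hardConcurrencyLimit": 25, "maxQueued": 100 }
          ]
        },
        {
          "name": "adhoc",
          "softMemoryLimit": "40%",
          "hardConcurrencyLimit": 20,
          "maxQueued": 100,
          "subGroups": [
            { "name": "${USER}", "softMemoryLimit": "20%", "hardConcurrencyLimit": 5, "maxQueued": 20 }
          ]
        }
      ]
    }
  ],
  "selectors": [
    { "user": "app\\..*", "group": "global.apps.${USER}" },
    { "user": ".*", "group": "global.adhoc.${USER}" }
  ]
}

In this shape, the adhoc parent caps all ad-hoc users combined (the 40% from above), while its ${USER} sub-group caps any single user (the 20%); anything not matching the app pattern falls through to the ad-hoc group.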

Azure: Tagging All Resources in a Resource Group With Its Tags

Recently, I had to go back and correctly tag a whole bunch of items in a new resource group, none of which had been given tags.

This kind of task can be daunting in the Azure portal… you have to click each resource, open the Tags tab, type each key/value pair, and save. So… tagging 50 resources with 5 tags each ends up being 50 * 2 * 5 + 50 = 550 clicks at minimum, plus all the typing! Clearly, this is a task better suited for the CLI.

Using the Azure CLI

Microsoft actually has a very full-featured tutorial on this subject right here. The more advanced code they provide will find every resource you have in every group and apply each group’s tags to its resources. It will even optionally retain existing tags on resources that are already tagged.

I wanted something a little simpler, with the login included, so I can quickly copy it in to fix a resource group here and there without worrying about affecting all the other resource groups. So, here is the code. It also counts the items so you can see progress, as this can take some time.

Note that I wanted to forcibly replace all of the tags on the resources with the resource group’s tags, as some of the existing ones were incorrect. You can get code that merges with existing tags from the link noted above if you prefer.

tenant="your-tenant-id"
subscription="your-subscription-name"
rg="your-resource-group-name"

# Login to azure - it will give you a message and code to log in via
# a web browser on any device
az login --tenant "${tenant}" --subscription "${subscription}"

# Show subscriptions just to show that we're on the correct one.
echo "Listing subscriptions:"
az account list --output table

# Get the tags from the resource group as JSON, then flatten them into
# space-separated key=value pairs (e.g. {"env": "prod"} becomes env=prod).
# Note: this simple approach assumes tag keys and values contain no spaces.
jsontag=$(az group show -n "$rg" --query tags)
t=$(echo $jsontag | tr -d '"{},' | sed 's/: /=/g')

# Get all resources in the target resource group, and loop through
# them applying the tags from the resource group. Count them to show
# progress as this can take time.
i=0
r=$(az resource list -g "$rg" --query "[].id" --output tsv)
for resid in $r
do
    # Overwrite this resource's tags with the resource group's tags.
    az resource tag --tags $t --id "$resid"
    let "i+=1"
    echo $i
done

Also note that you can find the total number of resources you are targeting in advance with this command (it uses jq), so the counter is more practical :).

az resource list --resource-group "your-rg-name" --query "[].name" | jq length