Presto – Get and List the Connectors on All Nodes in Cluster

Some problems in Presto are the result of having connector definitions on only a subset of nodes in the cluster. For example, a recent error on the presto-sql forum during an insert into a Hive table was:

java.lang.IllegalArgumentException: No page sink provider for catalog 'hive'
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:216)
at io.prestosql.split.PageSinkManager.providerFor(PageSinkManager.java:67)
at io.prestosql.split.PageSinkManager.createPageSink(PageSinkManager.java:61)
at io.prestosql.operator.TableWriterOperator$TableWriterOperatorFactory.createPageSink(TableWriterOperator.java:114)
at io.prestosql.operator.TableWriterOperator$TableWriterOperatorFactory.createOperator(TableWriterOperator.java:105)
at io.prestosql.operator.DriverFactory.createDriver(DriverFactory.java:114)
at io.prestosql.execution.SqlTaskExecution$DriverSplitRunnerFactory.createDriver(SqlTaskExecution.java:941)
at io.prestosql.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1069)
at io.prestosql.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
at io.prestosql.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)

If you have a decent-sized cluster, it is very painful to go to each node and check its catalogs. The problem can be even worse if an old node rejoins the cluster after maintenance or something like that.

In any case, you can hit the following URL on Presto (/v1/service/presto) to list all nodes and their registered connectors in one shot. This will help you track down the problem fast :). You can even be lazy and parse the JSON in Chrome dev tools, jq, etc. so you don’t have to eyeball all the nodes.

https://nonprod.presto.your-company.com/v1/service/presto

Example Output

  "environment": "nonprod",
"services": [
{
"id": "a35ae2a7-fa95-43c9-b893-180449a48c5a",
"nodeId": "blue-presto-worker-865b8db58-g92wn",
"type": "presto",
"pool": "general",
"location": "/blue-presto-worker-865b8db58-g92wn",
"properties": {
"node_version": "331-n-2.6.1",
"coordinator": "false",
"https": "https://10-234-232-180.nonprod-presto.pod.cluster.local:8443",
"https-external": "https://10-234-232-180.nonprod-presto.pod.cluster.local:8443",
"connectorIds": "hive-dl,system,cr-meta,ar-meta,dc-meta"
}
},
{
"id": "b8dd0f39-00b0-4c78-b0c0-ff8e753419d8",
"nodeId": "blue-presto-worker-865b8db58-d2nsz",
"type": "presto",
"pool": "general",
"location": "/blue-presto-worker-865b8db58-d2nsz",
"properties": {
"node_version": "331-n-2.6.1",
"coordinator": "false",
"https": "https://10-234-234-106.nonprod-presto.pod.cluster.local:8443",
"https-external": "https://10-234-234-106.nonprod-presto.pod.cluster.local:8443",
"connectorIds": "hive-dl,system,cr-meta,ar-meta,dc-meta"
}
},
...
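
If you don’t want to eyeball the output at all, something like this works (a rough sketch using curl and jq; swap in your own coordinator URL and whatever auth options your cluster needs):

# Print each node with its registered connectors, one line per node (requires jq).
curl -sk https://nonprod.presto.your-company.com/v1/service/presto \
  | jq -r '.services[] | "\(.nodeId): \(.properties.connectorIds)"' \
  | sort

Any node whose connector list differs from the rest is the one missing the catalog.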

Presto Resource Groups Practical Notes

I recently had to start using resource groups in Presto. I’ll expand this over time with example configurations and such, but for now, I’m just taking some notes on things that are not necessarily obvious.

Concurrency Limit vs Connection Pool Size

Being a Java guy, I always visualize any database work as if it’s being done from a connection pool. Without any resource groups, I was able to run hundreds of parallel queries against Presto, which requires hundreds of connections in a Java connection pool.

When we added resource groups with concurrency limits, I was curious – if I have a connection pool of 100 and launch 100 queries in Java, and I have a hard concurrency limit for that user/group of 25, what happens?

Presto will let you launch the 100 parallel connections/queries from Java, and it will queue 75 of those queries, assuming the queue size in your resource group is at least 75. If your queue size were 50, though, you would have 25 running queries, 50 queued queries, and 25 queries failing with a note about resources being exceeded on the cluster, like this:

Caused by: java.sql.SQLException: Query failed (#20200704_001046_01778_pw9xr): Too many queued queries for “global.users.john.humphreys”
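
For reference, here is roughly what that looks like in a file-based resource-groups config (a minimal sketch; the group names match the error above, but the numbers are illustrative rather than our real settings):

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 100,
      "maxQueued": 1000,
      "subGroups": [
        {
          "name": "users",
          "softMemoryLimit": "80%",
          "hardConcurrencyLimit": 50,
          "maxQueued": 500,
          "subGroups": [
            {
              "name": "${USER}",
              "softMemoryLimit": "20%",
              "hardConcurrencyLimit": 25,
              "maxQueued": 50
            }
          ]
        }
      ]
    }
  ],
  "selectors": [
    { "group": "global.users.${USER}" }
  ]
}

With maxQueued at 50 on the per-user group, the 100-query scenario above gives you 25 running, 50 queued, and 25 rejected.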

CPU Limits – Practical Effects

You can put soft and hard limits on CPU. They are a little hard to calculate, though; you have to think in terms of total cores in the cluster and the period in which the limits are checked. E.g. if your period is 30 minutes, you have 10 worker servers, and you have 32 cores per server, then there are 30 * 10 * 32 = 9,600 core-minutes available on your cluster in that period. So, you can assign a user/group, say, 3,200 minutes to give them 1/3 of the cluster time.
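
The knobs involved are the file-level cpuQuotaPeriod plus the per-group softCpuLimit and hardCpuLimit. A fragment showing the one-third example above (a sketch only; the values are illustrative):

{
  "cpuQuotaPeriod": "30m",
  "rootGroups": [
    {
      "name": "global",
      ...
      "softCpuLimit": "2400m",
      "hardCpuLimit": "3200m"
    }
  ]
}

Here, 3,200 of the 9,600 core-minutes available per 30-minute period is one third of the cluster’s CPU time.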

This will *not* prevent them from using 100% of the cluster CPU for an hour, though. If they start 25 parallel queries (keeping our 25-query limit from earlier), and all of those queries run for > 1 hour and use all the CPUs, Presto does *not* have advanced enough logic to restrict or penalize those running queries until they are done.

New queries after that will be severely penalized, though. E.g. I tested huge queries with a 5-minute period, giving a user 10% of the cluster in CPU limits. As the queries used the whole cluster for much more than 5 minutes, that user was not allowed to run queries for over an hour! So, a user can be penalized for many times the original period.

This last part made it hard for me to use the limits. It would mean one harsh query by a production user could leave them basically unable to run their app for many hours.

Also, since CPU limits are a hard number rather than percentage-based (like memory), if you auto-scale your Presto cluster (custom, Starburst, EMR, etc.), users cannot take advantage of the extra capacity while still being restricted from using the whole cluster.

All in all, I found the CPU limits not amazingly useful as a whole, but they may be useful for keeping ad-hoc users from using much of the cluster. E.g. allow applications to do what they need to, but stop random users from doing damage with concrete limits.

Also note – you can specify CPU limits and the period in any units you like. So, if it helps you, use hours or whatever to keep the numbers smaller – but don’t forget to do the math on the actual limits based on node count & core count. The period obviously is not related to the # of nodes or cores, so don’t confuse that part.

Sub Groups

Sub-groups appear in most examples, and they are probably the most powerful feature to make use of. They let you, say, group all ad-hoc users together and say that all ad-hoc users combined can’t use more than 40% of the cluster memory, but any one ad-hoc user can use up to 20%. That way you can protect the overall cluster while still ensuring at least two users can use their max memory amount in parallel (very useful).

Sub-groups can be dynamically named based on the user, and you can have multiple of them. E.g. we put all our application users in one group/subgroup, and our ad-hoc users in another one with far fewer resources. App users start with “app.”, so this is really easy to pull off with the pattern support in selectors.
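
A sketch of that layout (illustrative numbers, not our production file) might look like this:

{
  "rootGroups": [
    {
      "name": "global",
      "softMemoryLimit": "100%",
      "hardConcurrencyLimit": 200,
      "maxQueued": 2000,
      "subGroups": [
        {
          "name": "apps",
          "softMemoryLimit": "60%",
          "hardConcurrencyLimit": 100,
          "maxQueued": 1000,
          "subGroups": [
            { "name": "${USER}", "softMemoryLimit": "30%", "hardConcurrencyLimit": 50, "maxQueued": 200 }
          ]
        },
        {
          "name": "adhoc",
          "softMemoryLimit": "40%",
          "hardConcurrencyLimit": 40,
          "maxQueued": 200,
          "subGroups": [
            { "name": "${USER}", "softMemoryLimit": "20%", "hardConcurrencyLimit": 10, "maxQueued": 20 }
          ]
        }
      ]
    }
  ],
  "selectors": [
    { "user": "app\\..*", "group": "global.apps.${USER}" },
    { "user": ".*", "group": "global.adhoc.${USER}" }
  ]
}

The adhoc group caps all ad-hoc users combined at 40% of cluster memory while any single ad-hoc user gets up to 20%, and the first selector routes anything starting with “app.” into the apps group.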

Presto / Hive – Find Parquet Files Touched/Referenced by a Query/Predicate

We had a use case where we needed to find out which parquet files were touched by a query/predicate.  This was so that we could rewrite certain files in a special way to remove specific records.   In this case, presto was not mastering the data itself.

We found this awesome post -> https://stackoverflow.com/a/44011639/857994 on stack overflow which shows this pseudo-column:

select "$path" from table
This correctly shows you the parquet file a row came from, which is awesome! I also found this MR, which shows that work has been merged to add $file_size and $file_modified_time properties, which is even cooler. So, newer versions of presto-sql have even more power here.
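
For our use case, the query ended up looking something like this (schema, table, and predicate are made up for illustration):

-- List the distinct parquet files backing the rows matched by the predicate.
select distinct "$path"
from my_schema.my_table
where event_date = date '2020-07-01'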

 

Presto Internal HTTPS / TLS Graceful Shutdown

Getting graceful shutdown to work on a TLS-secured Presto cluster can take a few tries. The example script below should do it for you easily.

It extracts the private key and cert from your p12 file and then calls curl with them against the HTTPS endpoint.

Additional Notes:

  • You need to use your nodes’ proper DNS names that match the cert (e.g. *.app.company.com).
  • You need to specify the https protocol.
  • I use port 8321 on Presto, which is not standard, so you may want to update that.
# Import JKS file to p12.
keytool -importkeystore -srckeystore mycert.jks -srcstorepass SomePassword -srcalias myapp.mycompany.com -destalias myapp.mycompany.com -destkeystore mycert.p12 -deststoretype PKCS12 -deststorepass SomePassword

# Get key and cert out of p12.
openssl pkcs12 -in mycert.p12 -out mycert.key.pem -nocerts -nodes
openssl pkcs12 -in mycert.p12 -out mycert.crt.pem -clcerts -nokeys

# Add key and cert to curl call for graceful shutdown endpoint.
curl -E ./mycert.crt.pem --key ./mycert.key.pem -v -XPUT --data '"SHUTTING_DOWN"' -H "Content-type: application/json" https://ip-10-254-98-5.myapp.mycompany.com:8321/v1/info/state --insecure
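
To confirm a node actually took the request, you can GET the same endpoint with the same cert arguments; it should report "SHUTTING_DOWN" while the node drains (a quick sketch, reusing the files from above):

# Check the node's state after requesting the graceful shutdown.
curl -E ./mycert.crt.pem --key ./mycert.key.pem -s https://ip-10-254-98-5.myapp.mycompany.com:8321/v1/info/state --insecure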

Is Presto Slow When Returning Millions of Records / Big Result Set?

What Was Our Problem?

We were having issues with people reporting that Presto was slow when they were exporting hundreds of millions of records from much larger tables.  The queries were simple where clause filters selecting a few fields from some hundred-billion record tables.

Was It Actually Slow?

No! At least, not when parallelized well and tested properly.  I wrote a Java app to

  • Split a query into N = 100 parts by using where clauses with a modulus on an integer column (see the sketch after this list).
  • Query presto in parallel with 30 threads, going through the 100 queries.
  • Output results to standard out.
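
Each of the 100 per-part queries looked roughly like this (table and column names are made up; each thread runs a different remainder from 0 to 99):

select col_a, col_b, col_c
from big_schema.big_table
where some_filter = 'x'
  and mod(some_int_col, 100) = 37  -- the remainder picks which 1/100th of the rows this part returns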

In an external bash script, I also grepped the results just to show some statistics I outputted.

This was slow!

Why Was it Slow?

First of all, let’s talk about the presto cluster setup:

  • 1 coordinator.
  • 15 workers.
  • All m5.8xlarge = 128GB RAM / 32 processor cores.

This is pretty decent.  So, what could our bottlenecks  be?

  1. Reading from s3.
  2. Processing results in workers.
  3. Slow coordinator due to having 15 workers talk through it.
  4. Slow consumer (our client running the queries).

To rule out 1/2/3 respectively I:

  1. Did a count(*) query which would force a scan over all relevant s3 data.  It came back pretty fast (in 15 seconds or so).
  2. Added more workers.  Having more workers  had minimal effect on the final timings, so we’re not worker bound.
  3. Switched the coordinator to a very large, compute-optimized node type.  This had minimal effect on the timings as well.

So, the problem appears to be with the client!

Why Was the Client Slow?

Our client really wasn’t doing a lot.  It was running 30 parallel queries and outputting the results, which were being grepped.  It was a similarly sized node to our presto coordinator, and it had plenty of CPU, RAM, decent network and disks (EBS).

It turned out, though, that once we stopped doing the grep, stopped writing the results to stdout, and just held counters/statistics on the results we read, the run time went from ~25 minutes to ~2 minutes.

If we had run this in Spark or some other engine with good parallel behavior, we would have seen the workload distribute over more nodes, each with enough capacity to process its portion of the records in parallel. But since we were running on a single node that received all the results, the threads/CPU and other resources we were using capped out and could not go any faster.

Note: we did not see high overall utilization on the client server, but some threads were at 100%. So, the client app likely had a bottleneck we could have avoided if we improved it.

Summary

So… next time you think Presto can’t handle returning large numbers of results from the coordinator, take some time to evaluate your testing methodology. Presto isn’t designed to route hundreds of millions of results, but it does it quite well in our experience.