Presto Coordinator High Availability (HA)

Quick Recap – What is Presto?

Presto is an open-source distributed SQL query engine created at Facebook that runs as its own cluster.  It can point at an existing Hive metastore and query the Hive tables in HDFS (or similar storage) using its own resources.  It is much faster than Hive because it does everything in memory rather than via MapReduce.  It can connect to numerous data sources aside from Hive as well (though I have only used it with Hive over Azure’s ADLS personally).

No High Availability

We started using Presto in an enterprise use case, and I was astounded to find that it doesn’t have any high availability (HA) built in.  Presto as a product is wonderful – it is fast, easy to set up, provides pretty solid query diagnostics, handles massive queries in a very stable manner, etc.  So, the complete lack of an HA solution seems very strange given the strength of the product.

The critical component in Presto is the Coordinator, and it is a single point of failure.  It is the brains of the operation; it parses queries, breaks them into tasks, controls where work gets scheduled, etc.  Users only talk to the Coordinator node.

Coordinators vs Workers

Despite the coordinator’s importance, the only real differences between a coordinator and a worker node at the configuration level are that:

  1. Coordinators specify that they are a coordinator.
  2. Coordinators can run an embedded discovery server – all nodes in the cluster report to this discovery server (including the coordinator itself).  This discovery server can actually be run separately from the coordinator as well if desired; I think the provision of an embedded one is relatively new.  The discovery server is how the coordinator knows the full set of nodes it is managing.
  3. Coordinators can choose whether or not they themselves are used to process queries (as opposed to just managing them).

Again, coordinators accept client connections (e.g. JDBC, ODBC), take the queries that come in over those connections, parse them, validate them, break them into tasks, and schedule those tasks across the pool of available workers.

Workers just report to the discovery server and handle the tasks they are allocated.
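For reference, here is a minimal sketch of what those differences look like in the two config.properties files.  The hostnames are illustrative, and I’m using port 8321 for Presto to match the setup later in this post – this is not lifted from a real cluster.

Coordinator – config.properties:

    coordinator=true
    node-scheduler.include-coordinator=false
    http-server.http.port=8321
    discovery-server.enabled=true
    discovery.uri=http://coordinator-host.example.com:8321

Worker – config.properties:

    coordinator=false
    http-server.http.port=8321
    discovery.uri=http://coordinator-host.example.com:8321

The coordinator-only lines map directly onto the three differences above: coordinator=true marks the node as a coordinator, discovery-server.enabled=true runs the embedded discovery server, and node-scheduler.include-coordinator controls whether the coordinator also processes query tasks itself.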

HA Options

There is surprisingly little to find online about making Presto HA.  The only two solutions that I’ve seen are:

  • Run multiple clusters behind a load balancer.
  • Run multiple coordinators and some form of proxy service to ensure only one is ever active at a time.

Both of these have challenges and/or drawbacks.

If you are running multiple clusters, you probably want them to be active/active so you don’t only use half of your nodes at a time.  Handling this properly requires that your proxy service issue a redirect to the target cluster’s coordinator, so that the client (e.g. a JDBC connection) can re-send the request and talk to that coordinator directly.  This will work, but you have still limited the maximum query size you can handle: your nodes are split across 2 or more clusters, so they cannot all co-operate on a single very large query.
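As a rough sketch of that redirect behaviour (assuming HAProxy as the proxy service, with illustrative hostnames): the proxy answers the client with an HTTP redirect rather than forwarding the traffic itself, so the client re-issues its requests directly against the chosen cluster’s coordinator.

    frontend presto_router
        bind *:8080
        mode http
        # Send the client on to the chosen cluster's coordinator (307 preserves
        # the method and body).  A real router would pick the cluster based on
        # load rather than hard-coding one as done here.
        http-request redirect prefix http://cluster-a-coordinator.example.com:8321 code 307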

Running multiple coordinators for HA is preferable, as you get to combine all of your nodes into a single, large cluster that can attack large queries.  It is not trivial to do though: if two coordinators operate at once, they can degrade or even deadlock the cluster.  We’ll dig into how to run with multiple coordinators now.

Using Multiple Coordinators

If you want to set up Presto using multiple coordinators, here is the general approach:

  1. Set up 2 or 3 nodes as coordinators.
  2. Tell them to run their own discovery servers in their config.
  3. Tell them to point at localhost for their own discovery server – this is quite important.
  4. Tell them not to do work themselves (this keeps things more stable, though it does mean your cluster has a little less processing power.  You’ll probably have far more workers than coordinators though, so it shouldn’t be an issue).
  5. Install HAProxy on the coordinator nodes.  Register all the coordinator nodes in order and make every one except the first a “backup”.  So, for example, run HAProxy on port 8385 and run Presto on port 8321.  All traffic will go to node #1 unless it’s down, in which case it will go to node #2, and so on (example configs for these steps appear after this list).
  6. Set up a load balancer in front of the coordinator nodes pointing at the HAProxy port and make sure traffic can get through.
  7. Set up all worker nodes to target the load balancer for the discovery server.  So, all workers target the load balancer, which reaches whichever coordinator node it picks, and that node’s HAProxy forwards the traffic on to the primary coordinator.  The primary coordinator therefore always has all of the workers reporting to it, courtesy of HAProxy.
  8. As each coordinator itself only reports to its localhost discovery server, coordinators will not end up talking to each other’s discovery servers and will not interfere with each other.  Only one coordinator will ever have workers registered with it at a time.
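To make the steps above concrete, here are rough sketches of the three configs involved.  Hostnames are illustrative and I’m reusing the ports from step 5 (Presto on 8321, HAProxy on 8385); treat these as a starting point rather than my exact production files.

Coordinator – config.properties (steps 1–4):

    coordinator=true
    # Don't schedule query tasks on the coordinator itself (step 4).
    node-scheduler.include-coordinator=false
    http-server.http.port=8321
    # Run an embedded discovery server (step 2)...
    discovery-server.enabled=true
    # ...and only ever register with our OWN discovery server (step 3).
    discovery.uri=http://localhost:8321

HAProxy on each coordinator node (step 5):

    frontend presto
        bind *:8385
        mode http
        default_backend presto_coordinators

    backend presto_coordinators
        mode http
        option httpchk GET /v1/info
        # Everything goes to coordinator1 unless it is down, then coordinator2, and so on.
        server coordinator1 presto-coord1.example.com:8321 check
        server coordinator2 presto-coord2.example.com:8321 check backup
        server coordinator3 presto-coord3.example.com:8321 check backup

Worker – config.properties (steps 6–7):

    coordinator=false
    http-server.http.port=8321
    # Workers discover the cluster via the load balancer, which reaches some
    # coordinator's HAProxy, which in turn forwards to the primary coordinator.
    discovery.uri=http://presto-lb.example.com:8385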

Let Coordinators Use the Load Balancer?

If you let coordinators use the load balancer, then they will all end up registered with the primary coordinator’s discovery server.  Now… I have seen people online saying that they ran all nodes as coordinators (e.g. in the linked Google Group conversations below), in which case this must be happening.

When I tried it though, I got this warning from all the coordinators (and probably the workers too, but I didn’t check).  It comes out once a second.

2018-12-29T01:38:01.479Z WARN http-worker-176 com.facebook.presto.execution.SqlTaskManager Switching coordinator affinity from awe4s to 9mdsu
2018-12-29T01:38:01.806Z WARN http-worker-175 com.facebook.presto.execution.SqlTaskManager Switching coordinator affinity from 9mdsu to awe4s

Someone in a GitHub issue (I’ve forgotten which one) stated that this means the memory management may get muddled up, which sounds scary.  I have provided a link to the warning in the Presto code below, which somewhat backs this up.

So, maybe it is a disaster, or maybe it’s harmless – but in any case, I didn’t want warning messages that looked this bad coming at me once a second.  So, I opted to have each coordinator talk only to its own discovery server, which leaves the non-primary coordinators 100% idle (not processing anything) unless they become the current primary.  This waste is unfortunate, but as we’ll have far more workers than coordinators, it’s not the end of the world.

Drawbacks

This will keep your cluster running in the event that a coordinator fails.  Any queries that are active when a coordinator fails will still fail though – we can’t do anything about that unless Presto starts supporting HA internally.  Also, the fail-over period is very much tied to your HAProxy configuration and your load balancer health checks (mine takes around 30 seconds using an Azure load balancer and HAProxy; I’ll be looking to reduce that).
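If you want to chip away at that fail-over time, the HAProxy health-check timings (together with the load balancer’s own probe interval) are the main knob.  As a rough example, something like the following in the backend sketched earlier would detect a dead primary in roughly six seconds; the values are illustrative, and setting them too aggressively risks flapping:

    backend presto_coordinators
        mode http
        option httpchk GET /v1/info
        # Probe every 2s; mark a node down after 3 failed checks (~6s),
        # and bring it back after 2 successful ones.
        default-server inter 2s fall 3 rise 2
        server coordinator1 presto-coord1.example.com:8321 check
        server coordinator2 presto-coord2.example.com:8321 check backup
        server coordinator3 presto-coord3.example.com:8321 check backup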

Useful Links