Is Presto Slow When Returning Millions of Records / Big Result Set?

What Was Our Problem?

People were reporting that Presto was slow when they exported hundreds of millions of records from much larger tables.  The queries were simple WHERE-clause filters selecting a few fields from tables with hundreds of billions of records.

Was It Actually Slow?

No! At least, not when parallelized well and tested properly.  I wrote a Java app to

  • Split a query into N = 100 parts by using where clauses with a modulus on an integer column.
  • Query Presto in parallel with 30 threads, working through the 100 queries.
  • Output results to standard out.

In an external bash script, I also grepped the results, just to pull out some statistics I was outputting.
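For illustration, here is a minimal sketch of that tester, in Python rather than the original Java, assuming the presto-python-client package and hypothetical host, table, and column names:

# Minimal sketch of the parallel export tester (the real one was a Java app).
# Assumes the presto-python-client package and hypothetical names:
# coordinator host "presto-coordinator", table "events", integer column "id".
from concurrent.futures import ThreadPoolExecutor

import prestodb

N_PARTS = 100    # split the query into 100 modulus buckets
N_THREADS = 30   # run 30 of them in parallel at any time

def run_part(part):
    conn = prestodb.dbapi.connect(
        host="presto-coordinator", port=8080,
        user="exporter", catalog="hive", schema="default",
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT id, field_a, field_b FROM events "
        "WHERE mod(id, %d) = %d" % (N_PARTS, part)
    )
    count = 0
    rows = cur.fetchmany(10000)
    while rows:
        for row in rows:
            # Original approach: print(row) to stdout, grepped externally.
            # Much faster approach: only keep counters/statistics in memory.
            count += 1
        rows = cur.fetchmany(10000)
    conn.close()
    return count

with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    total = sum(pool.map(run_part, range(N_PARTS)))

print("rows read: %d" % total)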

This was slow!

Why Was It Slow?

First of all, let’s talk about the Presto cluster setup:

  • 1 coordinator.
  • 15 workers.
  • All m5.8xlarge = 128GB RAM / 32 processor cores.

This is pretty decent.  So, what could our bottlenecks be?

  1. Reading from S3.
  2. Processing results in the workers.
  3. A slow coordinator, since all 15 workers funnel results back through it.
  4. A slow consumer (our client running the queries).

To rule out 1, 2, and 3 respectively, I:

  1. Ran a count(*) query, which forces a scan over all the relevant S3 data but returns only a single row.  It came back pretty fast (in 15 seconds or so).
  2. Added more workers.  This had minimal effect on the final timings, so we weren’t worker-bound.
  3. Switched the coordinator to a very large, compute-optimized node type.  This had minimal effect on the timings as well.

So, the problem appears to be with the client!

Why Was the Client Slow?

Our client really wasn’t doing a lot.  It was running 30 parallel queries and outputting the results, which were being grepped.  It was a node of similar size to our Presto coordinator, and it had plenty of CPU and RAM, a decent network, and decent disks (EBS).

It turned out, though, that once we stopped the grep, stopped writing the results to stdout, and just kept counters/statistics on the results we read, the run went from ~25 minutes to ~2 minutes.

If we had run this in Spark or some other engine with good parallel behavior, the workload would have been distributed over more nodes, each with enough capacity to process its portion of the records in parallel.  But since we were funneling all of the results through a single client node, the threads/CPU and other resources we were using capped out and could not go any faster.
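For comparison, here is a rough PySpark sketch of the same export (table, columns, and output path are hypothetical), where each executor processes and writes its own slice of the results instead of everything flowing through one client:

# Rough PySpark sketch of the same export, distributed over executors.
# Table, columns, and output path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("big-export").getOrCreate()

df = spark.sql("SELECT id, field_a, field_b FROM events WHERE field_a = 'x'")

# Each executor writes its own partitions of the result set directly,
# so no single node has to receive and print every row.
df.write.mode("overwrite").parquet("s3://some-bucket/export/")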

Note: the client server did not show high overall utilization, but some individual threads were pegged at 100%.  So, the client app likely had a bottleneck we could remove by improving it.

Summary

So… next time you think Presto can’t handle returning large numbers of results from the coordinator, take some time to evaluate your testing methodology.  Presto isn’t really designed to route hundreds of millions of results through the coordinator, but in our experience it handles it quite well.

 

TMUX Easy Hotkeys + Lean Command Prompt

This is just a simple note of some TMUX and BASH profile configuration that I find helpful when developing.

Tmux lets you split screens arbitrarily.  So, you can divide one terminal into, say, one half-screen, full-width pane up top and two quarter-screen panes on the bottom.  Generally, you split a screen into multiple panes horizontally or vertically by pressing ctrl + b and then a key (like % or "), and you move between panes with ctrl + b and an arrow key.

Two problems with this are:

  • ctrl + b is an awful hotkey; it’s hard to reach and you’ll use it constantly.
  • Once you end up in quarter-size screens, even on big monitors, your default terminal prompt (the thing that shows up before you type commands) becomes too large for comfortable use.

TMUX Config

Edit (or create) the file ~/.tmux.conf to customize tmux.  Then add these lines.  The first two switch the ctrl + b hotkey to ctrl + a (which is much more convenient for constant use).  The others are just nice fixes to have around for other reasons.

# remap prefix from 'C-b' to 'C-a'
unbind C-b
set-option -g prefix C-a

# Let ctrl + a still reach applications (press it twice to send it through).
bind-key C-a send-prefix

# Start window numbering at 1
set -g base-index 1

Bash Profile Prompt Config

Edit your ~/.bashrc file (your bash profile).  If a line setting PS1= already exists, you’ll want to comment it out or modify it; otherwise just add this in:

PS1="\u:\W$ "

This changes your terminal prompt to show only your username and the base name of the current directory.  So, for example, if my name is John and I’m working in a folder called /opt/code, it would just display john:code$ instead of a full path with additional info (which takes up way too much space on quarter-size panes).

I hope you find this useful, and thanks for reading!

 

Validate/Check Parquet File Schema From PC/Laptop

Checking a Parquet Schema

Generally, when I have had to check the schema of a parquet file in the past, I have checked it within Apache Spark, or by using https://github.com/apache/parquet-mr/tree/master/parquet-tools.

Today, though, I had to check a parquet file schema and came across this nifty Python utility: https://github.com/chhantyal/parquet-cli.  I think it’s just a wrapper around pyarrow, but it’s slick and works easily.

You can pip install it trivially and then use it to view the data and schema of a parquet file with ease.  Here’s an example of installing it and checking a schema:

$ pip install parquet-cli
...

$ parq part-00000-679c332c.c000.snappy.parquet --schema

# Schema
ID: BYTE_ARRAY String
OrderID: BYTE_ARRAY String
SaleID: BYTE_ARRAY String
OrderDate: BYTE_ARRAY String
Pack: BYTE_ARRAY String
Qnty: BYTE_ARRAY String
Ratio: BYTE_ARRAY String
Name: BYTE_ARRAY String
Org: BYTE_ARRAY String
Category: BYTE_ARRAY String
Type: BYTE_ARRAY String
Percentage: BYTE_ARRAY String

Needless to say, this is much easier than dealing with Spark or parquet-tools for schema validation or quick checks on parquet files that aren’t too huge.
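Since parquet-cli appears to just wrap pyarrow, you can also do the same schema check directly in a couple of lines of Python if you already have pyarrow installed (the file name below is reused from the example above):

# Print a parquet file's schema directly with pyarrow,
# roughly what parquet-cli is doing under the hood.
import pyarrow.parquet as pq

print(pq.read_schema("part-00000-679c332c.c000.snappy.parquet"))

# For row group and other metadata details as well:
pf = pq.ParquetFile("part-00000-679c332c.c000.snappy.parquet")
print(pf.metadata)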