PrestoSQL / Presto UI – Get stats programmatically via API

If you’re having trouble getting the /ui/api/stats info programmatically, you can use this script. It’s ill-advised, as they may change those APIs at any time; but some of the UI stats are better/more correct than the Prometheus stats, so you may need them as we did.

# Log in and capture the Presto-UI-Token cookie value (field 7 of the cookie jar, which is written to stdout)
% COOKIE_VALUE=$(curl --location --request POST 'https://some.company.com/ui/login' \
--data-urlencode 'username=john.humphreys' \
--data-urlencode 'password=<password>' --cookie-jar - --output /dev/null --silent | awk '{print $7}' | tail -1)

# Call the stats endpoint with the captured token
curl 'https://some.company.com/ui/api/stats' -H "Cookie: Presto-UI-Token=$COOKIE_VALUE" | jq --color-output

{
  "runningQueries": 8,
  "blockedQueries": 0,
  "queuedQueries": 0,
  "activeCoordinators": 1,
  "activeWorkers": 35,
  "runningDrivers": 3957,
  "totalAvailableProcessors": 2450,
  "reservedMemory": 2770000473,
  "totalInputRows": 1133212564136,
  "totalInputBytes": 10872687401451,
  "totalCpuTimeSecs": 777021
}
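
If you use this often, here’s a minimal sketch wrapping both steps in a shell function (same hypothetical host and username as above; it reads the password from a PRESTO_PASSWORD environment variable instead of hard-coding it):

presto_stats() {
  # Log in and capture the UI token (field 7 of the cookie jar).
  local cookie
  cookie=$(curl --silent --location --request POST 'https://some.company.com/ui/login' \
    --data-urlencode 'username=john.humphreys' \
    --data-urlencode "password=$PRESTO_PASSWORD" \
    --cookie-jar - --output /dev/null | awk '{print $7}' | tail -1)

  # Fetch the stats, optionally filtered by a jq expression argument.
  curl --silent 'https://some.company.com/ui/api/stats' \
    -H "Cookie: Presto-UI-Token=$cookie" | jq "${1:-.}"
}

# Usage:
#   export PRESTO_PASSWORD='...'
#   presto_stats                   # full stats JSON
#   presto_stats .runningQueries   # just one field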

macOS – Make the Dock Wait/Delay Longer Before Appearing

If the Dock at the bottom of your Mac gets in your way when you try to do quick actions, like using a horizontal scroll bar in a full-screen app, you can use this CLI setting to bump the delay up to a few seconds.

I find 3 seconds is enough to get most things done in that area of the screen, but short enough that using the Dock on purpose isn’t too painful.

defaults write com.apple.Dock autohide-delay -float 3; killall Dock
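
If you want to check the current value, or put the Dock back to its default behavior later, the standard defaults subcommands work:

defaults read com.apple.Dock autohide-delay                  # show the current delay
defaults delete com.apple.Dock autohide-delay; killall Dock  # restore the default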

I found this on Super User after digging around for a while -> https://superuser.com/a/406571.

Listing Supported Availability Zones (AZs) for Instance Types in AWS

Availability Zones

In Amazon Web Services (AWS), you generally spread your nodes over multiple availability zones for high availability. Unfortunately, not every instance type is available in every availability zone, and it is generally hard to know in advance which zones a given type is available in.

Error Types

If you are provisioning a single EC2 instance, or you are only provisioning EC2s in an Auto Scaling Group (ASG) in a single zone, you will obviously notice if you choose an incompatible zone for your instance type: it just won’t work.

It can be more insidious when you have an ASG spanning multiple zones, though. For example, our large-scale Airflow service runs in Kubernetes, and the main ASG spans 3 zones. Today, we ran out of IPs in two zones and realized that the third was not even being utilized. When hunting down why, we found this message in the “activity” tracker page for the ASG.

Launching a new EC2 instance. Status Reason: Your requested instance type (r5.2xlarge) is not supported in your requested Availability Zone (us-east-1e). Please retry your request by not specifying an Availability Zone or choosing us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1f. Launching EC2 instance failed.

This is a very helpful message, but it’s unfortunate that we had to do the wrong thing in order to get the supported zones list.

Getting the Correct Zones in Advance

You can use this AWS CLI (V2) command to check the list of zones supported for an instance type in advance.

% aws ec2 describe-instance-type-offerings --location-type availability-zone --filters="Name=instance-type,Values=r5.2xlarge" --region us-east-1 --output table
-------------------------------------------------------
|            DescribeInstanceTypeOfferings            |
+-----------------------------------------------------+
||               InstanceTypeOfferings               ||
|+--------------+--------------+---------------------+|
|| InstanceType |  Location    |    LocationType     ||
|+--------------+--------------+---------------------+|
||  r5.2xlarge  |  us-east-1f  |  availability-zone  ||
||  r5.2xlarge  |  us-east-1c  |  availability-zone  ||
||  r5.2xlarge  |  us-east-1b  |  availability-zone  ||
||  r5.2xlarge  |  us-east-1d  |  availability-zone  ||
||  r5.2xlarge  |  us-east-1a  |  availability-zone  ||
|+--------------+--------------+---------------------+|
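
If you just want the zone names (e.g., to feed into a script), the same command with a --query expression works; here’s a sketch for the same instance type:

% aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters="Name=instance-type,Values=r5.2xlarge" \
  --region us-east-1 \
  --query 'InstanceTypeOfferings[].Location' \
  --output text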

Sources

You can find more background in AWS’s documentation for the describe-instance-type-offerings command.

Understanding and Checking/Analyzing Your Docker Hub Rate Limit

We’ve been hitting Docker Hub rate limiting pretty hard lately in our EKS clusters. Here are some interesting things we learned:

  • The anonymous rate limit for Docker Hub is 100 pulls per IP address per six hours (note the w=21600-second window in the output below).
  • If you are in a private IP space and egress through NAT gateways, you are probably being rate limited on the public IPs of those gateways.
  • So, if you have 600 servers going through 6 gateways, you get 600 pulls (6 × 100), not 60,000 (600 × 100); obviously a massive difference.
  • In Kubernetes, you should specify an explicit image tag (tags are optional, so they’re easy to omit) and set imagePullPolicy: IfNotPresent so nodes pull images less frequently; see the sketch after this list.
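
As a sketch of that last bullet (hypothetical image and names), here’s a minimal pod spec that pins an explicit tag and only pulls when the image is missing from the node:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pinned-image-example
spec:
  containers:
  - name: app
    image: mycompany/myapp:1.4.2    # explicit tag, never :latest
    imagePullPolicy: IfNotPresent   # skip the registry if the node already has the image
EOF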

If you need to observe your servers and see how they’re doing against the rate limit, you can refer here -> https://www.docker.com/blog/checking-your-current-docker-pull-rate-limits-and-status/.

For anonymous requests, basically just run:

# Get an anonymous auth token scoped to pulls of the rate-limit preview repo
TOKEN=$(curl --silent "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

# HEAD request against a manifest; the response headers report your limits
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit

And you will get output like this, showing your rate limit (100) and how many pulls you have left (100 for me, as I haven’t pulled recently). The w=21600 is the window in seconds, i.e., six hours.

RateLimit-Limit: 100;w=21600
RateLimit-Remaining: 100;w=21600

Kubernetes PLEG Issues / Lots of Ephemeral Pods / Airflow

What is the Use Case?

We’ve been hosting a service for over a year now that basically deploys Apache Airflow over Kubernetes in a SaaS model. Each internal client/user gets an instance of their own, including a dedicated scheduler, web server, and namespace to run task pods in. Each team can run hundreds or thousands of parallel tasks on its instance, with every task scheduled on the central cluster as an individual pod.

We use EKS v1.16 on AWS. One interesting problem we have run into is that Airflow can create a ton (tens of thousands) of short-lived/ephemeral pods, often with very low resource requests.

This means that a node with low CPU/memory usage may have hundreds or thousands of pods scheduled on it back-to-back, as pods keep being created, run, and cleaned up at a rapid pace (which is very cool).

So, What is the Problem?

It turns out that, while CPU and memory can be very low on some nodes, the sheer act of creating/managing/destroying so many pods can cause issues in its own right. We use the Prometheus Operator in our Kubernetes clusters, and it started alerting us with KubeletPlegDurationHigh: “The Kubelet Pod Lifecycle Event Generator has a 99th percentile duration of 10 seconds on node <node-id>.”

What is the PLEG?

You can review this article to understand the Pod Lifecycle Event Generator (PLEG) more: https://developers.redhat.com/blog/2019/11/13/pod-lifecycle-event-generator-understanding-the-pleg-is-not-healthy-issue-in-kubernetes/. It is very helpful. I’ve extracted the useful bits here:

The PLEG module in kubelet (Kubernetes) adjusts the container runtime state with each matched pod-level event and keeps the pod cache up to date by applying changes.

The article also walks through the PLEG relist loop with a process diagram (the dotted red line); the original image is in the article under “Kubelet: Pod Lifecycle Event Generator (PLEG)”.

Monitoring the Issue

Assuming you have the Prometheus Operator installed and the relevant metrics/alerts, the query below charts PLEG relist latency nicely in graphical form. This helps you understand whether your mitigations are actually helping.

You don’t need a kubernetes_cluster label matcher in the query unless you’ve added one as an external label across multiple Prometheus instances (we query this from Thanos, which aggregates multiple Prometheuses).

Here’s the query in text form with that label removed, so you can copy-paste it more easily:

quantile(.95, kubelet_pleg_relist_latency_microseconds) / 1000000
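
One caveat: newer kubelets renamed this metric. If the query above returns nothing, an equivalent using the histogram that replaced the microseconds gauge (assuming your version exports kubelet_pleg_relist_duration_seconds) looks like this:

histogram_quantile(0.95, sum by (le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m])))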

Mitigating the Issue

There are numerous things you can do to help mitigate this issue:

  1. Add more nodes to the cluster / increase the minimum of the autoscaler range. More nodes = more distribution of pods = fewer PLEG issues, since they occur on a per-node basis.
  2. Monitor and find the threshold/count of pods where issues happen, then adjust the kubelet settings so a node can’t run that many pods (see the bootstrap sketch after this list). Generally, we only see PLEG issues when we pass 45 pods on a node *and* have lots of ephemeral pods. The threshold will vary with instance type and workload, I’m sure, but you can likely spot a trend and set a limit to help mitigate. This is a good solution because an explicit pod limit makes the Cluster Autoscaler (CA) scale up new nodes properly.
  3. Distribute pods better around the cluster. Kubernetes, when running lots of ephemeral pods, tends to hot-spot a bit and put more of these short-lived pods on a few nodes with less resource usage. In newer Kubernetes versions, you can use https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ to reduce hot-spotting and mitigate PLEG issues (and other issues, like Docker rate limiting); a sketch follows below. This really just helps you use your existing servers more optimally.
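
For point 2, on EKS the stock AMI’s bootstrap script accepts extra kubelet arguments, so you can cap pods per node in the worker node user data. A sketch (hypothetical cluster name; 45 is just the threshold we observed, tune it for your workload):

# In the EC2 user data of the worker nodes (standard EKS AMI):
/etc/eks/bootstrap.sh my-eks-cluster \
  --kubelet-extra-args '--max-pods=45'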
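
For point 3, here’s a minimal topology-spread sketch (hypothetical labels/image; the feature matured in versions newer than our 1.16, so check yours) that softly spreads matching pods across nodes:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: airflow-task                      # hypothetical label
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname    # spread across nodes
    whenUnsatisfiable: ScheduleAnyway      # soft constraint: still schedule even if skew is violated
    labelSelector:
      matchLabels:
        app: airflow-task
  containers:
  - name: app
    image: mycompany/airflow-task:1.0      # hypothetical image
EOF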

I’m sure there are ‘better’ ways to fix this, but we haven’t found them yet. I’ll circle back and update this if and when we find them.