Presto / Hive: Find Parquet Files Touched/Referenced by a Query/Predicate

We had a use case where we needed to find out which Parquet files were touched by a query/predicate, so that we could rewrite certain files in a special way to remove specific records. In this case, Presto was not mastering the data itself.

We found this awesome post on Stack Overflow -> https://stackoverflow.com/a/44011639/857994, which shows this pseudo-column:

select "$path" from table
This correctly shows you the Parquet file each row came from, which is awesome!  I also found an MR showing that work has been merged to add $file_size and $file_modified_time properties, which is even cooler.
So, newer versions of PrestoSQL have even more power here.
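Putting that together, here is a rough sketch of collecting the distinct Parquet files behind a given predicate using the presto-python-client (prestodb) package; the connection settings, table name, and predicate are placeholders, so adjust them for your cluster:

import prestodb  # pip install presto-python-client

# Placeholder connection settings; point these at your own coordinator/catalog/schema.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="someone",
    catalog="hive",
    schema="default",
)

cur = conn.cursor()
# "$path" gives the backing file for each row; DISTINCT collapses that to the file list.
cur.execute("""
    SELECT DISTINCT "$path"
    FROM my_table
    WHERE some_column = 'some_value'
""")
touched_files = [row[0] for row in cur.fetchall()]
print(touched_files)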


Presto Internal HTTPS / TLS Graceful Shutdown

Getting graceful shutdown to work on a TLS-secured Presto cluster can take a few tries.  The example script below should do it for you easily.

It extracts the private key and cert from your p12 file and then calls curl with them against the HTTPS endpoint.

Additional Notes:

  • You need to use your nodes’ proper DNS names that match the cert (e.g. *.app.company.com).
  • You need to specify the https protocol.
  • I use port 8321 on Presto, which is not standard, so you may want to update that.
# Import JKS file to p12.
keytool -importkeystore -srckeystore mycert.jks -srcstorepass SomePassword -srcalias myapp.mycompany.com -destalias myapp.mycompany.com -destkeystore mycert.p12 -deststoretype PKCS12 -deststorepass SomePassword

# Get key and cert out of p12.
openssl pkcs12 -in mycert.p12 -out mycert.key.pem -nocerts -nodes
openssl pkcs12 -in mycert.p12 -out mycert.crt.pem -clcerts -nokeys

# Add key and cert to curl call for graceful shutdown endpoint.
curl -E ./mycert.crt.pem --key ./mycert.key.pem -v -XPUT --data '"SHUTTING_DOWN"' -H "Content-type: application/json" https://ip-10-254-98-5.myapp.mycompany.com:8321/v1/info/state --insecure
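If you would rather make the same call from Python (say, from a scale-down script), a rough equivalent with the requests library, reusing the extracted .pem files, the non-standard 8321 port, and a hostname that matches the cert, would look like this:

import requests

response = requests.put(
    "https://ip-10-254-98-5.myapp.mycompany.com:8321/v1/info/state",
    json="SHUTTING_DOWN",  # serialized to '"SHUTTING_DOWN"' with a JSON content type
    cert=("./mycert.crt.pem", "./mycert.key.pem"),
    verify=False,          # mirrors curl's --insecure
)
response.raise_for_status()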

Surface Pro 3: Fix Touch Screen Dead Spots (Microsoft)

I’ve had a Surface Pro 3 for a long time and it has generally been amazing.  But twice now I have had to work through an issue where a few parts of the screen totally stop responding.

The occurrences were a year or more apart and I had forgotten the original solution, so I’m posting it here!

This Sony support page points you to the download (it’s from sony.com, so it’s legit) -> https://www.sony.com/electronics/support/downloads/W0009338

Run that Sony app; it calibrates in a CLI for a minute.  Then open the Windows 10 touch screen calibrator (you can find it by searching for “touch screen” in the search bar by the Windows logo).  Run that, and you should be good!


Airflow Task and DAG State/Status Enumerations

I’ve been working on a kind of “remote DAG watcher DAG” lately.  Basically, I want to render all the tasks of a DAG from another Airflow instance on my Airflow.  This way, I can watch a set of tasks across multiple Airflow deployments (some of which I may not control).

As part of this, I am having to use the (fairly bad) APIs; we use both the experimental API and this plugin (https://github.com/teamclairvoyant/airflow-rest-api-plugin).
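For reference, polling a single task’s state from a remote Airflow over the experimental API looks roughly like this.  This is only a sketch with requests; the host, DAG id, task id, and execution_date are placeholders, and you should double-check the endpoint path against your Airflow version:

from urllib.parse import quote

import requests

# Placeholders: the remote Airflow webserver and the task you want to watch.
base_url = "http://remote-airflow.example.com:8080"
dag_id = "my_dag"
task_id = "my_task"
execution_date = "2020-04-01T00:00:00+00:00"

# Airflow 1.10 experimental endpoint for a single task instance.
url = "{}/api/experimental/dags/{}/dag_runs/{}/tasks/{}".format(
    base_url, dag_id, quote(execution_date, safe=""), task_id)

response = requests.get(url)
response.raise_for_status()
print(response.json().get("state"))  # e.g. "queued", "running", "success", "failed"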

Anyway, the APIs don’t document task and DAG states, so I frequently find myself looking them up in the code.  Here’s a reference for ease:

https://github.com/apache/airflow/blob/1.10.5/airflow/utils/state.py

# -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
from __future__ import unicode_literals

from builtins import object


class State(object):
    """
    Static class with task instance states constants and color method to
    avoid hardcoding.
    """

    # scheduler
    NONE = None
    REMOVED = "removed"
    SCHEDULED = "scheduled"

    # set by the executor (t.b.d.)
    # LAUNCHED = "launched"

    # set by a task
    QUEUED = "queued"
    RUNNING = "running"
    SUCCESS = "success"
    SHUTDOWN = "shutdown"  # External request to shut down
    FAILED = "failed"
    UP_FOR_RETRY = "up_for_retry"
    UP_FOR_RESCHEDULE = "up_for_reschedule"
    UPSTREAM_FAILED = "upstream_failed"
    SKIPPED = "skipped"

    task_states = (
        SUCCESS,
        RUNNING,
        FAILED,
        UPSTREAM_FAILED,
        SKIPPED,
        UP_FOR_RETRY,
        UP_FOR_RESCHEDULE,
        QUEUED,
        NONE,
        SCHEDULED,
    )

    dag_states = (
        SUCCESS,
        RUNNING,
        FAILED,
    )

    state_color = {
        QUEUED: 'gray',
        RUNNING: 'lime',
        SUCCESS: 'green',
        SHUTDOWN: 'blue',
        FAILED: 'red',
        UP_FOR_RETRY: 'gold',
        UP_FOR_RESCHEDULE: 'turquoise',
        UPSTREAM_FAILED: 'orange',
        SKIPPED: 'pink',
        REMOVED: 'lightgrey',
        SCHEDULED: 'tan',
        NONE: 'lightblue',
    }

    @classmethod
    def color(cls, state):
        return cls.state_color.get(state, 'white')

    @classmethod
    def color_fg(cls, state):
        color = cls.color(state)
        if color in ['green', 'red']:
            return 'white'
        return 'black'

    @classmethod
    def finished(cls):
        """
        A list of states indicating that a task started and completed a
        run attempt. Note that the attempt could have resulted in failure or
        have been interrupted; in any case, it is no longer running.
        """
        return [
            cls.SUCCESS,
            cls.FAILED,
            cls.SKIPPED,
        ]

    @classmethod
    def unfinished(cls):
        """
        A list of states indicating that a task either has not completed
        a run or has not even started.
        """
        return [
            cls.NONE,
            cls.SCHEDULED,
            cls.QUEUED,
            cls.RUNNING,
            cls.SHUTDOWN,
            cls.UP_FOR_RETRY,
            cls.UP_FOR_RESCHEDULE
        ]
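For example, since those REST APIs return the state as a plain string (like "running" or "success"), and the constants above are those same strings, you can lean on State.finished() directly when deciding whether a remote task is done.  A minimal sketch:

from airflow.utils.state import State


def remote_task_is_done(state_str):
    # "success", "failed", and "skipped" are the finished states per the class above.
    return state_str in State.finished()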

Extract Airflow’s execution_date to a String From a Task

If you’re in an Airflow task, you can get the execution date, as a string, in the same format the APIs and other tools use, like this:

from airflow.operators.python_operator import PythonOperator


def save_exec_date(filename, **context):
    # execution_date is a pendulum datetime; write it out as an ISO-8601 string.
    with open(filename, 'w') as fp:
        fp.write(context["execution_date"].to_iso8601_string(extended=True))


save_exec_date_task = PythonOperator(
    task_id='save_exec_date_to_file',
    python_callable=save_exec_date,
    op_kwargs={
        'filename': 'exec_date.txt',
    },
    provide_context=True,
    dag=dag)
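If you later need that string back as a timezone-aware datetime, a minimal sketch using pendulum (the library Airflow 1.10 uses for its dates) and the same exec_date.txt filename would be:

import pendulum

# Read the saved ISO-8601 string back and parse it into a timezone-aware datetime.
with open('exec_date.txt') as fp:
    exec_date = pendulum.parse(fp.read().strip())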

Create a Date in Airflow’s Execution_Date Format (ISO with Time Zone)

If you are using Apache Airflow and you find a need to make a date in order to compare it to the Airflow execution date, here is a simple/clean way of doing it.

Airflow’s dates appear to be in the ISO 8601 standard format, with a time zone offset.  From their docs here (https://airflow.apache.org/docs/stable/timezone.html):

Support for time zones is enabled by default. Airflow stores datetime information in UTC internally and in the database. It allows you to run your DAGs with time zone dependent schedules. At the moment Airflow does not convert them to the end user’s time zone in the user interface. There it will always be displayed in UTC.

The execution_date will always be in UTC.  So, this piece of code should always give you the current time in the Airflow execution_date’s format:

from datetime import datetime, timezone
datetime.now(timezone.utc).isoformat('T')

Also, you should note that these dates appear in this format:

2020-04-01T00:41:15.926862+00:00

This is great because it means you can compare them as strings: the fields are most-significant-first, the time is 24-hour based, and since everything is in UTC the offsets match.
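For example, here’s a quick sketch of comparing the current time against an execution_date string (the execution_date value below is just the example from above):

from datetime import datetime, timezone

execution_date = "2020-04-01T00:41:15.926862+00:00"  # example value from above
now = datetime.now(timezone.utc).isoformat('T')

# Both strings are UTC, most-significant-first, and 24-hour based, so
# lexicographic comparison matches chronological comparison.
if now > execution_date:
    print("That execution_date is in the past.")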