Hive – Run With Local Map Reduce

Use Case

I was working to deploy hive for a new system.  I’ve used hive a fair bit but had never deployed it myself.  So, I went through some online instructions and ended up installing hadoop, configuring it, starting YARN (which I’ve also used in the past), and then installing hive to run against it.

I was intending to run against AWS s3 rather than HDFS, so I realized I didn’t need the DFS.  Then I thought harder and realized that it would be nice to run without YARN as well.  A colleague pointed out that in his deployment, hive did the work locally and he ran nothing aside from hiveserver2 and the hive metastore.  He was running multiple instances of hiveserver2 and the metastore, and they wouldn’t cooperate on any map-reduce tasks, but since he didn’t really want people using that execution engine anyway, that was just fine (and it is for me too!).

I didn’t realize this was an option… so I googled around and found this link: https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-Hive,Map-ReduceandLocal-Mode (scroll to the heading Hive, Map-Reduce and Local-Mode).

Local Map Reduce

It turns out that the property mapred.job.tracker is used to control whether hive executes map reduce locally or not.  It supposedly defaults to “local”, meaning that if you don’t override it, hive will actually execute in local mode by default.

Given how unevenly the hive documentation is maintained, though, this is a little hard to rely on.  This page does say it:

https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration

But if you look in your hive-default.xml.template file, you will not find this value.  Also, if you run the “set” command over JDBC, hive will list all of its configuration values, and you won’t see it there either!
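A quick way to check that for yourself is to dump the configuration over JDBC and grep for the property.  This is just a sketch; it assumes beeline is on your path and hiveserver2 is listening on the default port:

beeline -u jdbc:hive2://localhost:10000 -e "set;" | grep mapred.job.tracker

Getting no hit there is exactly the behavior described above.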

I pulled down the hive source code though, and in the POM file, you can find an XML block which clearly sets a system property mapred.job.tracker to “local” (I’m kind of surprised it was in the POM).

So, the system property defaults to “local”. I couldn’t find any further references to it in the hive code, but it is apparently used when hive interacts with hadoop; so I suppose hadoop picks up the property and uses it in a way that isn’t obvious from the hive side.

So… you’ll run locally by default as long as you don’t add extra configuration that overrides it (which I did initially when following some other tutorial).
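If you’d rather not rely on a default you can’t see, you can pin it explicitly in hive-site.xml.  This is only a sketch (and technically redundant, since “local” is already the default), but it documents the intent for the next person:

<property>
  <name>mapred.job.tracker</name>
  <value>local</value>
</property>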

S3 Eventual Consistency

Consistent Distributed File Systems

Historically, I’ve used standard HDFS, MapR’s version of HDFS (MapR-FS), and ADLS (Azure’s data lake service).  All of these behave very much like you would expect a local file system to.  If you write files and another process lists files, it will immediately see them and be able to use them without issue.

Amazon s3 File System Issues

I was surprised when I started learning about Amazon s3 after using all of those other file systems.  I understand that s3 is an object store… similar to Azure Blob Storage.  Yet it also seems to be the main data lake solution on AWS.

Maybe it’s just because I’m new and am missing something… but there doesn’t seem to be any AWS version of ADLS.

The s3 storage service is eventually consistent.  This means that if you run Spark or similar tools directly against it, they will likely produce incorrect results or fail.  Multiple tasks write files in parallel and then list them, and the listing won’t necessarily reflect the fully up-to-date view of the storage.  So, a job may write 10 files, list the directory, and see only 5 of them, etc.
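To make that concrete, here’s a sketch of the kind of sequence that gets jobs into trouble (the bucket and file names are made up):

hadoop fs -put part-00000.csv s3a://my-bucket/my-table/
hadoop fs -put part-00001.csv s3a://my-bucket/my-table/
hadoop fs -ls s3a://my-bucket/my-table/

The listing in the last step is not guaranteed to include every file that was just written, which is exactly what breaks frameworks that write files and then list the directory to find out what they wrote.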

I came across a very good article describing this in detail here: https://www.opendoor.com/w/blog/why-s3guard-with-s3-as-a-filesystem-spark.

The TLDR is that you have to use a consistency layer between your big data frameworks and s3 to ensure they function well.  You can confirm this by reading the short hadoop documentation site here -> https://wiki.apache.org/hadoop/AmazonS3.

Note that the first article recommends S3Guard, which is backed by DynamoDB, but there may be other options (e.g. EMR has its own way of dealing with this).
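For what it’s worth, turning S3Guard on is mostly a matter of pointing the s3a connector at a DynamoDB metadata store in core-site.xml.  The property names below come from the hadoop s3a documentation; the table name and region are placeholders, and this is only a sketch rather than a full S3Guard setup:

<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table</name>
  <value>my-s3guard-table</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.region</name>
  <value>us-east-1</value>
</property>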

Determine Compatibility of hadoop-aws and aws-java-sdk-bundle JARs

When you’re integrating hadoop and other big-data frameworks with AWS s3, you will quickly run into a situation where you need to include the hadoop-aws and aws-java-sdk-bundle JARs on your classpath.

Unfortunately, these JARs are versioned separately and it is hard to figure out which combinations are compatible.  The hadoop-aws JAR has to match your hadoop version exactly, so that one is easy.
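As for actually getting them onto hive’s classpath, the simplest option I know of is to drop both JARs into hive’s lib directory.  This is just a sketch; the version numbers here are only examples, and the next section covers how to pick the right ones:

cp hadoop-aws-2.9.2.jar aws-java-sdk-bundle-1.11.199.jar $HIVE_HOME/lib/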

Determining the Right Version

  1. Check your hadoop version.
  2. Get the hadoop-aws.jar with the same exact version.
  3. Go to the Maven repository page for that exact version of hadoop-aws and look at its compile dependencies, as shown in the sketch below.  E.g. at https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.9 you can see the SDK dependency is com.amazonaws » aws-java-sdk-bundle 1.11.199.
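So, taking the 2.9 line from the example above, the matching pair would look roughly like this in a POM.  This is a sketch only; I’m assuming a 2.9.x release pairs with the same 1.11.199 bundle shown on that page, so double-check the page for your exact version:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>2.9.2</version>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>1.11.199</version>
</dependency>

If you’re not building with Maven, the same pair of version numbers tells you which two JAR files to download and put on the classpath as described above.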

Hive Server 2 – Required field ‘serverProtocolVersion’ is unset!

Issue Context and Error

I have been working to install hive server 2 in order to work with Presto, among other things.  I wanted to ensure I had Hive’s JDBC interface (on port 10000) working well, as I need it so users can easily submit partition repair queries (msck repair table) and similar things.  Unfortunately, when I went to connect over JDBC, I got this error (a small part of a huge stack trace):

Required field 'serverProtocolVersion' is unset!

The Solution

I think if you carefully read the full stack trace, you’ll see something about user impersonation… but I missed it. I actually figured it out by increasing the logging level when running hive server 2. You can do that like this:

./hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=DEBUG,console

Once I did this, I clearly saw this error:

2019-06-06T13:53:13,183  WARN [HiveServer2-Handler-Pool: Thread-36] thrift.ThriftCLIService: Error opening session:
org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate centos

Googling this quickly led me to this Stack Overflow answer: https://stackoverflow.com/a/50753233/857994. The proposed solution there is to add this entry to your hive-site.xml:

<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value> 
</property>

After that, everything works great :).
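For context, hive.server2.enable.doAs=false means hiveserver2 runs queries as its own service user instead of impersonating the connecting user, which is why the proxy-user error goes away.  A quick end-to-end check of the JDBC path (a sketch; the host and port are just the defaults used above) is:

beeline -u jdbc:hive2://localhost:10000 -e "show databases;"

Once that works, the msck repair table queries mentioned at the start go through the same interface.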