Impala and Kerberos

Listen:
First of all, Impala is beta software and has some limitations. Stay tuned and test it; you'll see it can change your BI world dramatically.

What is Impala? 
Impala provides fast, interactive SQL queries directly on your Apache Hadoop data stored in HDFS or HBase. In addition to using the same unified storage platform, Impala also uses the same metadata, SQL syntax (Hive SQL), ODBC driver and user interface (Hue Beeswax) as Apache Hive. This provides a familiar and unified platform for batch-oriented or real-time queries.
(https://ccp.cloudera.com/display/IMPALA10BETADOC/Introducing+Cloudera+Impala)
You can build Impala from source (https://github.com/cloudera/impala) or grab the packages via yum on a RHEL / CentOS 6.x server. Impala doesn't support RHEL / CentOS releases prior to 6, since most of Impala is written in C++.

I chose the rpm version for this article, but a version compiled from source will work in the same manner. To grab Impala directly via yum, set up a new repository:

#> cat /etc/yum.repos.d/impala.repo
[cloudera-impala]
name=Impala
baseurl=http://beta.cloudera.com/impala/redhat/6/x86_64/impala/0/
gpgkey = http://beta.cloudera.com/impala/redhat/6/x86_64/impala/RPM-GPG-KEY-cloudera
gpgcheck = 1

and install Impala and all needed libraries via yum:

yum install impala impala-shell cyrus-sasl-devel cyrus-sasl-gssapi gcc-c++ gcc c++ python-setuptools -y && easy_install sasl

You should use the newest JDK from Oracle, and you have to install it on every node of your cluster; at the time of writing, jdk-6u37-linux-x64-rpm.bin was the current release. Note that you have to install the JDK after you have installed Impala via yum, since the dependencies pull in OpenJDK as well. To avoid running on OpenJDK, point your system(s) via alternatives to the release you want to use:

alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 20000
alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 20000


To be sure you're running the JDK you've just installed, check it:

java -version
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)


One of the things that can go wrong are missing libraries; they are dynamically linked from impalad and not present in the default library paths. Check with ldd and link the missing libraries into /usr/lib64/. In my case I did:

ln -s /usr/java/jdk1.6.0_37/jre/lib/amd64/server/libjvm.so /usr/lib64/libjvm.so
ln -s /usr/java/jdk1.6.0_37/jre/lib/amd64/libjsig.so /usr/lib64/libjsig.so
ln -s /usr/lib/impala/lib/libhdfs.so.0.0.0 /usr/lib64/libhdfs.so.0.0.0


Afterwards, run ldd again (ldd /usr/lib/impala/sbin/impalad) to verify that no libraries are still reported as missing.
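A quick way to list only the unresolved libraries, assuming impalad lives under /usr/lib/impala/sbin as in the packaged install:

```shell
# Show only the dynamically linked libraries ldd could not resolve;
# no output means all libraries were found (grep then exits nonzero)
ldd /usr/lib/impala/sbin/impalad | grep "not found"
```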

Copy your hive, hdfs and hbase config files into the config directory of Impala and create a log4j.properties file within $IMPALA_HOME/conf/:

log.threshold=INFO
main.logger=FA
impala.root.logger=${log.threshold},${main.logger}
log4j.rootLogger=${impala.root.logger}
log.dir=/var/log/impalad
log.file=impalad.INFO
log4j.appender.FA=org.apache.log4j.FileAppender
log4j.appender.FA.File=${log.dir}/${log.file}
log4j.appender.FA.layout=org.apache.log4j.PatternLayout
log4j.appender.FA.layout.ConversionPattern=%p%d{MMdd HH:mm:ss.SSS'000'} %t %c] %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n


The config directory within Impala's home should contain the following files. To determine which home directory the impala user uses, check it with "echo ~impala".

ll /usr/lib/impala/conf/
total 24
-rw-r--r-- 1 root root 1243 Dec 10 14:59 core-site.xml
-rw-r--r-- 1 root root 4596 Sep 2 09:35 hdfs-site.xml
-rw-r--r-- 1 root root 1157 Dec 10 10:36 hive-site.xml
-rw------- 1 impala impala 594 Dec 11 12:29 impala.keytab
-rw-r--r-- 1 root root 647 Dec 11 12:31 log4j.properties


Sync the content of the directory to all other nodes in your cluster.

Kerberos integration
If you use the RHEL Kerberos KDC packages you have to tweak your principals. From RHEL 4 on, principals get a default renew_lifetime of zero. That means you can obtain a renewable ticket, but you can't actually renew it.

To solve this, modify the krbtgt principal as well as all other principals that should be able to renew their tickets:
kadmin.local: modprinc -maxrenewlife 1day krbtgt/ALO.ALT@ALO.ALT
kadmin.local: addprinc -randkey -maxrenewlife 1day +allow_renewable impala/hadoop1.alo.alt@ALO.ALT
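To verify that the new lifetime really took effect, you can read the principals back; a quick check, assuming kadmin.local is available on the KDC:

```shell
# Print the renewable lifetime of the modified principals;
# "Maximum renewable life: 0 days" would mean renewal is still disabled
kadmin.local -q "getprinc krbtgt/ALO.ALT@ALO.ALT" | grep "renewable life"
kadmin.local -q "getprinc impala/hadoop1.alo.alt@ALO.ALT" | grep "renewable life"
```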

Export the keytab (xst -norandkey -k impala.keytab impala/hadoop1.alo.alt@ALO.ALT HTTP/hadoop1.alo.alt@ALO.ALT), place it in $IMPALA_HOME/conf and obtain a renewable ticket with:
sudo -u impala kinit -r 1day -k -t /usr/lib/impala/conf/impala.keytab impala/hadoop1.alo.alt@ALO.ALT
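Whether the obtained ticket is really renewable can be checked with klist; in the usual MIT Kerberos output a renewable ticket carries a "renew until" timestamp:

```shell
# A renewable ticket shows a "renew until" line in klist;
# if the line is missing, the principal has no renewable lifetime
sudo -u impala klist | grep "renew until"
```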

I wrote a simple start script which checks that everything works as expected and starts statestored as well as impalad on your server:

#!/bin/bash
CONF=/usr/lib/impala/conf
USER=impala
HOST=`hostname`
REALM=ALO.ALT
export GLOG_minloglevel=0
export GLOG_logbuflevel=-1
export GLOG_log_dir=/var/log/impala
export GLOG_max_log_size=200

mkdir -p /var/log/impala
chown -R impala: /var/log/impala
# obtain a new ticket
sudo -u impala kinit -r 1day -k -t $CONF/$USER.keytab $USER/$HOST@$REALM
#start it up
statestored -state_store_port=24000 -enable_webserver=true -webserver_port=25010 -log_filename=impala-state-store -principal=$USER/$HOST@$REALM -keytab_file=$CONF/impala.keytab &

impalad -state_store_host=hadoop1.alo.alt -nn=hadoop1.alo.alt -nn_port=9000 -hostname=hadoop1.alo.alt -ipaddress=192.168.56.101 -enable_webserver=true -webserver_port=25000 -principal=$USER/$HOST@$REALM -keytab_file=$CONF/impala.keytab -kerberos_ticket_life=36000 -log_filename=impala &


To check that everything is running well, you can now point your browser to the configured web interfaces:
statestore: http://<statestore-server>:25010
impalad: http://<impala-server>:25000

Both services deliver a bunch of monitoring features; for example, you can grab metrics from the /metrics endpoint.
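The same endpoints can be scraped from the command line, for example to feed a monitoring system; a sketch assuming the ports configured in the start script above (the hostname is an example):

```shell
# Fetch the metrics pages of statestore and impalad
curl -s http://hadoop1.alo.alt:25010/metrics
curl -s http://hadoop1.alo.alt:25000/metrics
```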

Using impala-shell with kerberos
To use impala-shell with Kerberos, first get a valid ticket for your user and invoke the shell with impala-shell -k. Note that you have to install python sasl on all clients (easiest via easy_install sasl).
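Before connecting, it may be worth checking on every client that the sasl binding is really importable; a small sanity check:

```shell
# Exits 0 only if the python sasl module can be imported
python -c "import sasl" && echo "sasl OK"
```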

[~]$ impala-shell -k
Using service name 'impala' for kerberos
Welcome to the Impala shell. Press TAB twice to see a list of available commands.

Copyright (c) 2012 Cloudera, Inc. All rights reserved.

(Build version: Impala v0.3 (3cb725b) built on Fri Nov 23 13:51:59 PST 2012)
[Not connected] > connect hadoop1:21000
[hadoop1:21000] > show tables
hbase_test
hbase_test2
hivetest1
hivetest2
[hadoop1:21000] >

