Note (2025): The commands, package names and versions in this article describe an Impala 0.3 beta setup on RHEL/CentOS 6 with Oracle JDK 6 and Cloudera’s early repos. Modern Impala deployments use different packaging, Java versions and security defaults. Use this only for maintaining or understanding legacy CDH-era clusters.
What Impala is (in this historical context)
Impala provides fast, interactive SQL directly on data stored in Apache Hadoop, primarily HDFS and HBase. It reuses:
- The Hive Metastore and table metadata
- Hive-compatible SQL syntax
- ODBC/JDBC drivers and UI components (e.g. Hue's Beeswax in the early days)
This made it possible to run both batch-oriented Hive queries and low-latency Impala queries on the same data, using a shared schema and security model.
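For example, a table created through Hive was immediately usable from Impala, with no copying or re-declaration. A sketch of the round trip (the table name is hypothetical, and early releases required an explicit refresh in impala-shell before tables created in Hive became visible; the exact refresh command varied across releases):
hive -e "CREATE TABLE web_logs (ts STRING, url STRING)"
impala-shell -k
[Not connected] > connect hadoop1:21000
[hadoop1:21000] > refresh
[hadoop1:21000] > select count(*) from web_logs;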
Historical install outline on RHEL/CentOS 6
In 2013, you could either build Impala from source or install beta packages from Cloudera’s repository on RHEL/CentOS 6. A minimal example repo file looked like:
# /etc/yum.repos.d/impala.repo
[cloudera-impala]
name=Impala
baseurl=http://beta.cloudera.com/impala/redhat/6/x86_64/impala/0/
gpgkey=http://beta.cloudera.com/impala/redhat/6/x86_64/impala/RPM-GPG-KEY-cloudera
gpgcheck=1
After adding the repo you would install Impala and dependencies:
yum install -y \
impala impala-shell \
cyrus-sasl-devel cyrus-sasl-gssapi \
gcc-c++ gcc \
python-setuptools
easy_install sasl
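impala-shell's Kerberos support depends on those Python SASL bindings, so a quick import check confirms the easy_install step worked:
python -c "import sasl" && echo "sasl bindings OK"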
The early beta builds required a recent Oracle JDK on each node (for example jdk-6u37-linux-x64-rpm.bin). Because the Impala RPMs would also pull in OpenJDK as a dependency, you had to point the system to the Oracle JDK using the alternatives mechanism:
alternatives --install /usr/bin/java java \
/usr/java/latest/jre/bin/java 20000
alternatives --install /usr/bin/javaws javaws \
/usr/java/latest/jre/bin/javaws 20000
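If OpenJDK had already been registered with a higher priority, the Oracle JDK could still be selected explicitly:
alternatives --set java /usr/java/latest/jre/bin/java
# or choose interactively from all registered candidates:
alternatives --config java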
A quick check confirmed the expected JDK version:
java -version
java version "1.6.0_37"
Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)
Checking native libraries for impalad
Impala’s daemon impalad links against several native libraries (JVM, HDFS, etc.). If any were missing from the standard library paths, ldd would report them as not found:
ldd /usr/lib/impala/sbin/impalad
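Missing libraries show up as "not found" entries; the output looked roughly like this (illustrative, abridged):
        libjvm.so => not found
        libjsig.so => not found
        libhdfs.so.0.0.0 => not found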
A typical workaround was to create symlinks into /usr/lib64/:
ln -s /usr/java/jdk1.6.0_37/jre/lib/amd64/server/libjvm.so /usr/lib64/libjvm.so
ln -s /usr/java/jdk1.6.0_37/jre/lib/amd64/libjsig.so /usr/lib64/libjsig.so
ln -s /usr/lib/impala/lib/libhdfs.so.0.0.0 /usr/lib64/libhdfs.so.0.0.0
Once ldd showed all libraries resolved, Impala’s native components could start correctly.
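An alternative to system-wide symlinks was to extend the loader search path in the environment that starts the daemons instead; a sketch using the same paths as above:
export LD_LIBRARY_PATH=/usr/java/jdk1.6.0_37/jre/lib/amd64/server:/usr/java/jdk1.6.0_37/jre/lib/amd64:/usr/lib/impala/lib:$LD_LIBRARY_PATH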
Configuring Impala
Impala reuses the existing Hadoop and Hive configuration. In a simple setup you would copy the core Hadoop and Hive configs into Impala’s configuration directory:
# Example IMPALA_HOME
IMPALA_HOME=/usr/lib/impala
cp /etc/hadoop/conf/core-site.xml $IMPALA_HOME/conf/
cp /etc/hadoop/conf/hdfs-site.xml $IMPALA_HOME/conf/
cp /etc/hive/conf/hive-site.xml $IMPALA_HOME/conf/
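The copied hive-site.xml had to point Impala at the shared Hive metastore; a minimal sketch of the relevant property (host and port are placeholders for your metastore service):
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop1.alo.alt:9083</value>
</property>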
Additionally, you needed a log4j.properties in $IMPALA_HOME/conf/:
log.threshold=INFO
main.logger=FA
impala.root.logger=${log.threshold},${main.logger}
log4j.rootLogger=${impala.root.logger}
log.dir=/var/log/impala
log.file=impalad.INFO
log4j.appender.FA=org.apache.log4j.FileAppender
log4j.appender.FA.File=${log.dir}/${log.file}
log4j.appender.FA.layout=org.apache.log4j.PatternLayout
log4j.appender.FA.layout.ConversionPattern=%p%d{MMdd HH:mm:ss.SSS'000'} %t %c] %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
A typical directory listing for /usr/lib/impala/conf looked like:
ll /usr/lib/impala/conf/
total 24
-rw-r--r-- 1 root root 1243 Dec 10 14:59 core-site.xml
-rw-r--r-- 1 root root 4596 Sep 2 09:35 hdfs-site.xml
-rw-r--r-- 1 root root 1157 Dec 10 10:36 hive-site.xml
-rw------- 1 impala impala 594 Dec 11 12:29 impala.keytab
-rw-r--r-- 1 root root 647 Dec 11 12:31 log4j.properties
You would then sync this directory to all Impala nodes. To confirm the home directory used by the impala user:
echo ~impala
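Syncing the directory could be as simple as an rsync loop over the remaining nodes (hostnames are examples):
for host in hadoop2.alo.alt hadoop3.alo.alt; do
  rsync -av /usr/lib/impala/conf/ "$host:/usr/lib/impala/conf/"
done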
Kerberos integration (historical pattern)
In a Kerberos-secured cluster, Impala runs with a service principal and keytab. One nuance with the RHEL KDC packages at the time was that principals were created with a default renew_lifetime of zero: tickets were nominally renewable but could never actually be renewed.
To fix that, you had to adjust the krbtgt principal and the Impala service principal to have a non-zero max renew lifetime:
kadmin.local: modprinc -maxrenewlife 1day krbtgt/ALO.ALT@ALO.ALT
kadmin.local: addprinc -randkey -maxrenewlife 1day +allow_renewable \
impala/hadoop1.alo.alt@ALO.ALT
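getprinc confirms the change took effect; look for a non-zero "Maximum renewable life" in its output:
kadmin.local: getprinc impala/hadoop1.alo.alt@ALO.ALT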
Next, export the keytab and place it into $IMPALA_HOME/conf:
kadmin.local: xst -norandkey \
-k impala.keytab \
impala/hadoop1.alo.alt@ALO.ALT \
HTTP/hadoop1.alo.alt@ALO.ALT
mv impala.keytab /usr/lib/impala/conf/impala.keytab
chown impala:impala /usr/lib/impala/conf/impala.keytab
chmod 600 /usr/lib/impala/conf/impala.keytab
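klist can list the keytab entries to confirm both principals were exported:
klist -kt /usr/lib/impala/conf/impala.keytab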
To obtain a renewable ticket for the impala service:
sudo -u impala kinit -r 1day \
-k -t /usr/lib/impala/conf/impala.keytab \
impala/hadoop1.alo.alt@ALO.ALT
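Running klist as the impala user should then show a "renew until" timestamp on the krbtgt ticket; if that line is missing, the renew-lifetime fix above did not take effect:
sudo -u impala klist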
Starting statestored and impalad (legacy script)
Before proper service scripts were integrated, a simple start script could wire together environment variables, Kerberos tickets and the Impala daemons:
CONF=/usr/lib/impala/conf
USER=impala        # service user that owns the keytab
HOST=$(hostname)
REALM=ALO.ALT      # Kerberos realm
export GLOG_minloglevel=0
export GLOG_logbuflevel=-1
export GLOG_log_dir=/var/log/impala
export GLOG_max_log_size=200
mkdir -p /var/log/impala
chown -R impala: /var/log/impala
# Obtain a fresh ticket for the impala service
sudo -u impala kinit -r 1day -k -t "$CONF/$USER.keytab" "$USER/$HOST@$REALM"
# Start statestored
statestored \
-state_store_port=24000 \
-enable_webserver=true \
-webserver_port=25010 \
-log_filename=impala-state-store \
-principal="$USER/$HOST@$REALM" \
-keytab_file="$CONF/impala.keytab" &
# Start impalad
impalad \
-state_store_host=hadoop1.alo.alt \
-nn=hadoop1.alo.alt \
-nn_port=9000 \
-hostname=hadoop1.alo.alt \
-ipaddress=192.168.56.101 \
-enable_webserver=true \
-webserver_port=25000 \
-principal="$USER/$HOST@$REALM" \
-keytab_file="$CONF/impala.keytab" \
-kerberos_ticket_life=36000 \
-log_filename=impala &
With this script in place, starting Impala meant launching the statestore and at least one impalad with Kerberos authentication enabled.
Web UIs and metrics
Once running, both services exposed simple web UIs with metrics:
- Statestore: http://<statestore-server>:25010
- Impalad: http://<impala-server>:25000
Both endpoints provided basic monitoring pages and a /metrics endpoint that could be scraped for statistics.
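For ad-hoc monitoring you could simply pull them with curl (hostnames from the setup above):
curl -s http://hadoop1.alo.alt:25010/metrics
curl -s http://hadoop1.alo.alt:25000/metrics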
Using impala-shell with Kerberos
On Kerberos-enabled clusters, impala-shell used Kerberos tickets for authentication. The process was:
- Install the Python SASL bindings on the client host, for example via easy_install sasl.
- Obtain a Kerberos ticket for your user account (kinit).
- Run impala-shell with the -k option.
Example session:
[~]$ impala-shell -k
Using service name 'impala' for kerberos
Welcome to the Impala shell. Press TAB twice to see a list of available commands.
Copyright (c) 2012 Cloudera, Inc. All rights reserved.
(Build version: Impala v0.3 (3cb725b) built on Fri Nov 23 13:51:59 PST 2012)
[Not connected] > connect hadoop1:21000
[hadoop1:21000] > show tables;
hbase_test
hbase_test2
hivetest1
hivetest2
[hadoop1:21000] >
Today this is mainly interesting as a snapshot of how early Impala clusters were brought up and secured, but the overall concepts—service principals, keytabs, ticket renewal and client-side SASL—remain relevant in modern Kerberized data platforms.