I help teams fix systemic engineering issues across distributed systems, data platforms, and streaming pipelines.
→ See how I work with teams.
Note (2025): This article describes an older CDH/Impala/HBase pattern based on Hive’s HBaseStorageHandler. It is useful if you still maintain legacy Impala-on-HBase workloads or need to understand how such systems were wired. For new designs you will usually land data in Parquet or Iceberg tables and query them with Impala, Trino or Spark instead of reading directly from HBase.
Context: Impala, Hive Metastore and HBase
Impala uses the Hive Metastore Service to discover tables and their underlying storage. That means you can expose data sitting in HDFS, object storage or even HBase as regular tables and query them from Impala.
In this article we use Hive’s HBase handler (HBaseStorageHandler) to connect Hive and HBase, and then query the same data from Impala. If you have not seen this pattern before, think of it as:
- HBase table → mapped as a Hive external table via HBaseStorageHandler
- Hive external table → visible to Impala via the shared Hive Metastore
- Impala → can run SQL against the HBase data with low latency, as long as the mapping is correct
In an older post I described how to connect Hive and HBase in more detail. That tutorial still applies to the basics of the HBase handler, so if you have never configured it before, read that first and then come back here.
Demo scenario: US census ZIP code incomes
For the demo we use public US census data (circa 2000) that contains ZIP codes and income values. The goal:
- Load the raw CSV into HBase.
- Expose the HBase table as a Hive external table via HBaseStorageHandler.
- Run queries against the data from Impala.
The scripts for this demo are available in my GitHub repository.
Sample CSV data
The raw CSV looks roughly like this:
cat DEC_00_SF3_P077_with_ann_noheader.csv
8600000US00601,00601,006015-DigitZCTA,0063-DigitZCTA,11102
8600000US00602,00602,006025-DigitZCTA,0063-DigitZCTA,12869
8600000US00603,00603,006035-DigitZCTA,0063-DigitZCTA,12423
8600000US00604,00604,006045-DigitZCTA,0063-DigitZCTA,33548
8600000US00606,00606,006065-DigitZCTA,0063-DigitZCTA,10603
Each row contains:
- id: a unique key (we will use this as the HBase row key)
- zip: ZIP code
- desc1, desc2: descriptive fields
- income: income value from the census dataset
Step 1 – Create the HBase table
Create a table in HBase with a column family for each group of attributes. The identifier becomes the row key, so it does not need a family of its own:
create 'zipcode_hive', 'zip', 'desc', 'income'
In this example the row key is the census identifier (e.g. 8600000US00601), and we keep ZIP, descriptions and income in separate column families.
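Before wiring up Hive, you can sanity-check the new table straight from the HBase shell. A quick smoke test; the row key and value below are throwaways:
put 'zipcode_hive', 'smoketest', 'zip:zip', '00000'
get 'zipcode_hive', 'smoketest'
deleteall 'zipcode_hive', 'smoketest'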
Step 2 – Expose HBase via a Hive external table
Next we create an external table in Hive that maps to the HBase table using HBaseStorageHandler and the hbase.columns.mapping property:
CREATE EXTERNAL TABLE ZIPCODE_HBASE (
key STRING,
zip STRING,
desc1 STRING,
desc2 STRING,
income STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" = ":key,zip:zip,desc:desc1,desc:desc2,income:income"
)
TBLPROPERTIES (
"hbase.table.name" = "zipcode_hive"
);
Here we bind each Hive column to a corresponding HBase column:
- :key → HBase row key
- zip:zip → column zip in family zip
- desc:desc1, desc:desc2 → two columns in the desc family
- income:income → column income in family income
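To double-check the wiring, DESCRIBE FORMATTED should print the storage handler class and the serde properties, so a mapping typo shows up before you start querying:
DESCRIBE FORMATTED ZIPCODE_HBASE;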
Step 3 – Load CSV data into HBase with Pig
We now copy the CSV file into HDFS and use Pig to populate the HBase table. You could also use a staging table in Hive (a sketch of that variant follows the sanity check below), but Pig keeps the example compact.
cat PopulateData.pig
copyFromLocal DEC_00_SF3_P077_with_ann_noheader.csv ziptest.csv
A = LOAD 'ziptest.csv'
USING PigStorage(',')
AS (id:chararray,
zip:chararray,
desc1:chararray,
desc2:chararray,
income:chararray);
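-- HBaseStorage treats the first field (id) as the row key,
-- which is why it is absent from the column list below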
STORE A INTO 'hbase://zipcode_hive'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'zip:zip,desc:desc1,desc:desc2,income:income'
);
The job takes a short time to run and then the data is available directly in HBase. A quick sanity check:
scan 'zipcode_hive', LIMIT => 2
ROW COLUMN+CELL
8600000US00601 column=desc:desc1, timestamp=..., value=006015-DigitZCTA
8600000US00601 column=desc:desc2, timestamp=..., value=0063-DigitZCTA
8600000US00601 column=income:income, timestamp=..., value=11102
8600000US00601 column=zip:zip, timestamp=..., value=00601
8600000US00602 column=desc:desc1, timestamp=..., value=006025-DigitZCTA
8600000US00602 column=desc:desc2, timestamp=..., value=0063-DigitZCTA
8600000US00602 column=income:income, timestamp=..., value=12869
8600000US00602 column=zip:zip, timestamp=..., value=00602
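As mentioned above, the same load also works without Pig: stage the CSV in a plain Hive table and insert through the storage handler. A minimal sketch, with a hypothetical staging table name and HDFS path:
CREATE TABLE zipcode_staging (
  id STRING,
  zip STRING,
  desc1 STRING,
  desc2 STRING,
  income STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- '/tmp/ziptest.csv' is a placeholder for wherever the CSV landed in HDFS
LOAD DATA INPATH '/tmp/ziptest.csv' INTO TABLE zipcode_staging;

-- this INSERT writes through HBaseStorageHandler into zipcode_hive
INSERT INTO TABLE ZIPCODE_HBASE
SELECT id, zip, desc1, desc2, income FROM zipcode_staging;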
Step 4 – Query the same data from Impala
Because Impala shares the Hive Metastore, the ZIPCODE_HBASE table we just created becomes visible to Impala as soon as you run INVALIDATE METADATA or an equivalent refresh.
A simple Impala session might look like this:
-- connect to your Impala daemon
Using service name 'impala' for kerberos
Connected to hadoop1:21000
Server version: impalad version 1.0 RELEASE (build d1bf0d1dac339af3692ffa17a5e3fdae0aed751f)
INVALIDATE METADATA ZIPCODE_HBASE;
-- basic sanity query
SELECT * FROM ZIPCODE_HBASE LIMIT 4;
+----------------+------------------+----------------+--------+-------+
| key | desc1 | desc2 | income | zip |
+----------------+------------------+----------------+--------+-------+
| 8600000US00601 | 006015-DigitZCTA | 0063-DigitZCTA | 11102 | 00601 |
| 8600000US00602 | 006025-DigitZCTA | 0063-DigitZCTA | 12869 | 00602 |
| 8600000US00603 | 006035-DigitZCTA | 0063-DigitZCTA | 12423 | 00603 |
| 8600000US00604 | 006045-DigitZCTA | 0063-DigitZCTA | 33548 | 00604 |
+----------------+------------------+----------------+--------+-------+
Returned 4 row(s) in 0.42s
Example: simple income filter
Now we can run “normal” SQL against the census income data even though it lives in HBase:
SELECT *
FROM ZIPCODE_HBASE
WHERE income BETWEEN '1000' AND '5000'
ORDER BY income DESC
LIMIT 20;
+----------------+------------------+----------------+--------+-------+
| key | desc1 | desc2 | income | zip |
+----------------+------------------+----------------+--------+-------+
| 8600000US64138 | 641385-DigitZCTA | 6413-DigitZCTA | 49995 | 64138 |
| 8600000US12477 | 124775-DigitZCTA | 1243-DigitZCTA | 49993 | 12477 |
| 8600000US33025 | 330255-DigitZCTA | 3303-DigitZCTA | 49991 | 33025 |
| 8600000US44119 | 441195-DigitZCTA | 4413-DigitZCTA | 49988 | 44119 |
| 8600000US34997 | 349975-DigitZCTA | 3493-DigitZCTA | 49982 | 34997 |
| 8600000US70665 | 706655-DigitZCTA | 7063-DigitZCTA | 49981 | 70665 |
| 8600000US28625 | 286255-DigitZCTA | 2863-DigitZCTA | 49981 | 28625 |
| 8600000US76134 | 761345-DigitZCTA | 7613-DigitZCTA | 49979 | 76134 |
| 8600000US44618 | 446185-DigitZCTA | 4463-DigitZCTA | 49978 | 44618 |
| 8600000US65714 | 657145-DigitZCTA | 6573-DigitZCTA | 49978 | 65714 |
| 8600000US77338 | 773385-DigitZCTA | 7733-DigitZCTA | 49976 | 77338 |
| 8600000US14622 | 146225-DigitZCTA | 1463-DigitZCTA | 49972 | 14622 |
| 8600000US84339 | 843395-DigitZCTA | 8433-DigitZCTA | 49972 | 84339 |
| 8600000US85020 | 850205-DigitZCTA | 8503-DigitZCTA | 49967 | 85020 |
| 8600000US64061 | 640615-DigitZCTA | 6403-DigitZCTA | 49964 | 64061 |
| 8600000US97361 | 973615-DigitZCTA | 9733-DigitZCTA | 49961 | 97361 |
| 8600000US30008 | 300085-DigitZCTA | 3003-DigitZCTA | 49960 | 30008 |
| 8600000US48634 | 486345-DigitZCTA | 4863-DigitZCTA | 49958 | 48634 |
| 8600000US47923 | 479235-DigitZCTA | 4793-DigitZCTA | 49946 | 47923 |
| 8600000US46958 | 469585-DigitZCTA | 4693-DigitZCTA | 49946 | 46958 |
+----------------+------------------+----------------+--------+-------+
Returned 20 row(s) in 1.08s
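One caveat before copying this query: income is declared as STRING, so BETWEEN '1000' AND '5000' is a lexicographic string comparison. That is exactly why values like 49995 show up in the result: as a string, '49995' sorts between '1000' and '5000'. For a genuinely numeric filter, cast the column:
SELECT *
FROM ZIPCODE_HBASE
WHERE CAST(income AS INT) BETWEEN 1000 AND 5000
ORDER BY CAST(income AS INT) DESC
LIMIT 20;
Depending on the Impala version, the cast may also prevent the predicate from being pushed down to HBase as a filter, so expect a fuller scan.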
The important point is not the exact numbers but the pattern: once you have the Hive ↔ HBase mapping in place, Impala can run SQL over your HBase data as if it were a regular table, while still reading from HBase at query time.
Where this pattern still makes sense (and where it doesn’t)
This Impala–Hive–HBase setup can still be useful when:
- You maintain a legacy CDH cluster with HBase-backed tables that must stay online.
- You need to troubleshoot or gradually migrate old Impala-on-HBase workloads.
- You want a reference example for how HBaseStorageHandler was wired.
For new projects, you will usually:
- Land raw data in HDFS or object storage,
- Store it as Parquet or in an open table format like Iceberg (a minimal CTAS sketch follows this list),
- Query it from engines such as Impala, Trino, Spark or Flink SQL,
- Keep HBase (or other key-value stores) only for operational access paths, not as the main analytics store.
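A common first step in that migration is a one-off CTAS from the HBase-backed table into Parquet, after which analytics move to the new table. A minimal Impala sketch, with a hypothetical target table name:
CREATE TABLE zipcode_parquet
STORED AS PARQUET
AS SELECT key, zip, desc1, desc2, CAST(income AS INT) AS income
FROM ZIPCODE_HBASE;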
If you are in the process of modernizing such a stack, treat this article as a historical snapshot and migration reference, not as a recommended target architecture.
If platform instability, unclear ownership, or architecture drift are slowing your teams down,
review my Services
or book a 30-minute call.