When you run benchmarks, clean up old data or just want to understand how much space each Hive table consumes, it is useful to see HDFS locations and sizes side by side. Instead of clicking through UIs, you can ask Hive for every table location and then call hdfs dfs -du -h on each path.
The Hive + HDFS one-liner
The following bash one-liner queries Hive for table locations, extracts the HDFS paths and then prints a human-readable size for each table directory:
for file in $(hive -S -e "SHOW TABLE EXTENDED LIKE '\*'" \
    | grep "location:" \
    | awk 'BEGIN { FS=":" } { printf("hdfs:%s:%s\n",$3,$4) }'); do
  hdfs dfs -du -h "$file"
done
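Note that for file in $(...) relies on shell word splitting, so it assumes no table location contains spaces (true for typical warehouse paths). As a sketch, a slightly more defensive variant of the same pipeline feeds a while read loop instead:

hive -S -e "SHOW TABLE EXTENDED LIKE '\*'" \
    | grep "location:" \
    | awk 'BEGIN { FS=":" } { printf("hdfs:%s:%s\n",$3,$4) }' \
    | while read -r file; do
        # read -r takes each line whole, so paths with spaces survive
        hdfs dfs -du -h "$file"
      done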
Typical output looks like this (shortened):
Time taken: 2.494 seconds
12.6m hdfs://hadoop1:8020/hive/tpcds/customer/customer.dat
5.2m hdfs://hadoop1:8020/hive/tpcds/customer_address/customer_address.dat
76.9m hdfs://hadoop1:8020/hive/tpcds/customer_demographics/customer_demographics.dat
9.8m hdfs://hadoop1:8020/hive/tpcds/date_dim/date_dim.dat
...
3.1m hdfs://hadoop1:8020/user/alexander/transactions/part-m-00003
1.9m hdfs://hadoop1:8020/user/hive/warehouse/zipcode_incomes_plain/DEC_00_SF3_P077_with_ann_noheader.csv
What the command does
- hive -S -e "SHOW TABLE EXTENDED LIKE '\*'" asks Hive for metadata of all tables in the current database. The output contains lines like location:hdfs://hadoop1:8020/....
- grep "location:" keeps only those lines.
- awk 'BEGIN { FS=":" } { printf("hdfs:%s:%s\n",$3,$4) }' rebuilds a clean HDFS URL from the colon-separated parts (see the worked split after this list).
- The for loop iterates over each location and calls hdfs dfs -du -h to print the size in a human-readable format.
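To make the awk step concrete, here is the same split applied to one sample location line, using the host and a path from the output above:

echo "location:hdfs://hadoop1:8020/hive/tpcds/customer" \
    | awk 'BEGIN { FS=":" } { printf("hdfs:%s:%s\n",$3,$4) }'
# With FS=":" the fields are $1=location, $2=hdfs, $3=//hadoop1, $4=8020/hive/tpcds/customer,
# so the printf reassembles: hdfs://hadoop1:8020/hive/tpcds/customer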
Adapting it for Beeline and specific databases
On newer clusters you might prefer Beeline and HiveServer2. The pattern stays the same; only the Hive call changes. For example:
for file in $(beeline -u "jdbc:hive2://hs2-host:10000/default" \
    --silent=true --outputformat=tsv2 \
    -e "USE tpcds; SHOW TABLE EXTENDED LIKE '\*'" \
    | grep "location:" \
    | awk 'BEGIN { FS=":" } { printf("hdfs:%s:%s\n",$3,$4) }'); do
  hdfs dfs -du -h "$file"
done
Key tweaks:
- Add USE your_db; before SHOW TABLE EXTENDED if you only want table sizes for a single database (for example, tpcds); Hive's IN clause is an alternative, shown after this list.
- Use --silent=true so Beeline outputs only query results, not banners.
- Use --outputformat=tsv2 (or another border-free format) so grep and awk receive plain location: lines instead of Beeline's default ASCII table.
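If you prefer not to prepend a USE statement, Hive also lets you name the database inline with SHOW TABLE EXTENDED IN db_name LIKE '...'. A minimal sketch against the same HiveServer2 host as above:

beeline -u "jdbc:hive2://hs2-host:10000/default" --silent=true --outputformat=tsv2 \
    -e "SHOW TABLE EXTENDED IN tpcds LIKE '\*'" \
    | grep "location:"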
Limitations and caveats
- This inspects the table location directory, not logical row counts or column sizes.
- Partitioned tables may have many subdirectories; hdfs dfs -du prints one line per file or partition directory, each with its recursive size. Add -s if you want a single total per table (see the variant after this list).
- If you rely heavily on external tables, keep in mind that sizes may include shared locations used by multiple tables.
- On large warehouses, running du for every table will generate some load on the NameNode and DataNodes; use it with care during peak hours.
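If you want a single number per table rather than a per-file breakdown, the same loop works with the summary flag of hdfs dfs -du:

for file in $(hive -S -e "SHOW TABLE EXTENDED LIKE '\*'" \
    | grep "location:" \
    | awk 'BEGIN { FS=":" } { printf("hdfs:%s:%s\n",$3,$4) }'); do
  # -s collapses the listing to one line: the recursive total of the table directory
  hdfs dfs -du -s -h "$file"
done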
When to still use this approach
Even with modern observability tooling, table statistics and data catalogs, a small shell snippet like this remains useful for quick sanity checks, cluster cleanups, migration planning or just understanding where your HDFS space went. It is simple, transparent and works anywhere you have the Hive CLI or Beeline plus HDFS access.
If you need help with distributed systems, backend engineering, or data platforms, check my Services.