When developing new pipelines, it is often necessary to rebase a development or staging Hadoop environment with data from production. Historically this was done with simple DistCp scripts. Today, DistCp v2, YARN scheduling, and improved HDFS tooling allow for safer, more performant cluster synchronization while avoiding operational pitfalls.
Below is an updated version of a classic rebase workflow: copying the previous day's log data from a production cluster to a development cluster and applying retention by removing older datasets.
1. Variables and runtime setup
#!/usr/bin/env bash
COPYDATE=$(date -d '-1 day' +"%Y-%m-%d")
DELDATE=$(date -d '-3 day' +"%Y-%m-%d")
SRC_NN="hdfs://prod-nn-ha"
TGT_NN="hdfs://dev-nn-ha"
# Do not call this variable PATH: that would clobber the shell's executable search path
# and break the hadoop/hdfs commands below.
DATA_PATH="/user/flume/logs"
LOG="/var/log/jobs/distcp-sync.log"
# Send all script output to the log file
exec >> "$LOG" 2>&1
echo -e "\n------- sync $COPYDATE -------\n"
A modern best practice is to reference HA nameservices (e.g., hdfs://prod-nn-ha) rather than individual NameNode hostnames. The client then follows automatic failover, so a NameNode restart or failover does not break an in-flight DistCp job.
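A quick sanity check before launching the copy can confirm that the nameservice is defined on the client and that the source path resolves through it:
# Confirm the HA nameservice is present in the client configuration
hdfs getconf -confKey dfs.nameservices
# Confirm the source tree is reachable through the nameservice
hdfs dfs -ls hdfs://prod-nn-ha/user/flume/logs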
2. Modern DistCp execution
Legacy DistCp invocations typically relied on little more than -i (ignore failures) and -m <maps> (map count). DistCp v2 adds a number of useful controls:
- -update – copy only files that are missing on the target or differ in size/checksum
- -delete – remove files on the target that no longer exist on the source (only meaningful together with -update or -overwrite; sketched below)
- -bandwidth – throttle per-map bandwidth (in MB/s) to avoid saturating production
- -strategy dynamic – better load balancing across maps for large, uneven file trees
hadoop distcp \
-update \
-bandwidth 100 \
-strategy dynamic \
"${SRC_NN}${DATA_PATH}/${COPYDATE}" \
"${TGT_NN}${DATA_PATH}/${COPYDATE}"
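-delete is listed above but not used in the command. When a day's partition is re-copied after files were compacted or removed on the source, combining it with -update keeps the target directory an exact mirror of the source rather than a superset. A sketch:
hadoop distcp \
-update \
-delete \
"${SRC_NN}${DATA_PATH}/${COPYDATE}" \
"${TGT_NN}${DATA_PATH}/${COPYDATE}"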
Adjust -bandwidth based on your production cluster's capacity. In busy environments, using the YARN queue configuration to limit DistCp resource usage is strongly recommended.
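DistCp runs as an ordinary MapReduce job, so it can be steered into a specific queue with standard job properties. A minimal sketch, assuming a low-impact queue named utility exists in your scheduler configuration:
hadoop distcp \
-Dmapreduce.job.queuename=utility \
-update \
-bandwidth 50 \
-m 20 \
"${SRC_NN}${DATA_PATH}/${COPYDATE}" \
"${TGT_NN}${DATA_PATH}/${COPYDATE}"
Here -m caps the number of concurrent copy tasks, which together with -bandwidth bounds the load placed on the production NameNode and DataNodes.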
3. Retention: remove datasets older than 3 days
In the original workflow, logs older than 3 days were deleted. Modern HDFS commands replace deprecated flags:
echo -e "\n------- delete $DELDATE -------\n"
hdfs dfs -rm -r -skipTrash ${PATH}/${DELDATE}
hdfs dfs -rm -r -skipTrash ${PATH}/_distcp_logs*
If you run DistCp frequently, consider storing DistCp logs elsewhere or cleaning them through automated retention to avoid clutter.
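Alternatively, DistCp's -log option can point its per-file copy logs at a dedicated directory in the first place; the /var/distcp-logs path below is only an illustration:
hadoop distcp \
-update \
-log "${TGT_NN}/var/distcp-logs/${COPYDATE}" \
"${SRC_NN}${DATA_PATH}/${COPYDATE}" \
"${TGT_NN}${DATA_PATH}/${COPYDATE}"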
4. Permissions handling (modernized)
The legacy script used chmod -R 777 to accommodate missing users in the dev cluster. This is unsafe and not recommended.
Modern alternatives:
- Create the correct user or group in the target cluster (flume in this case)
- Use setfacl to grant dev teams access without breaking security (see the sketch below)
- Set directory ownership appropriately:
hdfs dfs -chown -R flume:hadoop "${TGT_NN}${DATA_PATH}"
This keeps both clusters consistent and avoids privilege escalation.
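As an example, HDFS ACLs can give a development group read-only access to the copied logs without resorting to 777. This assumes ACLs are enabled on the dev cluster (dfs.namenode.acls.enabled) and uses a made-up group name, devteam:
# Read/execute on everything already copied
hdfs dfs -setfacl -R -m group:devteam:r-x "${TGT_NN}${DATA_PATH}"
# Default ACL so newly copied date directories inherit the same access
hdfs dfs -setfacl -R -m default:group:devteam:r-x "${TGT_NN}${DATA_PATH}"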
5. Scheduling & execution time
The original workflow ran daily via cron at 02:00 PM and processed ~1 TB in about an hour. Today's clusters often run DistCp using YARN queues and resource limits:
- Assign DistCp to a low-impact queue (e.g., utility or offpeak)
- Throttle bandwidth using the -bandwidth flag
- Use snapshots to create consistent sources for large directories
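The cron side itself has not changed much; a daily entry for the sync script (the script path below is hypothetical) still looks like this:
# Run the daily rebase at 02:00 PM, as in the original workflow
0 14 * * * /opt/jobs/distcp-sync.sh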
Example snapshot-based DistCp (production-safe):
# One-time prerequisite on the production cluster:
#   hdfs dfsadmin -allowSnapshot /user/flume/logs
hdfs dfs -createSnapshot "${SRC_NN}${DATA_PATH}" "snapshot_${COPYDATE}"
hadoop distcp \
-update \
"${SRC_NN}${DATA_PATH}/.snapshot/snapshot_${COPYDATE}" \
"${TGT_NN}${DATA_PATH}/${COPYDATE}"
Snapshots ensure dataset consistency even while ingestion continues.
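Blocks referenced by a snapshot are not reclaimed until the snapshot is deleted, so remove it once the copy has been verified:
hdfs dfs -deleteSnapshot "${SRC_NN}${DATA_PATH}" "snapshot_${COPYDATE}"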
6. Hybrid cloud & object storage notes
In modern environments, clusters often sync to or from S3, GCS, ADLS, or MinIO. DistCp supports these destinations directly through s3a://, gs://, and abfs:// URIs, provided the corresponding connectors and credentials are configured on the cluster.
For very large directory trees, S3DistCp (AWS EMR) or DistCp on Kubernetes with scalable containers may be used.
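As a sketch, assuming the s3a connector and credentials are already in place (the bucket name is made up), the same daily copy can target object storage directly:
hadoop distcp \
-update \
"${SRC_NN}${DATA_PATH}/${COPYDATE}" \
"s3a://example-log-archive/logs/${COPYDATE}"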
Conclusion
DistCp remains the canonical tool for moving large datasets between Hadoop clusters. By combining modern DistCp features, HA name services, safe retention policies, snapshots, strict permissions, and YARN queue isolation, you can maintain a reliable daily rebase workflow that does not interfere with production operations.
Reference
Official DistCp documentation: https://hadoop.apache.org/docs/stable/hadoop-distcp/DistCp.html