This guide shows how to mount an HDFS filesystem using FUSE and then export part of it over NFS, so that remote systems can access HDFS like a local filesystem. The approach relies on classic Hadoop and Linux tools and includes notes on security, user mapping, and kernel limitations.

In some environments it can be useful to make an HDFS filesystem available across networks as an exported share. This walkthrough describes a working scenario on Linux and Hadoop, using tools typically included in older Hadoop distributions: hadoop-fuse-dfs and libhdfs mount HDFS locally, and that mount is then exported over NFS. Replace namenode.local and <PORT> with values appropriate for your cluster.

1. Install FUSE and libhdfs

```
yum install hadoop-0.20-fuse.x86_64 hadoop-0.20-libhdfs.x86_64
```

2. Create a mountpoint

```
mkdir /hdfs-mount
```

3. Test mounting HDFS via FUSE

```
hadoop-fuse-dfs dfs://namenode.local:<PORT> /hdfs-mount -d
```

If the mount succeeds...
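Once the FUSE mount is working, the NFS export the intro describes can be sketched roughly as below. This is an illustrative configuration fragment, not part of the original walkthrough: the client subnet 192.168.1.0/24 and the export options (read-only, fsid, sync) are assumptions to adapt to your network.

```shell
# Illustrative /etc/exports entry for the FUSE mount (assumed subnet
# and options; adjust for your environment):
#
#   /hdfs-mount  192.168.1.0/24(ro,fsid=1,sync,no_subtree_check)
#
# An explicit fsid= is needed because FUSE filesystems have no stable
# device number from which the kernel NFS server can derive one.
# After editing /etc/exports, re-export and verify:
exportfs -ra
exportfs -v
```

Exporting read-only is a sensible starting point here, since write semantics through the FUSE layer are limited anyway.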