For various reasons it can be a good idea to make an HDFS filesystem available across the network as an exported share. Here I describe a working scenario with Linux and Hadoop, using only tools both have on board. I used FUSE and libhdfs to mount an HDFS filesystem. Change namenode.local and <PORT> to fit your environment.

Install:

```
yum install hadoop-0.20-fuse.x86_64 hadoop-0.20-libhdfs.x86_64
```

Create a mountpoint:

```
mkdir /hdfs-mount
```

Mount your HDFS (in debug mode, for testing):

```
hadoop-fuse-dfs dfs://namenode.local:<PORT> /hdfs-mount -d
```

You should see output like this:

```
INFO fuse_options.c:162 Adding FUSE arg /hdfs-mount
INFO fuse_options.c:110 Ignoring option -d
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.10
flags=0x0000000b
max_readahead=0x00020000
INFO fuse_init.c:101 Mounting namenode.local:<PORT>
INIT: 7.8
flags=0x00000001
max_readahead=0x00020000
max_write=0x00020000
uniqu...
```
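Once the debug run looks good, the mount can be made persistent and shared like any local directory. A minimal sketch, not from the original setup: it assumes you want the FUSE mount restored at boot via /etc/fstab and exported over plain NFS to a hypothetical 192.168.0.0/24 client subnet.

```
# /etc/fstab -- remount HDFS at boot via the FUSE helper
# (allow_other lets non-root processes, e.g. the NFS server, access the mount)
hadoop-fuse-dfs#dfs://namenode.local:<PORT> /hdfs-mount fuse allow_other,usetrash,rw 2 0

# /etc/exports -- export the mountpoint to the local subnet
# (an explicit fsid= is needed because FUSE filesystems have no stable device number)
/hdfs-mount 192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)
```

After editing /etc/exports, `exportfs -ra` reloads the export table; a client on that subnet can then attach the share with an ordinary `mount -t nfs server:/hdfs-mount /mnt`.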