NFS-exported HDFS (CDH3)


For some use cases it can be a good idea to make an HDFS filesystem available across the network as an exported share. Here I describe a working setup with Linux and Hadoop, using only tools both ship out of the box.
I used FUSE and libhdfs to mount an HDFS filesystem. Change namenode.local and <PORT> to fit your environment. First, install the packages:

 yum install hadoop-0.20-fuse.x86_64 hadoop-0.20-libhdfs.x86_64

Create a mountpoint:
 mkdir /hdfs-mount

Mount your HDFS (testing):
 hadoop-fuse-dfs dfs://namenode.local:<PORT> /hdfs-mount -d

You should see output like this:
 INFO fuse_options.c:162 Adding FUSE arg /hdfs-mount
 INFO fuse_options.c:110 Ignoring option -d
 unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
 INIT: 7.10
 INFO fuse_init.c:101 Mounting namenode.local:<PORT>
 INIT: 7.8
 unique: 1, error: 0 (Success), outsize: 40

Hit Ctrl-C after you see "Success".
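Before stopping it, you can run a quick sanity check from a second shell; and if the mountpoint should still be busy after Ctrl-C, unmount it manually (both commands are standard, ls simply lists the HDFS root you mounted):

 ls -l /hdfs-mount
 fusermount -u /hdfs-mount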

Make the mount available at boot time:
 echo "hadoop-fuse-dfs#dfs://namenode.local:<PORT> /hdfs-mount fuse usetrash,rw 0 0" >> /etc/fstab

#> mount -a
#> mount
 sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
 fuse on /hdfs-mount type fuse (rw,nosuid,nodev,allow_other,default_permissions)

To tune the memory for each JVM process, take a look at /etc/default/hadoop-0.20-fuse and adjust the settings there.
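For illustration, a minimal sketch of such a setting; the variable name LIBHDFS_OPTS is what later CDH fuse packages read, so treat it as an assumption and check which variables your file actually defines:

 # /etc/default/hadoop-0.20-fuse
 # heap size for the libhdfs JVM embedded in fuse-dfs (value is an example)
 export LIBHDFS_OPTS="-Xmx128m"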

Export via NFS (insecure):
First we have to decide which user to use; I suggest the user hdfs. Check its IDs with "id hdfs":
 uid=104(hdfs) gid=105(hdfs) groups=105(hdfs),104(hadoop) context=root:staff_r:staff_t:SystemLow-SystemHigh

Create an exports-file:
 cat /etc/exports
 /hdfs-mount/user    (fsid=111,rw,wdelay,anonuid=104,anongid=105,sync,insecure,no_subtree_check,no_root_squash)

Explanation: read-write, fsid = an unused ID (see man 5 exports), write delay, anonymous requests mapped to the hdfs user and group, synchronous writes.

Exporting only the user directory from HDFS protects you from unwanted changes in system-relevant directories (mapred, for example).
Restart your NFS server (service nfs restart).
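Instead of a full restart, you can also just re-publish the export table and verify what is exported (standard nfs-utils commands):

 exportfs -ra
 showmount -e localhost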

Now you can use your HDFS as a "local" filesystem, which makes some tasks easier. Note that accessing users are mapped to the local user, so using root is a bad idea.
Mount the exported NFS share on your machine and simply create or copy your job definitions or files, as sketched below.
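A minimal client-side sketch; nfs-server.local and the copied file name are placeholders for your environment:

 mkdir -p /mnt/hdfs
 mount -t nfs nfs-server.local:/hdfs-mount/user /mnt/hdfs
 # copy a job definition as if the cluster were a local disk
 cp jobdef.xml /mnt/hdfs/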

PS: this works only with kernel 2.6.27 and upwards.


  1. What kind of throughput do you observe with this setup?

    Does rsync work?

    Does NFS reorder writes?

  2. Oops, I only saw your post now, sorry Ted. Of course I agree with you, performance is a different story. It's only a PoC and my private playground.

    I tested with scp:
    scp /tmp/10GB hdfs@dn-node:/123/hdfs/user/
    10GB 100% 10GB 45.7MB/s 03:44

    - alex

  3. I have followed your steps and it works fine on CentOS (although there are still some issues). But when I export the mount point to Windows 7 via NFS, some problems come up. I can mount the NFS share on Win7, but I can't see anything in the directory. Can Win7 access HDFS via NFS? Or should I use Samba 3?

  4. Have you installed the Unix Support Tools? From my experience I would use Samba 3/4 for that, since we then also get xattr as well as Kerberos support.

    1. Thanks for the reply! Do you mean SFU or Cygwin or something else? I have no idea about the Unix Support Tools. Have you ever accessed HDFS via NFS on Windows?

