

Showing posts from October, 2011

Syncing hdfs-clusters

It is usually a good idea to test new code on a reference cluster with a nearly live dataset. To sync files from one cluster to another, use Hadoop's built-in tool distcp [1]. With a small script I "rebase" a development cluster with the logfiles we collected over the past day:

COPYDATE=`date -d '-1 Day' +"%Y-%m-%d"`
DELDATE=`date -d '-3 Day' +"%Y-%m-%d"`
SNAMENODE=namenode1
TNAMENODE=namenode2
# don't name this variable PATH -- that would shadow the shell's search path
HDFSPATH="/user/flume/logs"
LOG="/var/log/jobs/sync.log"

# redirect all output into the logfile
exec >> $LOG 2>&1

echo -e "\n ------- sync $COPYDATE ------- \n"
/usr/bin/hadoop distcp -i -m 100 hdfs://$SNAMENODE:9000$HDFSPATH/$COPYDATE hdfs://$TNAMENODE:9000$HDFSPATH/$COPYDATE/
sleep 60

echo -e "\n ------- delete $DELDATE ------- \n"
/usr/bin/hadoop dfs -rmr $HDFSPATH/$DELDATE
/usr/bin/hadoop dfs -rmr $HDFSPATH/_distcp_logs*
sleep 60

/usr/bin/hadoop dfs -chmod -R 777 $HDFSPATH/

The script copies the logfiles from the past day and the giv
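The retention-window arithmetic above is worth a sanity check before the delete step ever runs. A minimal sketch, assuming GNU date and the 1-day/3-day window from the script; the guard is an addition, not part of the original script:

```shell
#!/bin/sh
# Compute the sync and delete dates the same way the script does
# (GNU date syntax; dates in %Y-%m-%d sort lexicographically).
COPYDATE=$(date -d '-1 day' +%Y-%m-%d)
DELDATE=$(date -d '-3 day' +%Y-%m-%d)

# Sanity check: the delete date must lie strictly before the copy date,
# otherwise we would wipe the directory we have just copied.
if [ "$DELDATE" \< "$COPYDATE" ]; then
    echo "sync $COPYDATE, delete $DELDATE"
else
    echo "refusing to run: bad retention window" >&2
    exit 1
fi
```

Because the dates are zero-padded ISO format, plain string comparison is enough; no extra date parsing is needed.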

Secure your hadoop cluster, Part II

To get really safe you need a bit more time, coffee, and aspirin. You will get headaches, for sure. First the good news: Hadoop and its ecosystem run out of the box on a system with SELinux enabled in targeted mode. You have to reckon with a performance loss of 5 - 10%. To enable SELinux on a box use setenforce 1; to check the system use sestatus:

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 21
Policy from config file:        targeted

Fine, that's all. Now we enable SELinux at boot time:

# cat /etc/selinux/config
SELINUX=enforcing
SELINUXTYPE=targeted
SETLOCALDEFS=0

If you use fuse-hdfs, check [1] for a valid rule. The best way to get a system running is always to use SELINUXTYPE=targeted. But in some environments it is necessary to protect the systems much more (healthcare, banking, military, etc.); here we us
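If a daemon does get blocked after switching to enforcing mode, the usual workflow is to turn the logged AVC denials into a small local policy module. A sketch, assuming the audit daemon is running and audit2allow is installed (on RHEL 5 it ships with the policycoreutils packages); the module name is an arbitrary placeholder:

```shell
# Confirm the mode and look for recent AVC denials.
getenforce
grep AVC /var/log/audit/audit.log | tail

# Build a local policy module from the denials and load it.
# "hadooplocal" is just an example module name.
grep AVC /var/log/audit/audit.log | audit2allow -M hadooplocal
semodule -i hadooplocal.pp
```

Review the generated .te file before loading it; blindly allowing everything audit2allow suggests can defeat the point of running enforcing mode.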

Sqoop and Microsoft SQL Server

From Microsoft's TechNet: With the SQL Server-Hadoop Connector [1], you can import data from:
- Tables in SQL Server to delimited text files on HDFS
- Tables in SQL Server to SequenceFiles on HDFS
- Tables in SQL Server to tables in Hive*
- Queries executed in SQL Server to delimited text files on HDFS
- Queries executed in SQL Server to SequenceFiles on HDFS
- Queries executed in SQL Server to tables in Hive*

With the SQL Server-Hadoop Connector, you can export data from:
- Delimited text files on HDFS to SQL Server
- SequenceFiles on HDFS to SQL Server
- Hive tables* to tables in SQL Server

But before it works you have to set up the connector. First get the MS JDBC driver [2]: just download the driver, unpack it, and copy the driver file (sqljdbc4.jar) into the $SQOOP_HOME/lib/ directory. Now download the connector (.tar.gz) from [1], unpack it, and point MSSQL_CONNECTOR_HOME at that directory. Let's assume you unpack into /usr/sqoop/connector/mssql; do:
# ex
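Once the driver and connector are in place, an import is a plain sqoop invocation over the SQL Server JDBC URL. A hedged sketch; the host, port, database, user, and table names below are placeholders, not values from the post:

```shell
# Hypothetical import of one SQL Server table into delimited text on HDFS.
# sqlserver1, shop, sqoop_user and Orders are example names only.
sqoop import \
  --connect 'jdbc:sqlserver://sqlserver1:1433;databaseName=shop' \
  --username sqoop_user -P \
  --table Orders \
  --target-dir /user/sqoop/orders \
  -m 4
```

-P prompts for the password instead of putting it on the command line, and -m controls how many parallel map tasks split the table.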

Centralized logfile management across networks with flume

Facebook's Scribe was the first available service for managing a huge amount of logfiles. We're not talking about 2 GB per day or so; I mean more than 1 TB per day. Compressed. Now there is a new Apache Incubator project: Flume [1]. It is a pretty nice piece of software, so I love it. It is reliable, fast, safe, and has no proprietary stack inside. And you can create really cool logging tasks. If you use Cloudera's Distribution you get Flume easily with a "yum install flume-master" on the master and "yum install flume-node" on a node. Check [2] for more info. Flume has a lot of sources to get logfiles:
- from a text file
- as a tail (one or more files)
- syslog UDP or TCP
- synthetic sources
Flume's design is built for a large logfile distribution process. Let's assume we have a 100-node webcluster and incoming traffic of around 3 GB/s. The farm produces 700 MB of raw weblogs per minute. Through processing with Flume we can compress the files, so
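The source/sink pairs above are wired together from the master. A hedged sketch in the Flume (OG) shell's spec language; the host names, port, and HDFS path are placeholders, and the exact source/sink names should be checked against your Flume version's docs:

```shell
# Hypothetical mapping: web01 tails its access log and ships events to a
# collector node, which writes bucketed files into HDFS. master1, web01,
# collector1 and the paths are example names only.
flume shell -c master1 -e "exec config web01 'tail(\"/var/log/httpd/access_log\")' 'agentSink(\"collector1\",35853)'"
flume shell -c master1 -e "exec config collector1 'collectorSource(35853)' 'collectorSink(\"hdfs://namenode1:9000/user/flume/logs/%Y-%m-%d/\",\"web\")'"
```

The date escapes in the collectorSink path give exactly the per-day directory layout that the distcp "rebase" script in the first post syncs between clusters.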

Secure your hadoop cluster, Part I

Use mapred with Active Directory (basic auth)

Most of the cases I managed in the past weeks concerned Hadoop security. That means firewalling, SELinux, authentication, and user management. It is usually a difficult process, depending on the company's security profile and processes. So I start with the most interesting part: authentication. And in all cases I worked on, the main authentication system was a Windows Active Directory forest (AD). Since Hadoop ships with several TaskController classes, we can use the LinuxTaskController. I use a RHEL 5 server, but this can be adapted to similar installations. To enable the UNIX services in Windows Server 2003 and later you have to extend the existing schema with the UNIX templates delivered by Microsoft. After that you have to install "Identity Management for UNIX", in 2008 located under Server Manager => Roles => AD DS => common tasks => Add Role => Select Role Services. Install the software, restart your server, and it shoul
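A quick way to see whether the AD side is wired up is to check that AD accounts resolve as UNIX users on the Linux boxes (for example via nss_ldap), since the LinuxTaskController needs every job submitter to exist as a real local user on the tasktrackers. A sketch; "jdoe" is a hypothetical AD user, not from the post:

```shell
# Verify that the hypothetical AD user jdoe resolves through NSS with the
# UID/GID attributes set by Identity Management for UNIX.
getent passwd jdoe
id jdoe
```

If getent returns nothing, fix the NSS/LDAP configuration before touching the Hadoop side; nothing in mapred will work for users the OS itself cannot resolve.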