
Sqoop and Microsoft SQL Server

From Microsoft's TechNet:
With the SQL Server-Hadoop Connector [1], you can import data from:
Tables in SQL Server to delimited text files on HDFS
Tables in SQL Server to SequenceFiles on HDFS
Tables in SQL Server to tables in Hive*
Queries executed in SQL Server to delimited text files on HDFS
Queries executed in SQL Server to SequenceFiles on HDFS
Queries executed in SQL Server to tables in Hive*
 
With the SQL Server-Hadoop Connector, you can export data from:
Delimited text files on HDFS to SQL Server
SequenceFiles on HDFS to SQL Server
Hive Tables* to tables in SQL Server
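
Once the connector is set up (see below), importing a table directly into Hive only adds the --hive-import flag to a regular import call. A sketch — the connection string, table, and Hive table name are placeholders:
# sqoop import --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --table <table> --hive-import --hive-table <hive_table> -m 1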
But before it works, you have to set up the connector. First get the Microsoft JDBC driver [2]:
Download the driver, unpack it, and copy the driver file (sqljdbc4.jar) into the $SQOOP_HOME/lib/ directory. Then download the connector (.tar.gz) from [1], unpack it, and point MSSQL_CONNECTOR_HOME at that directory. Assuming you unpacked it into /usr/sqoop/connector/mssql, do:
# export MSSQL_CONNECTOR_HOME=/usr/sqoop/connector/mssql 

Check that the variable is set:
# echo $MSSQL_CONNECTOR_HOME
/usr/sqoop/connector/mssql


Then run install.sh in the unpacked directory:
sh ./install.sh
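
To confirm the installation took effect, a quick check (this assumes the connector drops its jar into the Sqoop lib and registers itself under Sqoop's conf/managers.d directory, which may differ between connector versions):
# ls $SQOOP_HOME/lib/ | grep -i sqlserver
# cat $SQOOP_HOME/conf/managers.d/*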

Tip: create a profile.d file:
# cat /etc/profile.d/mssql.sh
export MSSQL_CONNECTOR_HOME=/usr/sqoop/connector/mssql
and chmod it to 755.
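For example:
# chmod 755 /etc/profile.d/mssql.sh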

An example:
Sqoop <=> MS SQL Server with Hadoop processing works well. For a larger PoC, set up an import that splits the data across 3 map tasks:
# sqoop import --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --table <table> --target-dir /path/to/hdfs/dir --split-by <KEY> -m 3

=> The import of 1.3 GB of data took around one minute. After processing, send the results back:
# sqoop export --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --table=<table> --direct --export-dir /path/from/hdfs/dir

You can use the same operations you already know from Oracle or MySQL Sqoop scripts.
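For instance, a free-form query import works the same way as with other databases. A sketch, with placeholder column names — the literal $CONDITIONS token is required so Sqoop can substitute per-mapper range predicates:
# sqoop import --connect 'jdbc:sqlserver://<IP>;username=dbuser;password=dbpasswd;database=<DB>' --query 'SELECT <col1>, <col2> FROM <table> WHERE $CONDITIONS' --split-by <KEY> --target-dir /path/to/hdfs/dir -m 3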

[1] http://www.microsoft.com/download/en/details.aspx?id=27584
[2] http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=21599

Comments

  1. Hi,
    I am trying to import from SQL Server into HDFS, but I am getting these errors:

    hadoop@ubuntu:~/sqoop-1.1.0/bin$ ./sqoop import --connect 'jdbc:sqlserver://192.168.230.1;username=xxx;password=xxxxx;database=HadoopTest' --table PersonInfo --target-dir /home/hadoop/hadoop-0.21.0/

    11/12/10 12:13:20 ERROR tool.BaseSqoopTool: Got error creating database manager: java.io.IOException: No manager for connect string: jdbc:sqlserver://192.168.230.1;username=xxx;password=xxxxx;database=HadoopTest
    at com.cloudera.sqoop.ConnFactory.getManager(ConnFactory.java:119)
    at com.cloudera.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:178)
    at com.cloudera.sqoop.tool.ImportTool.init(ImportTool.java:81)
    at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:411)
    at com.cloudera.sqoop.Sqoop.run(Sqoop.java:134)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
    at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:170)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:196)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:205)

    What is the problem I am missing?
    My Hadoop version : hadoop-0.21.0
    Sqoop version : sqoop-1.1.0

    Please suggest a solution.
    Thanks.

  2. Is the driver installed, and can Sqoop find it? Did install.sh run without errors?

  3. I followed all the steps but can't get Sqoop running. I am getting this error. Can you please tell me what is wrong?

    [hduser@master bin]$ ./sqoop-help
    Warning: $HADOOP_HOME is deprecated.

    Exception in thread "main" java.lang.NoClassDefFoundError: com/cloudera/sqoop/Sqoop
    Caused by: java.lang.ClassNotFoundException: com.cloudera.sqoop.Sqoop
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    Could not find the main class: com.cloudera.sqoop.Sqoop. Program will exit.

  4. Anonymous, 18 July 2012

    How did you install Sqoop? What does sqoop version show?

  5. Untarred Sqoop to /usr/local/sqoop,
    downloaded the sqoop-sqlserver connector, copied it to the connectors folder,
    and ran install.sh.
    Copied hadoop-core-1.0.3.jar into the Sqoop lib,
    copied sqoop-sqlserver-1.0.jar and mysql-connector-java-5.1.21-bin.jar into the Sqoop lib,
    and set the environment variables:

    MSSQL_CONNECTOR_HOME=/usr/local/sqoop/sqoop-sqlserver-1.0/
    HADOOP_HOME=/usr/local/hadoop
    SQOOP_CONF_DIR=/usr/local/sqoop/conf
    SQOOP_HOME=/usr/local/sqoop
    HBASE_HOME=/usr/local/hbase-0.92.1/
    HADOOP_CLASSPATH=:/usr/local/sqoop/sqoop-1.4.1-incubating.jar

  6. Laura,

    I am new to Hadoop and Sqoop.
    Can you tell me the steps to install Hadoop and Sqoop on my Ubuntu 12.04? I installed Hadoop 1.0.3 but am unable to install Sqoop.

  7. When I export to SQL Server, it causes the exception below:
    SQLServerException: Incorrect syntax near ','

    What's wrong?

  8. Anonymous, 25 July 2012

    @Andy: Follow the instructions here:
    http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

    After you've got it running, download the latest Sqoop release from sqoop.apache.org.

  9. How can we export to MSSQL using Sqoop with SELECT statements?

