Sqoop - moving data from DB2 to HDFS hive partition table

0 votes

I need inputs for an office project where I have to move data from DB2 to HDFS using Sqoop.

We have more than 1 billion rows in DB2 tables and plan to move them to HDFS and run analytics on them through a Hive table.
Records get inserted and updated daily in these DB2 tables, so I'm planning to move the data in two steps using Sqoop: first load the existing data, then set up a daily Sqoop import to bring over the previous day's data.
I want to store this data partitioned by date. Since retaining the data is the top priority, I'm thinking an external Hive table would be the right option. Please provide inputs on the queries below.

The DB2 table has a date column. When I move the data to the HDFS location, I want to store it in date partitions of the external table. It looks like there are limitations on how external Hive table partitions can be used.

If I create the external Hive table first, partitioned by date, like the example below:

create external table sample (ID string, name string) partitioned by (date string)
location '/sampledata/';

a) How do I sqoop import the existing data to HDFS into different date folders under /sampledata? I have 3 years of data in DB2.

b) How do I sqoop the incremental data daily into a new folder under /sampledata for that date?

c) Do you recommend going with an internal (managed) table instead?

I think we can achieve this using the command below, passing the currentdatefolder value as input:

sqoop import --connect "" --username abc --password abc --table source_table --target-dir /sampledata/currentdatefolder -m 1 --check-column modified_date --incremental lastmodified --last-value {last_import_date}

But after that, do we need to alter the table to add this new folder as a partition?

ALTER TABLE sample ADD PARTITION (date="currentdatefolder") LOCATION "/sampledata/currentdatefolder";
Aug 9, 2019 in Big Data Hadoop by Tina

1 answer to this question.

0 votes
You just need the DB2 JDBC driver, the connection URL and the database credentials. A plain import from DB2 into HDFS looks like this:

sqoop import --driver com.ibm.db2.jcc.DB2Driver --connect jdbc:db2://db2.my.com:50000/databaseName --username db_username --password db_password --table table_name --split-by tbl_primarykey --target-dir sqoopimports

For an incremental import that loads straight into a partitioned Hive table, you can follow a syntax such as:

sqoop import --connect jdbc:oracle:thin:@//jfadboc1.jfa.unibet.com:1521/xxx --username xxx --password xxx --query "SELECT TIME_KEY,PUNTER_KEY,PRODUCT_KEY,INDICATOR_KEY,INDICATOR_VALUE,INSERT_TIME FROM DW_FACT_PUNTER_TEST WHERE \$CONDITIONS AND (TIME_KEY >=20010101)" --split-by TIME_KEY --target-dir unica/data/FACT_PUNTER_IUD_UNICA_INCR --hive-import --hive-overwrite --hive-drop-import-delims --null-non-string '\\N' --null-string '\\N' --hive-table unica.DW_FACT_PUNTER_TEST_TEMP --hive-partition-key "TIME_YEAR"
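For the historical load in (a), if you want to keep the folder-per-date layout under /sampledata from the question rather than --hive-import, one option is to run one import per date with a free-form query and then register each folder as a partition. Below is a minimal sketch; the JDBC URL, credentials, the RECORD_DATE column name and the dates file are assumptions you would replace with your own:

#!/bin/bash
# Backfill sketch: one Sqoop import per day into /sampledata/<date>, then
# register that folder as a partition of the external table "sample".
# Assumes the DB2 table has a DATE column named RECORD_DATE (hypothetical name).
while read d; do                                  # dates_to_load.txt: one yyyy-MM-dd per line
  sqoop import \
    --driver com.ibm.db2.jcc.DB2Driver \
    --connect jdbc:db2://db2.my.com:50000/databaseName \
    --username db_username --password db_password \
    --query "SELECT ID, NAME FROM SOURCE_TABLE WHERE RECORD_DATE = '$d' AND \$CONDITIONS" \
    --target-dir /sampledata/$d \
    -m 1

  # Make the new folder visible to the external Hive table.
  hive -e "ALTER TABLE sample ADD IF NOT EXISTS PARTITION (date='$d') LOCATION '/sampledata/$d'"
done < dates_to_load.txt

So yes, with the external-table layout you do need the ALTER TABLE ... ADD PARTITION step (or an MSCK REPAIR TABLE) after each import so Hive sees the new folder. With 3 years of history you may prefer to loop by month rather than by day to keep the number of Sqoop jobs manageable, and note that newer Hive versions treat date as a reserved keyword, so the partition column may need backticks.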

Note that the --hive-import/--hive-partition-key approach shown earlier only supports a single, static partition column. To load more than one partition column, or to have the partition values taken from the data, you need to go through Sqoop's HCatalog integration instead of --hive-import.
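For reference, an HCatalog-based import could look roughly like the sketch below; the Hive database name (analytics), table name (sample_hcat) and partition column (load_date) are placeholders, and the partition value here is passed per run:

sqoop import \
  --driver com.ibm.db2.jcc.DB2Driver \
  --connect jdbc:db2://db2.my.com:50000/databaseName \
  --username db_username --password db_password \
  --table SOURCE_TABLE \
  --hcatalog-database analytics \
  --hcatalog-table sample_hcat \
  --hcatalog-partition-keys load_date \
  --hcatalog-partition-values 2019-08-08 \
  -m 1

Because HCatalog writes through the Hive metastore, the partition is registered for you, so no separate ALTER TABLE step is needed in this variant.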

You can then schedule the same Sqoop job in Oozie with a daily coordinator so it loads the previous day's data at the same time every day.
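If you wrap the incremental import in a Sqoop saved job, the Sqoop metastore also remembers the --last-value between runs, so the daily Oozie (or cron) action only has to execute the job. A sketch, with the connection details and column names as assumptions (in practice you would use --password-file rather than an inline password):

# Create the saved job once; Sqoop stores the incremental state in its metastore.
sqoop job --create daily_sampledata_import -- import \
  --driver com.ibm.db2.jcc.DB2Driver \
  --connect jdbc:db2://db2.my.com:50000/databaseName \
  --username db_username --password db_password \
  --table SOURCE_TABLE \
  --target-dir /sampledata/staging \
  --check-column MODIFIED_DATE \
  --incremental lastmodified \
  --merge-key ID \
  -m 1

# The daily Oozie/cron action then just runs:
sqoop job --exec daily_sampledata_import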


answered Aug 9, 2019 by Payal
