Sqoop export command failing

0 votes

I don't know what's wrong, but I am not able to export data using Sqoop. I am running the following command:

sqoop export --connect jdbc:mysql://localhost/retail_db \
  --username root \
  --password cloudera \
  --table testing \
  --export-dir /user/cloudera/orders/part-m-00000

And I am getting the following error:

Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.

Please set $ACCUMULO_HOME to the root of your Accumulo installation.

19/01/07 07:23:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0

19/01/07 07:23:24 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.

19/01/07 07:23:24 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.

19/01/07 07:23:24 INFO tool.CodeGenTool: Beginning code generation

19/01/07 07:23:25 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `testing` AS t LIMIT 1

19/01/07 07:23:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `testing` AS t LIMIT 1

19/01/07 07:23:26 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce

Note: /tmp/sqoop-cloudera/compile/497aa1d2f7cf3d901ffd86677c274556/testing.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

19/01/07 07:23:30 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/497aa1d2f7cf3d901ffd86677c274556/testing.jar

19/01/07 07:23:30 INFO mapreduce.ExportJobBase: Beginning export of testing

19/01/07 07:23:30 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address

19/01/07 07:23:31 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar

19/01/07 07:23:33 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

19/01/07 07:23:33 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative

19/01/07 07:23:33 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps

19/01/07 07:23:33 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

19/01/07 07:23:37 INFO mapreduce.JobSubmitter: number of splits:4

19/01/07 07:23:37 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative

19/01/07 07:23:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541664532120_0106

19/01/07 07:23:38 INFO impl.YarnClientImpl: Submitted application application_1541664532120_0106

19/01/07 07:23:38 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1541664532120_0106/

19/01/07 07:23:38 INFO mapreduce.Job: Running job: job_1541664532120_0106

19/01/07 07:23:50 INFO mapreduce.Job: Job job_1541664532120_0106 running in uber mode : false

19/01/07 07:23:50 INFO mapreduce.Job:  map 0% reduce 0%

19/01/07 07:24:19 INFO mapreduce.Job:  map 100% reduce 0%

19/01/07 07:24:19 INFO mapreduce.Job: Job job_1541664532120_0106 failed with state FAILED due to: Task failed task_1541664532120_0106_m_000000

Job failed as tasks failed. failedMaps:1 failedReduces:0


19/01/07 07:24:19 INFO mapreduce.Job: Counters: 12

        Job Counters

               Failed map tasks=1

               Killed map tasks=3

               Launched map tasks=4

               Data-local map tasks=4

               Total time spent by all maps in occupied slots (ms)=101251

               Total time spent by all reduces in occupied slots (ms)=0

               Total time spent by all map tasks (ms)=101251

               Total vcore-milliseconds taken by all map tasks=101251

               Total megabyte-milliseconds taken by all map tasks=103681024

        Map-Reduce Framework

               CPU time spent (ms)=0

               Physical memory (bytes) snapshot=0

               Virtual memory (bytes) snapshot=0

19/01/07 07:24:19 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead

19/01/07 07:24:19 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 46.2381 seconds (0 bytes/sec)

19/01/07 07:24:19 INFO mapreduce.ExportJobBase: Exported 0 records.

19/01/07 07:24:19 ERROR tool.ExportTool: Error during export:

Export job failed!

        at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:439)

        at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)

        at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)

        at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)

        at org.apache.sqoop.Sqoop.run(Sqoop.java:147)

        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)

        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)

        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)

        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)

        at org.apache.sqoop.Sqoop.main(Sqoop.java:252)

Jan 7 in Big Data Hadoop by slayer

1 answer to this question.

0 votes

First, create a table with the same structure as the flat file; a Sqoop export typically fails like this when the target table's columns, types, or field delimiter don't match the data in the export directory. For example, I have created a table “db12” in the database named “mysql” with a structure matching the file's columns.
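
For the asker's case, the export directory holds the retail_db orders file, so the target table needs matching columns. A minimal sketch of creating such a table, assuming the file has the standard retail_db orders layout (order_id, order_date, order_customer_id, order_status); the column names and types here are assumptions, so verify them against your actual data first:

# Hypothetical schema based on the standard retail_db "orders" dataset;
# check your own file's columns before running this.
mysql -u root -p retail_db -e "
CREATE TABLE testing (
  order_id          INT,
  order_date        VARCHAR(30),
  order_customer_id INT,
  order_status      VARCHAR(30)
);"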

Now look at the file in the HDFS directory, since I created the above table based on this file, that is, according to the columns present in it.
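
To confirm what the file actually contains (column order and field delimiter), you can peek at its first few records; this uses the export path from the question:

# Show the first few records to check the column order and delimiter
hadoop fs -cat /user/cloudera/orders/part-m-00000 | head -5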

Now run the Sqoop export command below to load the flat file data into the table:

sqoop export --connect jdbc:mysql://localhost:3306/mysql --table db12 --username root -P --export-dir /myfolder/dboutput1/part-m-00000 -m 1
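
If the export still fails, note that the driver output pasted in the question does not contain the root cause; it only reports that a map task failed. The failed task's own log usually shows the real exception (for example, a number-format or column-count mismatch while parsing a line). One way to pull it, using the application id from the output above and assuming YARN log aggregation is enabled:

# Fetch the aggregated logs of the failed export job
# (application id taken from the question's output)
yarn logs -applicationId application_1541664532120_0106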

answered Jan 7 by Omkar

You've told me how to fix it. Can you also say why this happened?

