Job failed as tasks failed failedMaps


Hello All,

I am new to Hadoop and need help with an issue I am facing. There was a requirement to add new columns to a table; after making the changes, when I try to test them, the M/R job retries 2-3 times and then fails, so the Sqoop export fails as well. The logs show no specific exception. I am sharing the log details below; if anyone has an idea, please let me know. Thanks in advance.

19/07/30 04:06:18 INFO tool.CodeGenTool: Beginning code generation

19/07/30 04:06:18 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [staging].[SR] AS t WHERE 1=0

19/07/30 04:06:19 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/mapr/hadoop/hadoop-2.7.0

Note: /tmp/sqoop-/SR.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

19/07/30 04:06:23 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-/SR.jar

19/07/30 04:06:28 INFO mapreduce.ExportJobBase: Beginning export of SR

19/07/30 04:06:28 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar

19/07/30 04:06:28 INFO mapreduce.JobBase: Setting default value for hadoop.job.history.user.location=none

19/07/30 04:06:29 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

19/07/30 04:06:29 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative

19/07/30 04:06:29 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps

19/07/30 04:06:29 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to 

19/07/30 04:06:31 INFO input.FileInputFormat: Total input paths to process : 1

19/07/30 04:06:31 INFO input.FileInputFormat: Total input paths to process : 1

19/07/30 04:06:31 INFO mapreduce.JobSubmitter: number of splits:1

19/07/30 04:06:31 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative

19/07/30 04:06:31 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

19/07/30 04:06:31 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps

19/07/30 04:06:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1563651888010_477141

19/07/30 04:06:32 INFO security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager

19/07/30 04:06:32 INFO impl.YarnClientImpl: Submitted application application_1563651888010_477141

19/07/30 04:06:32 INFO mapreduce.Job: The url to track the job: https://proxy/application_1563651888010_477141/

19/07/30 04:06:32 INFO mapreduce.Job: Running job: job_1563651888010_477141

19/07/30 04:06:49 INFO mapreduce.Job: Job job_1563651888010_477141 running in uber mode : false

19/07/30 04:06:49 INFO mapreduce.Job: map 0% reduce 0%

19/07/30 04:07:08 INFO mapreduce.Job: map 6% reduce 0%

19/07/30 04:17:18 INFO mapreduce.Job: Task Id : attempt_1563651888010_477141_m_000000_0, Status : FAILED

AttemptID:attempt_1563651888010_477141_m_000000_0 Timed out after 600 secs

19/07/30 04:17:19 INFO mapreduce.Job: map 0% reduce 0%

19/07/30 04:17:37 INFO mapreduce.Job: map 6% reduce 0%

19/07/30 04:27:48 INFO mapreduce.Job: Task Id : attempt_1563651888010_477141_m_000000_1, Status : FAILED

AttemptID:attempt_1563651888010_477141_m_000000_1 Timed out after 600 secs

19/07/30 04:27:49 INFO mapreduce.Job: map 0% reduce 0%

19/07/30 04:28:06 INFO mapreduce.Job: map 6% reduce 0%

19/07/30 04:38:18 INFO mapreduce.Job: Task Id : attempt_1563651888010_477141_m_000000_2, Status : FAILED

AttemptID:attempt_1563651888010_477141_m_000000_2 Timed out after 600 secs

Container killed by the ApplicationMaster.

Container killed on request. Exit code is 143

Container exited with a non-zero exit code 143


19/07/30 04:38:19 INFO mapreduce.Job: map 0% reduce 0%

19/07/30 04:38:44 INFO mapreduce.Job: map 6% reduce 0%

19/07/30 04:48:48 INFO mapreduce.Job: map 100% reduce 0%

19/07/30 04:48:48 INFO mapreduce.Job: Job job_1563651888010_477141 failed with state FAILED due to: Task failed task_1563651888010_477141_m_000000

Job failed as tasks failed. failedMaps:1 failedReduces:0


19/07/30 04:48:48 INFO mapreduce.Job: Counters: 10

Job Counters 

Failed map tasks=4

Launched map tasks=4

Other local map tasks=3

Rack-local map tasks=1

Total time spent by all maps in occupied slots (ms)=2513029

Total time spent by all reduces in occupied slots (ms)=0

Total time spent by all map tasks (ms)=2513029

Total vcore-seconds taken by all map tasks=2513029

Total megabyte-seconds taken by all map tasks=2573341696

DISK_MILLIS_MAPS=1256515

19/07/30 04:48:48 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead

19/07/30 04:48:48 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 2,539.3569 seconds (0 bytes/sec)

19/07/30 04:48:48 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead

19/07/30 04:48:48 INFO mapreduce.ExportJobBase: Exported 0 records.

19/07/30 04:48:48 ERROR tool.ExportTool: Error during export: Export job failed!
Aug 1, 2019 in Big Data Hadoop by Hemanth (edited Aug 1, 2019 by Omkar)

@Hemanth, the error says:

Zookeeper address not found from MapR Filesystem and is also not configured in Yarn configuration.

Configure ZooKeeper and set it in the YARN configuration. It should work.

@shruthi Thanks a lot, I will do as you suggested and will keep you posted.

@shruthi, hi, my yarn-site.xml looks like this. What else do I need to configure?

<configuration>
  <!-- Resource Manager MapR HA Configs -->
  <property>
    <name>yarn.resourcemanager.ha.custom-ha-enabled</name>
    <value>true</value>
    <description>MapR Zookeeper based RM Reconnect Enabled. If this is true, set the failover proxy to be the class MapRZKBasedRMFailoverProxyProvider</description>
  </property>
  <property>
    <name>yarn.client.failover-proxy-provider</name>
    <value>org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider</value>
    <description>Zookeeper based reconnect proxy provider. Should be set if and only if mapr-ha-enabled property is true.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
    <description>RM Recovery Enabled</description>
  </property>
  <property>
   <name>yarn.resourcemanager.ha.custom-ha-rmaddressfinder</name>
   <value>org.apache.hadoop.yarn.client.MapRZKBasedRMAddressFinder</value>
  </property>

  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value> </value>
  </property>

  <!-- :::CAUTION::: DO NOT EDIT ANYTHING ON OR ABOVE THIS LINE -->
</configuration>
Hi @shruthi, one point of confusion here: I checked both my failed and my successful jobs, looking at the YARN logs for each job ID. Both the successful and the failed jobs show the same message pasted above, "Zookeeper address not found from MapR Filesystem and is also not configured in Yarn configuration."

Since the successful jobs' application IDs give the same log, that means this is not the issue.

We have a daily job that runs successfully. A new requirement came in, so I made some changes and created a separate shell script for it. When I test that requirement I hit this issue, while the daily job keeps running and succeeding. So I suspect it is a code issue. If it is, how can I confirm that? Also, I would like to enable DEBUG logging; at present we get INFO only. If possible, can you let me know the location of the log4j.properties file?

TIA
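Regarding the DEBUG-logging part of the question, one quick way to get DEBUG output for a single run without locating log4j.properties first is to raise the logger via an environment variable. The stock Hadoop and Sqoop launcher scripts read HADOOP_ROOT_LOGGER; whether the MapR wrapper scripts honour it in the same way is an assumption worth verifying on your cluster:

```shell
# Raise the Hadoop root logger to DEBUG for this shell session only.
# The stock hadoop/sqoop launcher scripts read HADOOP_ROOT_LOGGER;
# distribution-specific wrappers may behave differently (assumption).
export HADOOP_ROOT_LOGGER=DEBUG,console
# Then re-run the export with Sqoop's own verbose flag, e.g.:
#   sqoop export --verbose --connect <jdbc-url> --table SR --export-dir <dir>
```

This only affects the client-side logs; per-task container logs are still governed by the log4j configuration shipped to the containers.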

Hi @Hemanth,

As your error says:

Zookeeper address not found from MapR Filesystem and is also not configured in Yarn configuration.

By default, the ResourceManager stores its state in MapR-FS. To configure the ResourceManager to use ZooKeeper instead:

  • Set yarn.resourcemanager.store.class to org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore in yarn-site.xml.
  • Set yarn.resourcemanager.zk-address to a comma-separated list of host:port pairs for each ZooKeeper server used by the ResourceManager, also in yarn-site.xml. These hosts are used by the ResourceManager to store state.
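As a minimal sketch, the two properties would look like this inside the <configuration> block of yarn-site.xml. The host names and port are placeholders; substitute your own ZooKeeper quorum (5181 is a common ZooKeeper port on MapR clusters, but verify yours):

```xml
<!-- Sketch only: zk1/zk2/zk3 and port 5181 are placeholders for your quorum. -->
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:5181,zk2:5181,zk3:5181</value>
</property>
```

The ResourceManager must be restarted after the change for the new state store to take effect.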
