Error: Container is running beyond Memory Limits

I have a machine with 8 GB of RAM and an octa-core processor. On Hadoop v1, I set up 7 mappers and 7 reducers, each with a dedicated 1 GB of RAM, and all of them ran fine. But now, when I run the same application on YARN, I get a container error.

The following is the configuration I used in yarn-site.xml:

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>

It gave me an error:

Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.

I then tried to set the memory limit in mapred-site.xml:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>

But I still get an error:

Container [pid=26783,containerID=container_1389136889967_0009_01_000002] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.2 GB of 8.4 GB virtual memory used. Killing container.

I am confused about the memory consumption of my mappers. Is the container being given an input split that is too large? Is there a way to make sure containers are never handed splits larger than they can handle?

asked Jun 20, 2019 in Big Data Hadoop by nitinrawat895

1 answer to this question.

I had a similar problem while working with Hive on EMR, and none of the existing solutions worked for me.

None of the MapReduce memory settings helped, and I did not want to set yarn.nodemanager.vmem-check-enabled to false.
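
For reference, that commonly suggested workaround disables YARN's virtual-memory check in yarn-site.xml. A minimal sketch, in case you do want to go that route:

  <property>
    <!-- Stops the NodeManager from killing containers that exceed the
         virtual memory ratio; this masks the problem rather than
         fixing it, so use with care. -->
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>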

All I did was set this Tez property (my Hive queries ran on the Tez engine, which is why the MapReduce settings had no effect):

  tez.am.resource.memory.mb

For example:

  hive -hiveconf tez.am.resource.memory.mb=4096

Another setting to consider tweaking is yarn.app.mapreduce.am.resource.mb, which controls how much memory the MapReduce ApplicationMaster container gets.
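
If the ApplicationMaster container is the one being killed, you can raise this in mapred-site.xml. A minimal sketch, with 4096 as a purely illustrative value:

  <property>
    <!-- Memory, in MB, for the MapReduce ApplicationMaster container. -->
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>4096</value>
  </property>
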
answered Jun 20, 2019 by ravikiran
