Error: Container is running beyond Memory Limits

I am using a machine with 8 GB of RAM and an octa-core processor, running Hadoop version 1. I have set up 7 mappers and 7 reducers, each with a dedicated 1 GB of RAM. All the mappers and reducers used to work fine, but when I tried to run the same application again, I encountered a container error.

The following is the configuration that I followed:

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>

It gave me an error:

Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
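(For context, the 2.1 GB virtual limit in this message is the 1 GB physical container size multiplied by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1. One way to loosen it, with an illustrative value, would be to raise the ratio in yarn-site.xml:)

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4.0</value>
  </property>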

I then tried to set the memory limit in mapred-site.xml:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>

But I am still getting an error:

Container [pid=26783,containerID=container_1389136889967_0009_01_000002] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.2 GB of 8.4 GB virtual memory used. Killing container.

I am confused about the memory consumption of my mappers. Is the container being killed because of excessive splits? Is there a way to make sure that containers do not receive more splits than they can handle?

Jun 20 in Big Data Hadoop by nitinrawat895

1 answer to this question.

I had a similar problem while working with Hive on EMR, and none of the existing solutions worked for me.

None of the MapReduce memory configurations had any effect, and I did not want to set yarn.nodemanager.vmem-check-enabled to false.
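(For reference, disabling that virtual-memory check, the option I deliberately avoided, would be done in yarn-site.xml like this:)

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>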

All I did was set this property:

  tez.am.resource.memory.mb

For example:

  hive -hiveconf tez.am.resource.memory.mb=4096

Another setting worth tweaking is:

  yarn.app.mapreduce.am.resource.mb
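(If you go the MapReduce route instead, here is a sketch with illustrative values for an 8 GB node. The key point, which matches the "beyond physical memory limits" error above, is that the JVM heap set via java.opts should stay below the container size, e.g. around 80% of it:)

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx3276m</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>2048</value>
  </property>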
answered Jun 20 by ravikiran
