Why does the ResourceManager crash after some time or while accessing HDFS in Hadoop 2.8.1 on Ubuntu 16.04?

0 votes

I have set up a Hadoop 2.8.1 cluster on Ubuntu 16.04 LTS, with 1 machine running the NameNode daemon and 2 machines running the DataNode daemon. I am using it for testing purposes for now, so I have allocated 20 GB of space to each machine.

Whenever I start all the daemons, my ResourceManager crashes either within the first minute or as soon as I try to access HDFS.

My configurations are as follows:

/etc/hosts:

192.168.15.20 slave1 slave1
192.168.15.21 master2 master2
192.168.15.22 slave3 slave3
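
(One Ubuntu-specific point worth checking, since stock Ubuntu installs add a 127.0.1.1 <hostname> line to /etc/hosts: that entry can make the Hadoop daemons bind to the loopback interface instead of the LAN address. A hedged sketch of what the file usually ends up looking like on each node after that line is commented out:)

127.0.0.1   localhost
# 127.0.1.1 master2    <- commented out; this loopback alias can confuse daemon binding
192.168.15.20 slave1 slave1
192.168.15.21 master2 master2
192.168.15.22 slave3 slave3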

hdfs-site.xml (master)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
       <name>dfs.replication</name>
       <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/usr1/hadoop/store/hdfs/namenode</value>
    </property>
</configuration>

hdfs-site.xml (slaves)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
       <name>dfs.replication</name>
       <value>3</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/usr1/hadoop/store/hdfs/datanode</value>
    </property>
</configuration>

core-site.xml (master & slaves)

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master2:9000</value>
    </property>
</configuration>
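
(Side note: fs.default.name is the deprecated Hadoop 1.x key; Hadoop 2.x still honors it but logs a deprecation warning. The current equivalent, shown here with the same value, is:)

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master2:9000</value>
    </property>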

JAVA_HOME (hadoop-env.sh)

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

.bashrc

# -- HADOOP ENVIRONMENT VARIABLES START -- #
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/usr/lib/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/*:.
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.egd=file:/dev/../dev/urandom"

mapred-site.xml

<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
<property>
 <name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master2:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master2:19888</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Djava.security.egd=file:/dev/../dev/urandom</value>
</property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master2:8025</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master2:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master2:8051</value>
</property>
</configuration>
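
(For completeness: running MapReduce jobs on YARN also needs the shuffle aux-service on every NodeManager. If it is not set in a part of yarn-site.xml not shown here, a minimal sketch of the missing properties would look like the following; the memory value is illustrative and should match what each node can actually spare:)

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <!-- Illustrative cap on the memory YARN may hand out per node, in MB -->
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>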

Looking at the ports gives the following results:
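
(A typical way to inspect the daemons and the ports they listen on, node by node; the exact output is not reproduced here:)

jps                                  # lists the running Hadoop JVMs (NameNode, ResourceManager, ...)
sudo netstat -tulpn | grep java      # shows which ports those JVMs are bound to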

Apr 15, 2018 in Big Data Hadoop by coldcode
• 2,010 points

1 answer to this question.

0 votes
I was facing the same problem, and I later realized it was caused by a lack of RAM. I increased the RAM of the DataNodes by 2x and of the NameNode by 4x, and everything started working fine.

In my experience, 8-10 GB of total RAM is a good fit for your case. The Java heap of the ResourceManager, NodeManager and DataNode should each be at least 0.6-0.7 GB, so each machine should get around 2 GB of RAM at minimum. And since the NameNode keeps a map of every data block in memory, it needs more; I would recommend allocating it 2-4 GB.
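
(As a sketch of how those heaps can be pinned explicitly: the variable names below are the stock ones in the Hadoop 2.x hadoop-env.sh and yarn-env.sh scripts, and the values are illustrative, not a recommendation for your exact machines.)

# hadoop-env.sh -- HDFS daemon heaps, in MB
export HADOOP_HEAPSIZE=1000                                    # base heap for HDFS daemons
export HADOOP_NAMENODE_OPTS="-Xmx2048m $HADOOP_NAMENODE_OPTS"  # bump only the NameNode

# yarn-env.sh -- YARN daemon heaps, in MB
export YARN_RESOURCEMANAGER_HEAPSIZE=1024
export YARN_NODEMANAGER_HEAPSIZE=1024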
answered Apr 15, 2018 by Shubham
• 13,110 points
