About Available Space Block Placement Policy

+1 vote
First of all, I'm new to Hadoop and have been learning it recently. I'm looking for a block placement policy other than the default one, because as far as I know the default policy places data blocks more or less randomly. Is there another block placement policy besides that? What I found was the Available Space Block Placement Policy, but I couldn't find any documentation for it. Has anyone ever used this block placement policy? Can you explain it?
Jan 21 in Big Data Hadoop by anonymous
• 130 points
103 views

1 answer to this question.

0 votes

Hi,

In HDFS, data is divided into blocks whose size is set by the parameter dfs.block.size (dfs.blocksize in newer releases) in the config file named hdfs-site.xml. The default block size is 64 MB or 128 MB, depending on your Hadoop version (64 MB in Hadoop 1.x, 128 MB in Hadoop 2.x and later). The default replication factor is 3, set by the parameter dfs.replication.
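For example, a minimal hdfs-site.xml sketch that sets these two parameters explicitly might look like this (the values shown are just the usual defaults, not anything specific to your cluster):

  <configuration>
    <!-- Block size in bytes: 128 MB -->
    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value>
    </property>
    <!-- Number of replicas kept for each block -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
  </configuration>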

HDFS uses a rack-aware data placement strategy: if a block is placed on one rack, a copy of it is placed on another rack, so the data remains available when a node or an entire rack switch fails.

The default block placement policy in HDFS works as follows (see the configuration sketch after this list for how a different policy, such as the available-space one you mention, can be plugged in):

  1. When a client writes data to HDFS, the first replica of a block is stored on the local node if the client is running on a datanode in the cluster; otherwise it is stored on a random node.

  2. The second replica is stored on a node in a rack different from the one holding the first replica.

  3. The third replica is stored on another node in the same rack as the second replica.

  4. Any remaining replicas are distributed randomly across the racks in the network, with the restriction that no single rack holds more than two replicas of the same block.
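To come back to the Available Space Block Placement Policy you asked about: the NameNode's placement policy is pluggable, and recent Hadoop releases ship a policy class that biases replica placement toward datanodes with more free space while still following the rack rules above. Below is a rough hdfs-site.xml sketch; the property names and class name are as I remember them from hdfs-default.xml, so please verify them against the documentation for your Hadoop version before using them.

  <configuration>
    <!-- Swap the default placement policy for the available-space one -->
    <property>
      <name>dfs.block.replicator.classname</name>
      <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy</value>
    </property>
    <!-- How strongly to prefer the datanode with more free space when
         choosing between two candidates (0.5 = no preference, 1.0 = always
         pick the node with more space); 0.6 is the usual default -->
    <property>
      <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
      <value>0.6</value>
    </property>
  </configuration>

With this in place, the NameNode still applies the rack-aware rules above, but when it has to pick between otherwise equivalent datanodes it leans toward the one with more available space, which helps keep disk usage more even on clusters with heterogeneous disks.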

Hope this will help you to clear your doubt.

Thank You

answered Jan 29 by MD
• 24,500 points
