About Available Space Block Placement Policy

+1 vote
First of all, I have been learning Hadoop recently. I'm looking for a block placement policy other than the default one, because from what I know the default block placement policy places blocks on more or less random nodes. Is there another block placement policy besides that? What I found was the Available Space Block Placement Policy, but I found no documentation for it. Has anyone ever used this block placement policy? Can you explain it?
Jan 21, 2020 in Big Data Hadoop by anonymous
• 130 points
792 views

1 answer to this question.

0 votes

Hi,

We know that in HDFS, data is divided into blocks whose size is set by the parameter dfs.block.size in the configuration file hdfs-site.xml. The default block size is 64 MB or 128 MB, depending on your Hadoop version. The default number of replicas is 3, set by the parameter dfs.replication.
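
As a minimal sketch (assuming the Hadoop 2.x property name dfs.blocksize, of which dfs.block.size is the older, deprecated alias), these two settings look roughly like this in hdfs-site.xml:

  <configuration>
    <!-- Block size in bytes: 134217728 = 128 MB -->
    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value>
    </property>
    <!-- Number of replicas kept for each block -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
  </configuration>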

We also know that HDFS uses a rack-aware data placement strategy: if a block is placed on one rack, its copies are placed on another rack, so the data stays available even after a node failure or a switch failure.
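
Note that rack awareness only takes effect if the NameNode knows the cluster topology. One common way to provide it (this part is an assumption on my side, not something stated above, and the script path is just a placeholder) is to point core-site.xml at a topology script that maps each DataNode host or IP to a rack id:

  <!-- core-site.xml: tell the NameNode how to map nodes to racks.
       /etc/hadoop/conf/topology.sh is a placeholder for your own script,
       which should print a rack id such as /rack1 for each host or IP. -->
  <property>
    <name>net.topology.script.file.name</name>
    <value>/etc/hadoop/conf/topology.sh</value>
  </property>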

The default block placement policy in HDFS works as follows:

  1. When a client uploads data to HDFS, the first replica of a block is stored on the local node if the HDFS client is running on a DataNode in the cluster; otherwise it is stored on a random node.

  2. The second replica of the block is stored on a rack other than the one holding the first replica.

  3. The third replica of the block is stored on the same rack as the second replica, but on a different node.

  4. If more replicas are required, they are distributed randomly across the racks in the network, with the restriction that no rack holds more than two replicas of the same block.
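
Coming back to the Available Space Block Placement Policy you asked about: as far as I know (please double-check against the hdfs-default.xml of your Hadoop release, since this is not part of the default-policy description above), the NameNode's placement policy class can be swapped through dfs.block.replicator.classname, and recent Hadoop versions ship an AvailableSpaceBlockPlacementPolicy that prefers DataNodes with more free disk space. A hedged hdfs-site.xml sketch:

  <!-- hdfs-site.xml: swap the NameNode's block placement policy class -->
  <property>
    <name>dfs.block.replicator.classname</name>
    <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy</value>
  </property>
  <!-- How strongly to prefer the DataNode with more free space when the
       policy chooses between two candidates; a value between 0.5 and 1.0
       (0.6 is the assumed default). -->
  <property>
    <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
    <value>0.6</value>
  </property>

With a preference fraction closer to 1.0, new blocks are pushed harder toward the emptier DataNodes; 0.5 would behave essentially like the default policy.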

Hope this will help you to clear your doubt.

Thank You

answered Jan 29, 2020 by MD
• 95,440 points
