How are blocks created while writing a file to HDFS?

Suppose I have a 1 GB file that I want to store in HDFS. When I copy it to HDFS, how is the file divided and stored?
Dec 21, 2018 in Big Data Hadoop by slayer

1 answer to this question.

Suppose we want to write a 1 GB file to HDFS. That 1 GB is broken into multiple 128 MB blocks (128 MB is the default block size in Hadoop 2.x).
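As a quick back-of-the-envelope check, assuming the default 128 MB block size, we can compute how many blocks a 1 GB file produces:

```shell
# Default HDFS block size in Hadoop 2.x is 128 MB (dfs.blocksize).
FILE_SIZE=$(( 1024 * 1024 * 1024 ))    # 1 GB file
BLOCK_SIZE=$(( 128 * 1024 * 1024 ))    # 128 MB block
# Ceiling division: the last block of a file may be smaller than 128 MB.
NUM_BLOCKS=$(( (FILE_SIZE + BLOCK_SIZE - 1) / BLOCK_SIZE ))
echo "$NUM_BLOCKS blocks"              # 8 blocks
```

So a 1 GB file is stored as eight 128 MB blocks; a file that is not an exact multiple of the block size would end with one smaller block.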

The write operation takes place in a pipeline.

Every block of data is written to a DataNode and then replicated; by default the replication factor is 3, so each block ends up on three DataNodes.

Each replica is also stored as a 128 MB block, but on a different rack (rack awareness). The data itself is not reduced in size by this process.
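Since replication copies the data rather than compressing it, the raw storage consumed on the cluster is the file size multiplied by the replication factor. A small sketch of that arithmetic for the 1 GB example:

```shell
# Replication does not shrink data: raw cluster usage = file size x replication factor.
FILE_SIZE_GB=1
REPLICATION=3          # default dfs.replication
RAW_GB=$(( FILE_SIZE_GB * REPLICATION ))
echo "${RAW_GB} GB raw storage used"   # 3 GB raw storage used
```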

Once the client finishes writing one block to all three DataNodes, it repeats the same process for the next block of data.

Thus the writing happens in a pipeline, block by block, until the whole file is stored.
answered Dec 21, 2018 by Omkar
