In this blog, I will take a deep dive into the Hadoop 2.0 Cluster Architecture Federation. Apache Hadoop has evolved a lot since the release of Apache Hadoop 1.x. As you know from my previous blog, the HDFS Architecture follows a Master/Slave topology, where the NameNode acts as the master daemon and is responsible for managing the slave nodes, called DataNodes. In this ecosystem, the single master daemon, i.e. the NameNode, becomes a bottleneck, whereas companies need a NameNode that is highly available. This very reason became the foundation of the HDFS Federation Architecture and the HA (High Availability) Architecture.
The topics that I have covered in this blog are as follows:
As you can see in the figure above, the current HDFS has two layers:
1. HDFS Namespace: It consists of directories, files and blocks. It is managed by the NameNode, which supports namespace operations like creating, deleting, modifying and listing files and directories.
2. Physical Storage: It is managed by the DataNodes, which are responsible for storing the data and thereby provide Read/Write access to the data stored in HDFS.
So, the current HDFS Architecture allows you to have a single namespace for a cluster. In this architecture, a single NameNode is responsible for managing the namespace. This architecture is very convenient and easy to implement. Also, it provides sufficient capability to cater the needs of the small production cluster.
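The single-namespace design above can be sketched in a few lines of Python. This is only an illustrative model with made-up names, not Hadoop's actual implementation: one NameNode object holds the entire namespace and maps each file to the DataNodes storing its blocks, so every metadata operation must pass through that one daemon.

```python
# Minimal sketch of the single-NameNode topology (hypothetical names,
# not Hadoop source). One master holds the whole namespace.
class NameNode:
    def __init__(self):
        # file path -> list of (datanode, block_id) locations
        self.namespace = {}

    def create_file(self, path, block_locations):
        self.namespace[path] = block_locations

    def lookup(self, path):
        # Every client read/write starts with this metadata lookup,
        # which is why a single NameNode becomes the bottleneck.
        return self.namespace[path]

nn = NameNode()
nn.create_file("/logs/app.log", [("datanode1", 0), ("datanode2", 1)])
print(nn.lookup("/logs/app.log"))
```

The sketch also makes the Single Point of Failure obvious: if the one `NameNode` object is lost, every path-to-block mapping is lost with it.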
As discussed earlier, the current HDFS sufficed for the needs and use cases of a small production cluster. But big organizations like Yahoo and Facebook found some limitations as their HDFS clusters grew exponentially. Let us have a quick look at some of the limitations:
The pictorial representation of the HDFS Federation Architecture is given below:
Before moving ahead, let me briefly talk about the above architectural image:
Now, let’s understand the components of the HDFS Federation Architecture in detail:
A block pool is nothing but a set of blocks belonging to a specific namespace. So, we have a collection of block pools, where each block pool is managed independently of the others. This independence allows a namespace to generate Block IDs for new blocks without coordinating with the other namespaces. The data blocks of all the block pools are stored on all the DataNodes. Basically, a block pool provides an abstraction such that the data blocks residing in the DataNodes (as in the Single Namespace Architecture) can be grouped under a particular namespace.
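The block-pool idea can be sketched as follows. This is a toy model with assumed names (`BlockPool`, `DataNode` here are not Hadoop classes): each pool keeps its own private block-ID counter, so two namespaces can hand out the same Block ID without any coordination, and the DataNode disambiguates blocks by keying its storage on (pool ID, block ID).

```python
# Illustrative sketch of block pools (hypothetical classes, not Hadoop code).
class BlockPool:
    """A set of blocks belonging to one namespace, managed independently."""
    def __init__(self, pool_id):
        self.pool_id = pool_id
        self._next_block_id = 0   # private counter: no cross-pool coordination

    def allocate_block(self):
        # Each pool generates Block IDs on its own, exactly because
        # pools are independent of each other.
        block_id = self._next_block_id
        self._next_block_id += 1
        return block_id

class DataNode:
    """Stores blocks from *all* block pools, keyed by (pool_id, block_id)."""
    def __init__(self):
        self.storage = {}

    def store(self, pool_id, block_id, data):
        self.storage[(pool_id, block_id)] = data

# Two independent pools both allocate Block ID 0 without a clash,
# because the pool ID groups blocks per namespace on the DataNode.
pool_a, pool_b = BlockPool("BP-a"), BlockPool("BP-b")
dn = DataNode()
dn.store(pool_a.pool_id, pool_a.allocate_block(), b"x")
dn.store(pool_b.pool_id, pool_b.allocate_block(), b"y")
```

Note how one `DataNode` holds blocks from both pools, mirroring the statement above that every DataNode stores data for all block pools.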
A namespace volume is nothing but a namespace along with its block pool. Therefore, in HDFS Federation we have multiple namespace volumes. It is a self-contained unit of management, i.e. each namespace volume can function independently. If a NameNode or namespace is deleted, the corresponding block pool residing on the DataNodes is also deleted.
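To make this concrete, a federated deployment is declared in `hdfs-site.xml` by listing the nameservices and the address of each NameNode. The property names below (`dfs.nameservices`, `dfs.namenode.rpc-address.*`, `dfs.namenode.http-address.*`) come from the HDFS Federation documentation; the nameservice IDs, hostnames and ports are placeholders you would replace with your own.

```xml
<!-- hdfs-site.xml: two federated NameNodes, ns1 and ns2.
     Hostnames and ports are example values. Every DataNode in the
     cluster registers with both NameNodes. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn-host1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>nn-host1:9870</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn-host2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>nn-host2:9870</value>
  </property>
</configuration>
```

Each nameservice here corresponds to one namespace volume: its NameNode manages its own namespace and block pool, independent of the other.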
Now, I guess you have a pretty good idea about the HDFS Federation Architecture. It is more of a theoretical concept, and people generally do not use it in practical production systems. There are some implementation issues with HDFS Federation that make it difficult to deploy. Therefore, the HA (High Availability) Architecture is preferred for solving the Single Point of Failure problem. I have covered the HDFS HA Architecture in my next blog.
Now that you have understood the Hadoop HDFS Federation Architecture, check out the Big Data training in Chennai by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka Big Data Hadoop Certification Training course helps learners become experts in HDFS, Yarn, MapReduce, Pig, Hive, HBase, Oozie, Flume and Sqoop using real-time use cases in the Retail, Social Media, Aviation, Tourism and Finance domains.
Got a question for us? Please mention it in the comments section and we will get back to you.