Reading Performance in Hadoop Cluster

+1 vote
I have an average key-value pair size of 100 bytes, and my primary access pattern is random reads on the table. What should I do to speed up random-read performance on my Hadoop cluster?

Can someone help?

Thanks in advance!
Jul 25, 2018 in Big Data Hadoop by Meci Matt
• 9,460 points
529 views

1 answer to this question.

0 votes
To speed up random reads on your cluster, one thing you can do is decrease the block size. A larger block size is preferred when files are read primarily sequentially, whereas smaller blocks are better for random access. The trade-off is that smaller blocks require more memory to hold the block index and may be slower to create.
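
As a minimal sketch, assuming the table here is an HBase table (the block-index note above matches HBase's per-column-family HFile block size, which defaults to 64 KB): you can set a smaller block size when creating the table, or alter an existing one. The table name 'mytable' and column family 'cf' below are placeholders:

hbase> create 'mytable', {NAME => 'cf', BLOCKSIZE => '8192'}
# for an existing table, change the setting in place
# (existing HFiles pick up the new size after compaction):
hbase> alter 'mytable', {NAME => 'cf', BLOCKSIZE => '8192'}

With ~100-byte key-value pairs, an 8 KB block still holds dozens of entries, so each random read decodes far less data than with the 64 KB default, at the cost of a larger in-memory block index.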

Hope this will clear your doubt.
answered Jul 25, 2018 by nitinrawat895
• 11,380 points

Related Questions In Big Data Hadoop

0 votes
2 answers

How can I list NameNode & DataNodes from any machine in the Hadoop cluster?

You can browse the Hadoop page from any ...READ MORE

answered Jan 23, 2020 in Big Data Hadoop by MD
• 95,440 points
11,160 views
+1 vote
0 answers

How to set up a Hadoop cluster on Mac in IntelliJ IDEA

I have installed Hadoop using brew and ...READ MORE

Jul 25, 2018 in Big Data Hadoop by Neha
• 6,300 points
929 views
–1 vote
1 answer

Hadoop dfs -ls command?

In your case there is no difference ...READ MORE

answered Mar 16, 2018 in Big Data Hadoop by kurt_cobain
• 9,390 points
4,298 views
+1 vote
1 answer

Hadoop Mapreduce word count Program

Firstly you need to understand the concept ...READ MORE

answered Mar 16, 2018 in Data Analytics by nitinrawat895
• 11,380 points
10,619 views
0 votes
1 answer

How to get started with Hadoop?

Well, Hadoop is actually a framework that ...READ MORE

answered Mar 21, 2018 in Big Data Hadoop by coldcode
• 2,080 points
921 views
0 votes
1 answer

Different ports in a Hadoop cluster environment?

The image below will help you in understanding ...READ MORE

answered Apr 6, 2018 in Big Data Hadoop by nitinrawat895
• 11,380 points
1,544 views
0 votes
1 answer

How to delete a directory whose name contains a comma (,) from a Hadoop cluster?

Just try the following command: hadoop fs -rm ...READ MORE

answered May 7, 2018 in Big Data Hadoop by nitinrawat895
• 11,380 points
2,802 views