Reading Performance in Hadoop Cluster

My average key-value pair size is 100 bytes, and my primary access pattern is random reads on the table. What should I do to speed up random read performance on my Hadoop cluster?

Can someone help?

Thanks in advance!
Jul 25, 2018 in Big Data Hadoop by Meci Matt

1 answer to this question.

To speed up random reads, one thing you can do is decrease the block size. A larger block size is preferred when files are read mostly sequentially, while smaller blocks are better for random access, because each random read pulls less unneeded data off disk. The trade-off is that smaller blocks require more memory to hold the block index and may be slower to create.
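Since you mention a table with small key-value pairs, this block size is most likely the per-column-family HFile BLOCKSIZE in HBase (default 64 KB), not the HDFS block size. Here is a minimal sketch of setting a smaller block size with the HBase Java client, assuming an HBase table; the table name "usertable", family name "cf", and the 8 KB value are made-up examples to adjust for your workload:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateRandomReadTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Hypothetical table and column family names for illustration.
            HTableDescriptor table =
                new HTableDescriptor(TableName.valueOf("usertable"));
            HColumnDescriptor family = new HColumnDescriptor("cf");
            // Default HFile block size is 64 KB. A smaller block (e.g. 8 KB)
            // means each random Get reads less data, at the cost of a larger
            // block index held in memory.
            family.setBlocksize(8 * 1024);
            table.addFamily(family);
            admin.createTable(table);
        }
    }
}
```

The same attribute can also be set from the HBase shell with BLOCKSIZE when creating or altering a table, so you don't have to write Java code just to try different values.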

Hope this clears your doubt.
answered Jul 25, 2018 by nitinrawat895
