If there are two joins in hive, how many mapreduce jobs will run?

asked Dec 19, 2018 in Big Data Hadoop by slayer

1 answer to this question.

To answer the question directly: it depends on the join keys. Hive compiles multiple joins into a single MapReduce job when every table uses the same column in its join clauses; if the two joins are on different keys, each join becomes its own MapReduce job, so two jobs run. You can verify the plan with EXPLAIN, as sketched below.

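A minimal sketch, assuming three hypothetical tables a, b, and c; counting the MapReduce stages in the EXPLAIN output shows how many jobs the query launches:

-- Both joins share the same key (a.id): Hive compiles ONE MapReduce job
EXPLAIN
SELECT a.val, b.val, c.val
FROM a JOIN b ON (a.id = b.id)
       JOIN c ON (c.id = a.id);

-- The joins use different keys: Hive compiles TWO MapReduce jobs
EXPLAIN
SELECT a.val, b.val, c.val
FROM a JOIN b ON (a.id = b.id)
       JOIN c ON (c.key = b.key2);

The rest of this answer covers how many mappers and reducers each of those jobs gets.
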
There are two considerations for the number of mappers:
(1) Number of mappers per slave node
(2) Number of mappers per MapReduce job

(1) Number of mappers per slave node: There is no exact formula; it depends on how many cores and how much memory each slave node has. As a rule of thumb, each mapper should get 1 to 1.5 cores, so a node with 15 cores can run about 10 mappers. With 100 data nodes in the cluster, that works out to roughly 1,000 mappers running concurrently.

(2) Number of mappers per MapReduce job: The number of mappers equals the number of InputSplits generated by the job's InputFormat (its getSplits method). If you have a 640 MB file and the data block size is 128 MB, the job runs 640 / 128 = 5 mappers.
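If you want to influence the mapper count from inside Hive, you can cap the split size. A minimal sketch, assuming the job reads plain files whose splits are governed by the standard split-size properties (the 128 MB value is only an example):

-- Cap each input split at 128 MB (134217728 bytes), so a 640 MB file
-- yields 640 / 128 = 5 splits and therefore 5 mappers:
SET mapred.max.split.size=134217728;
-- On newer Hadoop versions the same knob is named:
SET mapreduce.input.fileinputformat.split.maxsize=134217728;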

Reducers:
There are two considerations for the number of reducers:
(1) Number of reducers per slave node
(2) Number of reducers per MapReduce job

(1) Number of reducers per slave node: The same reasoning as for mappers per slave node applies.

(2) Number of reducers per MapReduce job:
The right number of reducers can be set with one of the following formulas:
0.95 * <no. of nodes> * mapred.tasktracker.reduce.tasks.maximum
or
1.75 * <no. of nodes> * mapred.tasktracker.reduce.tasks.maximum

With 0.95, all of the reducers can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes finish their first round of reduces and launch a second wave, which gives better load balancing.
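Plugging numbers into the formula: a hypothetical cluster with 10 nodes and mapred.tasktracker.reduce.tasks.maximum = 2 gives 0.95 * 10 * 2 = 19 reducers. A minimal sketch of applying this from a Hive session (all values are illustrative):

-- Force the reducer count computed above (mapreduce.job.reduces is the
-- newer name of mapred.reduce.tasks):
SET mapreduce.job.reduces=19;
-- Alternatively, let Hive choose: it allocates one reducer per this many
-- input bytes, capped at a maximum count:
SET hive.exec.reducers.bytes.per.reducer=268435456;
SET hive.exec.reducers.max=19;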

answered Dec 19, 2018 by Omkar
