How does the Hive job execution flow work?


Hi,

Can anyone help me understand how the Hive execution flow works?

May 13 in Big Data Hadoop by disha

1 answer to this question.

Hey,

A Hive query goes through the following steps:

1. The query is submitted from the UI, the CLI, or the Thrift server and is received by the driver.
2. The driver passes the query to the compiler for validation.
3. The compiler, in turn, contacts the metastore to check the schema and perform semantic validation of the query; the metastore responds, and the compiler returns a query plan to the driver.
4. The driver hands the plan to the execution engine. The plan contains a directed acyclic graph (DAG) of MapReduce jobs (tasks) that must run on the cluster to produce the results.
5. The execution engine submits the DAG to the Hadoop cluster, where it actually gets executed, and the results are sent back to the execution engine.
6. The execution engine returns those results to the driver, and the driver finally sends them back to the client program.

This is how the Hive communication flow works.
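
To see this path in action, below is a minimal sketch of a client submitting a query to HiveServer2 over JDBC (the Thrift route described above) and printing the plan the compiler produces via EXPLAIN. The host/port, credentials, and the employees table are assumptions for illustration; adjust them for your cluster and make sure the hive-jdbc driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveExplainDemo {
    public static void main(String[] args) throws Exception {
        // Client -> Thrift server (HiveServer2): the URL, user, and database
        // here are assumptions; replace them with your own values.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {

            // EXPLAIN asks the compiler for the plan without executing it, so
            // the rows printed below describe the stages (the DAG of jobs)
            // that the execution engine would submit to the cluster.
            // "employees" is a hypothetical table used only for illustration.
            ResultSet rs = stmt.executeQuery(
                "EXPLAIN SELECT dept, COUNT(*) FROM employees GROUP BY dept");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

Running a plain SELECT instead of EXPLAIN follows the same route end to end: the driver gets the plan from the compiler, the execution engine runs the DAG on the cluster, and the result rows flow back through the driver to this client.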

answered May 13 by Gitika
