How can we retrieve the complete HQL/Hive query from Hive, Spark, and Tez?

Hi team,

I want to know: if I have an application ID, how can I find the Hive query that was executed for that particular application ID, using Hive, the Tez view, or Spark?

Is there any way to find out what the HQL was for a particular application ID? I know we can see this from the Resource Manager, but it does not show the complete query, only part of it. If you have any information on this, please provide examples.
Jul 23 in Big Data Hadoop by Karan

1 answer to this question.

To get the full query that ran for the application ID, go to the Tez UI from Ambari (there you can see the query history).

Steps to reach the Tez View via Ambari:

1) From the Ambari home page, hover over the top right corner and select "Tez View".

2) Next, you can search either by application ID or by the Hive query itself to find your application.

3) Select your application; the entire Hive query should be displayed there, along with the status of the query.

Alternatively, you can get the full query from hiveserver2.log on the HiveServer2 host; see the sketch below.
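As a rough illustration, here is a minimal Python sketch that scans a hiveserver2.log file for query text. It assumes the common HiveServer2 log pattern "Compiling command(queryId=...): <HQL>"; the exact message format, log location, and the way long queries wrap across lines depend on your Hive version and log4j configuration, so treat this as a starting point rather than a complete parser.

import re
import sys

# Minimal sketch: pull HQL text out of hiveserver2.log.
# Assumes log lines of the form
#   ... Compiling command(queryId=hive_20190723...): SELECT ...
# The exact layout depends on the Hive version and log4j settings,
# and multi-line queries may continue on following log lines.
QUERY_LINE = re.compile(r"(?:Compiling|Executing) command\(queryId=([^)]+)\):\s*(.*)")

def find_queries(log_path, search_term):
    """Print the query ID and first line of HQL for every entry matching search_term."""
    with open(log_path, "r", errors="ignore") as log:
        for line in log:
            match = QUERY_LINE.search(line)
            if match and search_term in line:
                query_id, hql = match.groups()
                print("queryId=" + query_id)
                print(hql)
                print("-" * 40)

if __name__ == "__main__":
    # Example usage (paths and arguments are illustrative, not fixed values):
    #   python find_hql.py /var/log/hive/hiveserver2.log hive_20190723
    find_queries(sys.argv[1], sys.argv[2])

You can pass the query ID, a table name, or any fragment of the query as the search term; correlating the query ID with your YARN application ID is still easiest from the Tez View described above.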
answered Jul 23 by Lohit
