How can we run Spark SQL over Hive tables in our cluster?

Dec 26, 2018 in Big Data Hadoop by digger

1 answer to this question.


Open spark-shell and create a HiveContext from the existing SparkContext (sc), then run your query through it:

scala> import org.apache.spark.sql.hive._
scala> val hc = new HiveContext(sc)
scala> hc.sql("your query").show()
answered Dec 26, 2018 by Omkar

A fuller example in spark-shell, creating a Hive table, loading data into it, and querying it:

scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> sqlContext.sql("CREATE TABLE IF NOT EXISTS employee(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")
scala> sqlContext.sql("LOAD DATA LOCAL INPATH 'employee.txt' INTO TABLE employee")
scala> val result = sqlContext.sql("SELECT id, name, age FROM employee")
scala> result.show()
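Note that HiveContext belongs to Spark 1.x. On Spark 2.x and later it is deprecated in favor of a SparkSession with Hive support enabled (in spark-shell, the prebuilt `spark` session usually has this already). A minimal sketch of the same workflow, assuming Spark 2.x+, a reachable Hive metastore, and the `employee` table from the example above:

```scala
// SparkSession replaces HiveContext in Spark 2.x+
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("HiveQuery")
  .enableHiveSupport() // picks up the metastore config from hive-site.xml
  .getOrCreate()

// Queries against Hive tables go through spark.sql, same as hc.sql before
val result = spark.sql("SELECT id, name, age FROM employee")
result.show()
```

Inside spark-shell you can skip the builder and just call `spark.sql(...)` directly, since the shell constructs the session for you.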
