How to use multiple Spark versions

-1 vote
I want to install Spark 1 and Spark 2 side by side, without uninstalling either, and use whichever one I need. How can I do this?
Dec 27, 2018 in Big Data Hadoop by digger
• 26,740 points
1,942 views

1 answer to this question.

0 votes

You can use the SPARK_MAJOR_VERSION environment variable for this. Suppose you want to use version 2; set this:

export SPARK_MAJOR_VERSION=2 

Then, to confirm which version is in use, run:

spark-submit --version
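
As a minimal sketch (assuming an HDP-style cluster, where both Spark versions are installed side by side and the spark-submit launcher script honors SPARK_MAJOR_VERSION), switching between the two looks like this:

# Pick Spark 2 for this shell session and confirm it
export SPARK_MAJOR_VERSION=2
spark-submit --version        # should report a 2.x version

# Switch back to Spark 1
export SPARK_MAJOR_VERSION=1
spark-submit --version        # should report a 1.x version

On a plain (non-HDP) install, an alternative is to keep each version in its own directory and point SPARK_HOME and PATH at the one you need (the install path below is hypothetical):

export SPARK_HOME=/opt/spark-2.3.0   # hypothetical install location
export PATH=$SPARK_HOME/bin:$PATH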
answered Dec 27, 2018 by Omkar
• 69,210 points

Related Questions In Big Data Hadoop

0 votes
1 answer

How to sync Hadoop configuration files to multiple nodes?

For syncing Hadoop configuration files, you have ...READ MORE

answered Jun 21, 2018 in Big Data Hadoop by HackTheCode
1,183 views
0 votes
1 answer

How to find Hadoop distribution and version?

Just use the command hadoop version ...READ MORE

answered Apr 6, 2018 in Big Data Hadoop by kurt_cobain
• 9,390 points

edited Apr 6, 2018 by kurt_cobain
1,813 views
0 votes
1 answer

How to use custom FileInputFormat in MapReduce?

You have to override the isSplitable method. ...READ MORE

answered Apr 10, 2018 in Big Data Hadoop by Shubham
• 13,490 points
885 views
0 votes
1 answer

How do I connect my Spark-based HDInsight cluster to my blob storage?

Go through this blog: https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-use-blob-storage#access-blobs I went through this ...READ MORE

answered Apr 15, 2018 in Big Data Hadoop by Shubham
• 13,490 points
1,937 views
+1 vote
1 answer

Hadoop MapReduce word count program

Firstly you need to understand the concept ...READ MORE

answered Mar 16, 2018 in Data Analytics by nitinrawat895
• 11,380 points
10,614 views
0 votes
1 answer

hadoop.mapred vs hadoop.mapreduce?

org.apache.hadoop.mapred is the old API; org.apache.hadoop.mapreduce is the ...READ MORE

answered Mar 16, 2018 in Data Analytics by nitinrawat895
• 11,380 points
2,214 views
+2 votes
11 answers

hadoop fs -put command?

Hi, You can create one directory in HDFS ...READ MORE

answered Mar 16, 2018 in Big Data Hadoop by nitinrawat895
• 11,380 points
104,883 views
0 votes
1 answer

How to read Spark elements having multiple lines each?

Try this: val new_records = sc.newAPIHadoopRDD(hadoopConf,classOf[ ...READ MORE

answered Dec 12, 2018 in Big Data Hadoop by Omkar
• 69,210 points
1,171 views
0 votes
3 answers

Hadoop Spark: How to iterate HDFS directories?

Using PySpark:
hadoop = sc._jvm.org.apache.hadoop
fs = hadoop.fs.FileSystem
conf = ...READ MORE

answered Dec 5, 2018 in Big Data Hadoop by Kiran
10,577 views