Can we run Spark without using Hadoop?

0 votes

Are Spark & Hadoop tightly coupled? Can we use Spark in a standalone mode without Hadoop?

Mar 23, 2018 in Big Data Hadoop by kurt_cobain
• 9,350 points
2,364 views

3 answers to this question.

0 votes

Yes, you can. To install Spark in standalone mode, you simply place a compiled version of Spark on each node of the cluster. The Spark Standalone cluster manager is Spark's own built-in cluster environment. Since it ships with the default distribution of Apache Spark, it is often the easiest way to run Spark applications in a clustered environment.
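As a rough sketch, launching a standalone cluster only needs the scripts that ship with the Spark distribution itself; no Hadoop install is involved. Host names below are placeholders for your own machines (note that newer Spark releases rename start-slave.sh to start-worker.sh):

```shell
# On the machine chosen as master (run from the unpacked Spark directory):
./sbin/start-master.sh            # master web UI defaults to port 8080

# On each worker node, point the worker at the master's URL:
./sbin/start-slave.sh spark://<master-host>:7077

# Submit a bundled example application to the standalone cluster:
./bin/spark-submit \
  --master spark://<master-host>:7077 \
  examples/src/main/python/pi.py 10
```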

answered Mar 23, 2018 by Shubham
• 13,490 points
+1 vote
No, they are not tightly coupled; you can run Spark without Hadoop.
Spark is just a processing engine. If you run it on Hadoop, you also have the option of using HDFS and its features.
answered May 7, 2019 by pradeep
0 votes

Hey,

Yes, Spark can run without Hadoop. All core Spark features will continue to work, but you will miss conveniences such as easily distributing your files to all the nodes in the cluster through HDFS.

However, Spark only does the processing and uses memory to perform its tasks; to persist the data you need some storage system. This is where Hadoop comes in alongside Spark: HDFS provides the storage layer. Another reason for using them together is that both are open source and integrate with each other more easily than most other storage systems.
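To illustrate the storage point (assuming a local Spark install; host names and paths are placeholders), Spark reads from the plain local filesystem with no Hadoop at all, and only needs an HDFS URI when Hadoop supplies the storage:

```shell
# Local mode: no Hadoop, no cluster manager -- Spark runs on local threads
./bin/spark-shell --master "local[*]"

# Inside the shell, a file:// path reads straight from the local filesystem:
#   scala> spark.read.textFile("file:///tmp/input.txt").count()

# With Hadoop present, the same API reads from HDFS instead:
#   scala> spark.read.textFile("hdfs://<namenode>:8020/data/input.txt").count()
```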

answered May 8, 2019 by Gitika
• 65,770 points
