Can we run Spark without using Hadoop?

0 votes

Are Spark & Hadoop tightly coupled? Can we use Spark in a standalone mode without Hadoop?

Mar 22, 2018 in Big Data Hadoop by kurt_cobain
• 9,240 points
139 views

3 answers to this question.

0 votes

Yes, you can. To install Spark in standalone mode, you simply place a compiled version of Spark on each node of the cluster. Spark Standalone is Spark's own built-in cluster manager. Since it ships with the default distribution of Apache Spark, it is in many cases the easiest way to run your Spark applications in a clustered environment.
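As a rough sketch (host names, ports, and the example jar version here are placeholders, not from the question), bringing up a standalone cluster and submitting a job to it looks something like this:

```shell
# On the master node: start the standalone master
# (it logs a spark://host:port URL for workers and jobs to connect to)
./sbin/start-master.sh

# On each worker node: start a worker pointed at the master URL
./sbin/start-worker.sh spark://master-host:7077

# Submit an application to the standalone cluster -- no Hadoop or YARN involved
./bin/spark-submit \
  --master spark://master-host:7077 \
  --class org.apache.spark.examples.SparkPi \
  ./examples/jars/spark-examples_2.12-3.5.0.jar 100
```

Note that older Spark releases name the worker script `start-slave.sh` instead of `start-worker.sh`.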

answered Mar 22, 2018 by Shubham
• 13,290 points
+1 vote
Yes, you can run Spark without Hadoop.
Spark is just a processing engine. If you run it on Hadoop, you additionally have the option to use HDFS features.
answered May 7 by pradeep
0 votes

Hey,

Yes, Spark can run without Hadoop. All core Spark features will continue to work, but you will miss conveniences such as HDFS, which makes it easy to distribute your files to all the nodes in the cluster.

But Spark only does processing, and it uses memory dynamically to perform its tasks; to store the data you still need some storage system. This is where Hadoop comes in alongside Spark: HDFS provides the storage layer for Spark. One more reason for using them together is that both are open source and integrate with each other more easily than with other storage systems.
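To illustrate the point above (a minimal sketch; the input path is a placeholder), Spark can read from the plain local filesystem with a `file://` URI when there is no HDFS, running in local mode with no cluster manager at all:

```shell
# Start a local Spark shell -- no Hadoop, YARN, or HDFS required
./bin/spark-shell --master "local[*]"

# Inside the shell (Scala), read from the local filesystem
# via file:// instead of an hdfs:// path:
#   val lines = spark.read.textFile("file:///tmp/input.txt")
#   println(lines.count())
```

The trade-off is that in a multi-node cluster a local path must exist on every worker, which is exactly the distribution problem HDFS solves.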

answered May 8 by Gitika
• 25,300 points
