How to retain Spark jar and app jar after staging?

0 votes
Hi. My Spark application stages some files as part of its run. The problem is that the Spark jar and app jar files are automatically deleted after staging once the job finishes. I want to retain these files so I can study them, as I am new to this concept. Can someone tell me how to do this?
Mar 26, 2019 in Apache Spark by Jimmy
108 views

1 answer to this question.

0 votes

By default, the Spark jar, app jar, and distributed cache files are deleted from the YARN staging directory once the job finishes. If you want to preserve these files, you have to enable the spark.yarn.preserve.staging.files property. You can pass it to spark-submit with the --conf flag:

./bin/spark-submit <all your existing options> --conf spark.yarn.preserve.staging.files=true

You can also set it programmatically when building the context:

val conf = new SparkConf().set("spark.yarn.preserve.staging.files", "true")
val sc = new SparkContext(conf)
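
After the job finishes, the staged files should still be there for you to inspect. As a quick sanity check (a sketch, assuming the default layout on YARN, where staging files live under .sparkStaging in your HDFS home directory):

# List the retained staging files for a finished application;
# replace <application_id> with the ID YARN reported for your job
hdfs dfs -ls /user/$USER/.sparkStaging/<application_id>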
answered Mar 26, 2019 by Ginni
