Spark on YARN: changing the maximum number of times to submit an application

0 votes
What is the default value for the maximum number of times an application can be submitted, and how can I change it?
Mar 28, 2019 in Apache Spark by Karuna
2,012 views

1 answer to this question.

0 votes

By default, the maximum number of times to submit an application is controlled by the YARN property yarn.resourcemanager.am.max-attempts, whose default value is 2.
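
For reference, this cluster-wide cap lives in yarn-site.xml. A minimal sketch (the value 4 here is only an example, not a recommendation):

<property>
  <!-- cluster-wide cap on ApplicationMaster attempts; the default is 2 -->
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>4</value>
</property>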

If you want to override this for a single Spark application, set spark.yarn.maxAppAttempts. You can do it when creating the SparkContext:

val sc = new SparkContext(new SparkConf().set("spark.yarn.maxAppAttempts", "<max number of attempts>"))

or pass it to spark-submit with --conf:

./bin/spark-submit <all your existing options> --conf spark.yarn.maxAppAttempts=<max number of attempts>
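
For illustration, a complete submission might look like the sketch below; the main class, JAR name, and the attempt count of 4 are placeholders, not values from the question:

# hypothetical example: class name, JAR, and attempt count are placeholders
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --conf spark.yarn.maxAppAttempts=4 \
  myapp.jar

Note that spark.yarn.maxAppAttempts should be no larger than the cluster-wide yarn.resourcemanager.am.max-attempts; YARN honours the smaller of the two values.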
answered Mar 28, 2019 by Raj

Related Questions In Apache Spark

0 votes
1 answer

When running Spark on Yarn, do I need to install Spark on all nodes of Yarn Cluster?

No, it is not necessary to install ...READ MORE

answered Jun 14, 2018 in Apache Spark by nitinrawat895
• 11,380 points
6,290 views
0 votes
1 answer

How to use the ftp scheme with Yarn in a Spark application?

In case Yarn does not support schemes ...READ MORE

answered Mar 28, 2019 in Apache Spark by Raj
1,213 views
0 votes
1 answer

Changing Yarn queue in Spark application

To change the default queue to which ...READ MORE

answered Mar 28, 2019 in Apache Spark by Raj
5,455 views
+1 vote
1 answer

Hadoop Mapreduce word count Program

Firstly you need to understand the concept ...READ MORE

answered Mar 16, 2018 in Data Analytics by nitinrawat895
• 11,380 points
11,077 views
0 votes
1 answer

hadoop.mapred vs hadoop.mapreduce?

org.apache.hadoop.mapred is the old API; org.apache.hadoop.mapreduce is the ...READ MORE

answered Mar 16, 2018 in Data Analytics by nitinrawat895
• 11,380 points
2,575 views
+2 votes
11 answers

hadoop fs -put command?

Hi, You can create one directory in HDFS ...READ MORE

answered Mar 16, 2018 in Big Data Hadoop by nitinrawat895
• 11,380 points
109,073 views
+1 vote
8 answers

How to print the contents of RDD in Apache Spark?

Save it to a text file: line.saveAsTextFile("alicia.txt") Print contents ...READ MORE

answered Dec 10, 2018 in Apache Spark by Akshay
61,867 views
0 votes
1 answer

How to get ID of a map task in Spark?

You can access task information using TaskContext: import org.apache.spark.TaskContext sc.parallelize(Seq[Int](), ...READ MORE

answered Nov 20, 2018 in Apache Spark by Frankie
• 9,830 points
3,484 views