Monitoring Spark application

0 votes
Hi, in Hadoop MapReduce we ran the jar file from the edge node. In Spark, on which node do we execute spark-submit?
1) How do we monitor a Spark application? Is there any UI?
2) If there is a UI, which node of the cluster does it represent?
Aug 9 in Apache Spark by Rishi
18 views

1 answer to this question.

0 votes

Spark-submit jobs are also run from the client/edge node, but the execution environment depends on the --master and --deploy-mode parameters. If --deploy-mode is cluster, the driver runs inside the Spark cluster; if it is client, the driver runs on the edge node itself.
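As a rough sketch, the two deploy modes look like this on the command line (the YARN master, main class, and jar name below are placeholders, not from the question):

```shell
# Driver runs inside the cluster, on a node chosen by the resource manager
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar

# Driver runs on the edge node you launched from (handy for debugging,
# since driver logs and the driver UI stay local)
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  my-app.jar
```

In client mode the driver's web UI is reachable on the edge node itself; in cluster mode you have to find which node the driver landed on (the resource manager UI links to it).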

1. To monitor the Spark application there is a Spark web UI, from which you can see the application's current status as well as its history.

2. The Spark web UI shows the jobs running across all worker nodes as a whole; no specific worker node is identified, because which worker gets assigned to which job is decided automatically. Only the master node's log has those details.
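With default settings, the driver's UI is served on port 4040 of whichever node runs the driver, the standalone master's UI on port 8080 of the master node, and the History Server (for finished applications) on port 18080. To have finished jobs appear in the history UI, event logging must be enabled; a minimal spark-defaults.conf sketch, with a placeholder HDFS log directory, might be:

```shell
# spark-defaults.conf fragment (hdfs:///spark-logs is a placeholder path)
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-logs
spark.history.fs.logDirectory    hdfs:///spark-logs
```

The History Server process itself is started separately (e.g. via sbin/start-history-server.sh in the Spark distribution) and reads from the same log directory.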

answered Aug 9 by Umesh
