How to kill a running Spark application?

0 votes
I cannot allocate resources to any Spark application because one of the running applications is occupying all the cores.

Help needed.

Thanks in advance
Apr 25, 2018 in Apache Spark by Ashish
• 2,650 points
1,014 views

1 answer to this question.

0 votes
You can copy the application ID from the Spark scheduler UI.

Then connect to the server that is running the application you want to kill and run:

yarn application -kill <application_id>
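For example, here is a minimal sketch (the application ID below is made up; substitute the one from your own cluster). You can first list the running YARN applications to find the one occupying the cores, then kill it:

# List applications currently running on YARN; note the Application-Id column
yarn application -list -appStates RUNNING

# Kill the offending application using the ID from the listing
yarn application -kill application_1524640123456_0001

Once the application is killed, its cores and memory are released back to YARN and can be allocated to other applications.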
answered Apr 25, 2018 by kurt_cobain
• 9,390 points

Related Questions In Apache Spark

0 votes
1 answer

Is it mandatory to start Hadoop to run a Spark application?

No, it is not mandatory, but there ...READ MORE

answered Jun 14, 2018 in Apache Spark by nitinrawat895
• 11,380 points
360 views
0 votes
1 answer

When running Spark on YARN, do I need to install Spark on all nodes of the YARN cluster?

No, it is not necessary to install ...READ MORE

answered Jun 14, 2018 in Apache Spark by nitinrawat895
• 11,380 points
4,209 views
0 votes
1 answer

What is Executor Memory in a Spark application?

Every Spark application has the same fixed heap ...READ MORE

answered Jan 5, 2019 in Apache Spark by Frankie
• 9,830 points
4,899 views
0 votes
1 answer

Passing a condition dynamically to a Spark application.

You can try this: d.filter(col("value").isin(desiredThings: _*)) and if you ...READ MORE

answered Feb 19, 2019 in Apache Spark by Omkar
• 69,210 points
6,618 views
0 votes
0 answers

How can I kill a process by name instead of PID, on Linux?

Sometimes when I try to start Firefox ...READ MORE

Apr 13 in Linux Administration by Rahul
• 9,000 points
24 views
0 votes
0 answers

What killed my process and why?

This happened two times. I asked if ...READ MORE

Apr 13 in Linux Administration by Aditya
• 7,300 points
31 views
0 votes
1 answer

How to stop messages from being displayed on the Spark console?

In your log4j.properties file you need to ...READ MORE

answered Apr 24, 2018 in Apache Spark by kurt_cobain
• 9,390 points
4,275 views
+1 vote
2 answers

Hadoop 3 compatibility with older versions of Hive, Pig, Sqoop and Spark

Hadoop 3 is not widely used in ...READ MORE

answered Apr 20, 2018 in Apache Spark by kurt_cobain
• 9,390 points
4,391 views
0 votes
1 answer

Efficient way to read specific columns from a Parquet file in Spark

As Parquet is a column-based storage ...READ MORE

answered Apr 20, 2018 in Apache Spark by kurt_cobain
• 9,390 points
5,511 views