Why doesn't my Spark YARN client run on all available worker machines?


I am running an application on a Spark cluster in YARN client mode with 4 nodes. Other than the master node, there are three worker nodes available, but Spark executes the application on only two of them. The workers are selected at random; there isn't a specific pair that gets selected each time the application is run.

For the worker not being used, the following lines are printed in the logs:

INFO  Client:54

      client token: N/A
      diagnostics: N/A
      ApplicationMaster host: 192.168.0.67
      ApplicationMaster RPC port: 0
      queue: default
      start time: 1550748030360
      final status: UNDEFINED
      tracking URL: http://aiserver:8088/proxy/application_1550744631375_0004/
      user: root

Find the spark-submit command below:

spark-submit --master yarn --class com.i2c.chprofiling.App App.jar --num-executors 4 --executor-cores 3 --conf "spark.locality.wait.node=0"
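Note that spark-submit treats everything after the application JAR as arguments to the application's main class, so in the command above the --num-executors, --executor-cores, and --conf flags may never reach Spark at all. The same command with the options placed before the JAR (class name and JAR path taken from the question) would look like:

spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-cores 3 \
  --conf "spark.locality.wait.node=0" \
  --class com.i2c.chprofiling.App \
  App.jar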


Feb 22, 2019 in Apache Spark by Uzair Ahmad


Have you tried using the --deploy-mode cluster option?

I tried using cluster mode but I get the following exception:

diagnostics: Application application_1550748865132_0022 failed 2 times due to AM Container for appattempt_1550748865132_0022_000002 exited with  exitCode: 13
For more detailed output, check the application tracking page: http://aiserver:8088/cluster/app/application_1550748865132_0022 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1550748865132_0022_02_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
        at org.apache.hadoop.util.Shell.run(Shell.java:482)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 13
Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1550819137278
         final status: FAILED
         tracking URL: http://aiserver:8088/cluster/app/application_1550748865132_0022
         user: root
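For exit code 13, the underlying exception usually appears in the ApplicationMaster container logs, which can be pulled with the YARN CLI (application ID taken from the output above):

yarn logs -applicationId application_1550748865132_0022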


Any help will be highly appreciated.

Have you set the master in your code to be local?

new SparkConf().setMaster("local[*]")

No, it is set to "yarn".
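For reference, a minimal sketch of a main class that hardcodes no master at all, leaving the choice to spark-submit's --master flag (assuming Spark 2.x with Scala; the object and app name are illustrative):

import org.apache.spark.sql.SparkSession

object App {
  def main(args: Array[String]): Unit = {
    // No .master(...) call here: a master set in code overrides the
    // --master flag passed to spark-submit, so leaving it out lets the
    // same jar run under yarn (client or cluster mode) or local[*].
    val spark = SparkSession.builder()
      .appName("chprofiling")
      .getOrCreate()

    // ... application logic ...

    spark.stop()
  }
}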

Try --master yarn-client (in Spark 2.x this is equivalent to --master yarn --deploy-mode client). This resolves most cases of error code 13.

Hi, I receive the above-mentioned exception only when using --deploy-mode cluster, as you suggested. In client mode I don't receive this exception, but the available worker nodes are not all being utilized. Please refer to the description.
