How to change the Spark session configuration in PySpark?

0 votes

I am trying to change the default configuration of a Spark session, but it is not working.

from pyspark.sql import SparkSession

spark_session = SparkSession.builder \
    .master("ip") \
    .enableHiveSupport() \
    .getOrCreate()

spark_session.conf.set("spark.executor.memory", '8g')
spark_session.conf.set('spark.executor.cores', '3')
spark_session.conf.set('spark.cores.max', '3')
spark_session.conf.set("spark.driver.memory",'8g')
sc = spark_session.sparkContext

But if I pass the configuration to spark-submit, it works fine for me:

spark-submit --master ip --executor-cores 3 --driver-memory 8G sample.py
May 29, 2018 in Apache Spark by code799
17,575 views

4 answers to this question.

0 votes

Your code is not actually changing the configuration. Open the PySpark shell and check the current settings:

sc.getConf().getAll()

Now run the code below and check the settings of the PySpark shell again.

You first have to create a SparkConf object, and then you can create the SparkContext using that configuration object:

import pyspark

# Build a SparkConf carrying the desired overrides, then restart the context
config = pyspark.SparkConf().setAll([('spark.executor.memory', '8g'),
                                     ('spark.executor.cores', '3'),
                                     ('spark.cores.max', '3'),
                                     ('spark.driver.memory', '8g')])
sc.stop()  # stop the context started by the shell
sc = pyspark.SparkContext(conf=config)  # recreate it with the new conf
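
To confirm the overrides took effect, you can read the settings back from the new context (a quick sanity check; the keys below are the ones set above):

# Print only the keys that were overridden
for key, value in sc.getConf().getAll():
    if key in ('spark.executor.memory', 'spark.executor.cores',
               'spark.cores.max', 'spark.driver.memory'):
        print(key, value)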
answered May 29, 2018 by Shubham
• 13,350 points
0 votes

Adding to Shubham's answer: after updating the configuration, you have to stop the existing Spark context and create a new Spark session with the new conf.

spark.sparkContext.stop()  # stop the context behind the current session
spark = SparkSession.builder.config(conf=conf).getOrCreate()  # rebuild with the new conf
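
For reference, the conf object used above is a SparkConf built beforehand (a minimal sketch; the keys and values here are just examples, not recommendations):

from pyspark import SparkConf

# Assemble the overrides before rebuilding the session
conf = SparkConf().setAll([
    ('spark.executor.memory', '8g'),
    ('spark.driver.memory', '8g'),
])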
answered Dec 10, 2018 by Hilight
0 votes

This should work, where conf1 is a SparkConf object built with the desired properties:

spark = SparkSession.builder.config(conf=conf1).getOrCreate()
sc = spark.sparkContext
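
Keep in mind that once the session is running, spark.conf.set only changes runtime-mutable properties (mostly the spark.sql.* ones); JVM-level settings such as spark.executor.memory are fixed when the context starts, which is why the session has to be rebuilt. For example:

spark.conf.set('spark.sql.shuffle.partitions', '50')   # takes effect immediately
print(spark.conf.get('spark.sql.shuffle.partitions'))  # prints '50'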
answered Dec 10, 2018 by Shikar
0 votes

You can load properties dynamically: create the context with an empty conf, then supply the actual properties at run time through spark-submit. (The snippet below is Scala.)

val sc = new SparkContext(new SparkConf())
spark-submit --master ip --executor-cores 3 --driver-memory 8G sample.py
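
A PySpark equivalent of the Scala line above (a sketch; the actual values come from the spark-submit flags, not from the code):

from pyspark import SparkConf, SparkContext

# Start from an empty conf; spark-submit supplies the real values at launch
sc = SparkContext(conf=SparkConf())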
answered Dec 10, 2018 by Vini
