py4j.protocol.Py4JError: org.apache.spark.api.python.PythonUtils.getEncryptionEnabled does not exist in the JVM

+2 votes

I tried to integrate Python with PySpark, but I am getting the error below.

20/04/06 10:46:17 WARN Utils: Your hostname, localhost.localdomain resolves to a loopback address:; using instead (on interface enp0s3)
20/04/06 10:46:17 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/04/06 10:46:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
  File "", line 5, in <module>
    sc = SparkContext.getOrCreate(SparkConf())
  File "/usr/local/lib/python3.6/site-packages/pyspark/", line 367, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/usr/local/lib/python3.6/site-packages/pyspark/", line 136, in __init__
    conf, jsc, profiler_cls)
  File "/usr/local/lib/python3.6/site-packages/pyspark/", line 213, in _do_init
    self._encryption_enabled = self._jvm.PythonUtils.getEncryptionEnabled(self._jsc)
  File "hadoop2.7/python/lib/", line 1516, in __getattr__
py4j.protocol.Py4JError: org.apache.spark.api.python.PythonUtils.getEncryptionEnabled does not exist in the JVM

How can I solve this error?
Thank You

Apr 7, 2020 in Apache Spark by akhtar
• 38,210 points

2 answers to this question.

0 votes


This error may occur if you haven't set the environment variables in your .bashrc file. Set your Python environment variable as follows.

export PATH=$SPARK_HOME/bin:$SPARK_HOME/python:$PATH
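As a sketch, assuming Spark is installed at /opt/spark (adjust SPARK_HOME to your own machine), the .bashrc entries could look like:

```shell
# Hypothetical ~/.bashrc entries -- adjust SPARK_HOME to your installation
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$SPARK_HOME/python:$PATH
# Spark also ships py4j under $SPARK_HOME/python/lib (exact zip filename
# varies by version); adding that directory tree to PYTHONPATH helps a
# plain `python` interpreter find the bundled pyspark and py4j sources.
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
```

After editing, run `source ~/.bashrc` (or open a new terminal) so the variables take effect.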

Hope this will solve your error.


Thank You

answered Apr 7, 2020 by MD
• 95,320 points
Where can I find the .bashrc file? What is its location?

You can find the .bashrc file in the user's home directory; for root, that is /root/.bashrc.
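For a non-root user the file is ~/.bashrc. A quick sketch of how sourcing a bashrc-style file updates the current shell (using a temporary file here to stand in for .bashrc; in practice you edit ~/.bashrc itself):

```shell
# Write example exports to a temp file standing in for ~/.bashrc
cat > /tmp/example_bashrc <<'EOF'
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$SPARK_HOME/python:$PATH
EOF

# Sourcing runs the file in the current shell, so the exports persist
. /tmp/example_bashrc
echo "$SPARK_HOME"
```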

0 votes

Using findspark is expected to solve the problem:

Install findspark:

$ pip install findspark

In your code, before importing pyspark, use:

import findspark
findspark.init()

Optionally, you can pass the Spark installation path to the init method: findspark.init("/path/to/spark").
answered Jun 21, 2020 by suvasish

I think the findspark module is used to connect to Spark from a remote system. But this error occurs because of a Python library issue.
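For context on why findspark helps with a library-path problem like this: it essentially prepends Spark's bundled Python sources to sys.path before pyspark is imported. A stdlib-only sketch of that idea (the path and helper name are hypothetical, not findspark's actual API):

```python
import os
import sys

def init_spark_paths(spark_home):
    """Mimic the core idea of findspark.init(): expose Spark's bundled
    Python sources (pyspark and the py4j zip) to this interpreter."""
    python_dir = os.path.join(spark_home, "python")
    lib_dir = os.path.join(python_dir, "lib")
    paths = [python_dir]
    # Spark ships py4j as a source zip under python/lib; pick it up if present
    if os.path.isdir(lib_dir):
        paths += [os.path.join(lib_dir, name)
                  for name in os.listdir(lib_dir) if name.endswith("-src.zip")]
    # Prepend so the bundled versions win over any mismatched pip installs
    for p in reversed(paths):
        sys.path.insert(0, p)
    os.environ["SPARK_HOME"] = spark_home
    return paths

# Usage (hypothetical install location):
added = init_spark_paths("/opt/spark")
```

Keeping the pyspark that Python imports in sync with the Spark JVM it talks to is exactly what avoids "does not exist in the JVM" mismatches.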
