How to handle exceptions in Spark and Scala


Could you please help me understand exceptions in Scala and Spark? What are the common exceptions we need to handle while writing Spark code?
 

Jan 21, 2019 in Big Data Hadoop by slayer

1 answer to this question.


There is no special format for handling exceptions in Spark; you use ordinary Scala/Java exception handling (try/catch, or scala.util.Try). There are a couple of exceptions you will face on an everyday basis, and they are usually self-explanatory. For example, if a row in the dataset contains more (or fewer) columns than the DataFrame schema declares, you will typically hit an ArrayIndexOutOfBoundsException (or StringIndexOutOfBoundsException) when the row is parsed; if the dataset path is incorrect while creating an RDD or DataFrame, you will get a FileNotFoundException (or, for DataFrames, an AnalysisException reporting that the path does not exist).
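As a minimal sketch of the first case, here is plain Scala (no cluster needed) showing how a row with fewer fields than the schema expects blows up on indexing, and how wrapping the parse in scala.util.Try lets you handle the malformed row instead of crashing the job. The row contents and field count are hypothetical:

```scala
import scala.util.{Try, Success, Failure}

// Hypothetical CSV row with only 2 fields, while the schema expects 3.
val line = "alice,30"

val parsed = Try {
  val cols = line.split(",")
  (cols(0), cols(1), cols(2)) // index 2 does not exist -> exception
}

parsed match {
  case Success(row) =>
    println(s"Parsed: $row")
  case Failure(e: ArrayIndexOutOfBoundsException) =>
    println(s"Malformed row, skipping: $line") // handle or log bad records
  case Failure(other) =>
    throw other // rethrow anything we did not anticipate
}
```

In a real Spark job you would do the same thing inside a `map` over the RDD/Dataset, typically collecting failures into a side output rather than printing them.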

So, in short, it completely depends on the type of code you are executing and the mistakes you make while writing it. That is why an interpreter such as the Spark shell is so useful: it lets you execute code line by line, so you can understand each exception and get rid of it early.
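The second everyday case, a wrong dataset path, can be sketched the same way. Here plain `scala.io.Source` stands in for a Spark read call so the example runs without a cluster; the path is hypothetical and assumed not to exist:

```scala
import java.io.FileNotFoundException
import scala.io.Source
import scala.util.{Try, Failure}

// Hypothetical path; assumed not to exist on this machine.
val path = "/no/such/dataset.csv"

val result = Try(Source.fromFile(path).mkString)

result match {
  case Failure(_: FileNotFoundException) =>
    println(s"Dataset path is wrong: $path") // fail fast with a clear message
  case other =>
    println(other)
}
```

With DataFrames, the analogous failure from `spark.read.csv(path)` surfaces as an AnalysisException ("Path does not exist"), which you can catch in the same pattern-matching style.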

Hope this helps!

To know more about Spark and Scala, it's recommended to join an Apache Spark training online today.

Thanks!!

answered Jan 21, 2019 by Omkar
