How to handle exceptions in Spark and Scala?


Could you please help me understand exceptions in Scala and Spark? Also, what are the common exceptions that we need to handle while writing Spark code?
 

asked Jan 21 in Big Data Hadoop by slayer

1 answer to this question.


There is no particular format for handling exceptions in Spark; you use the standard Scala mechanisms such as try/catch or scala.util.Try. There are a couple of exceptions that you will face on an everyday basis, and they are mostly self-explanatory. For example, if a row in the dataset contains more columns than the DataFrame schema declares, you can hit a StringIndexOutOfBoundsException (or an ArrayIndexOutOfBoundsException) while the row is parsed, and if the dataset path is incorrect while creating an RDD or DataFrame, you will face a FileNotFoundException (the DataFrame reader typically reports this as an AnalysisException saying the path does not exist).
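As a concrete illustration, here is a minimal sketch of catching these errors around a DataFrame read. The object name, app name, and input path are hypothetical, and which exception you actually see depends on the API: the DataFrame reader usually reports a missing path as an AnalysisException, while lower-level file APIs can surface a FileNotFoundException directly.

import java.io.FileNotFoundException
import org.apache.spark.sql.{AnalysisException, SparkSession}

object ReadWithHandling {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ExceptionHandlingDemo")   // hypothetical app name
      .master("local[*]")
      .getOrCreate()

    try {
      // The DataFrame reader checks the path eagerly, so a bad path fails here
      val df = spark.read.option("header", "true").csv("/data/input.csv") // hypothetical path
      df.show()
    } catch {
      case e: AnalysisException =>
        // Spark SQL reports a missing input path as "Path does not exist"
        println(s"Input path problem: ${e.getMessage}")
      case e: FileNotFoundException =>
        // Lower-level file APIs surface missing files directly
        println(s"File not found: ${e.getMessage}")
    } finally {
      spark.stop()
    }
  }
}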

So, in short, it completely depends on the type of code you are executing and the mistakes you make while writing it. That is why we have an interpreter such as the Spark shell (spark-shell), which lets you execute the code line by line so you can understand an exception as soon as it appears and get rid of it early.
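When experimenting in spark-shell, you can also wrap an individual step in scala.util.Try so a failure prints the exception instead of aborting your session. A minimal sketch, assuming spark is the SparkSession that spark-shell already provides and the JSON path is illustrative:

import scala.util.{Failure, Success, Try}

// Wrap one step; count() is an action, so lazily deferred errors surface here too
val result = Try(spark.read.json("/data/events.json").count()) // hypothetical path

result match {
  case Success(n)  => println(s"Row count: $n")
  case Failure(ex) => println(s"Step failed: ${ex.getClass.getSimpleName}: ${ex.getMessage}")
}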

answered Jan 21 by Omkar

