Spark and Scala auxiliary constructor doubt

0 votes

Refer to the code snippet:
 

class AuxDuck() {
  var size = 0
  var age = 0

  println("Slayer")

  def this(size: Int) {
    this()            // calls the primary constructor
    this.size = size
  }

  def this(size: Int, age: Int) {
    this(size)        // calls the previous auxiliary constructor
    this.age = age
  }
}

object AuxilaryConstructor extends App {
  val d1 = new AuxDuck()
  println(d1.size + "," + d1.age)
}

It gives this output:
Slayer
0,0

May I know how "Slayer" got printed?
Does the primary constructor play any role in printing "Slayer"?

Jan 8 in Apache Spark by slayer
• 29,050 points
28 views

1 answer to this question.

0 votes
println("Slayer") is an anonymous block and gets executed each time an object is created.

Hence Slayer is printed.
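
As a minimal sketch of that chaining (assuming the AuxDuck class above is in scope; the demo object name here is made up), creating an instance through each constructor prints "Slayer" once per object:

object ConstructorChainDemo extends App {
  // Each line below prints "Slayer" once, because every constructor path
  // eventually runs the primary constructor body.
  val d1 = new AuxDuck()       // primary constructor only
  val d2 = new AuxDuck(5)      // auxiliary -> primary
  val d3 = new AuxDuck(5, 2)   // auxiliary -> auxiliary -> primary
  println(d3.size + "," + d3.age) // prints 5,2 with the corrected class above
}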
answered Jan 8 by Omkar
• 67,140 points
