How can we optimize and minimize memory usage when working with a Scala use case?

When we run a computation over a collection with millions of elements, what can we do so that memory allocation is reduced but the output stays the same? Can anyone explain?
Jul 5 in Apache Spark by nilam

1 answer to this question.


Hi,

Scala supports lazy evaluation, which Spark also relies on to avoid large up-front memory allocation. A lazy val is not evaluated, and no memory is allocated for its value, until it is accessed for the first time. You can see this in the example below:

scala> lazy val x = (1 to 1000).toList



You will get the same output, but no memory is allocated for the val x until it is first used.
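To make this concrete, here is a small sketch (assuming Scala 2.13+, where LazyList is available). A lazy val defers evaluation until first access; for very large sequences, a LazyList goes further and computes elements on demand, so the whole collection is never materialized in memory at once:

```scala
object LazyDemo extends App {
  // Nothing is evaluated or allocated here; the list is built
  // only when `xs` is first accessed.
  lazy val xs: List[Int] = (1 to 1000).toList

  println(xs.sum) // forces evaluation; prints 500500

  // For millions of elements, prefer a lazy collection:
  // elements are computed one at a time as they are consumed.
  val big: LazyList[Int] = LazyList.from(1)

  println(big.take(5).sum) // only the first 5 elements are ever computed; prints 15
}
```

With LazyList, you can chain transformations like map and filter over an effectively unbounded sequence and pay only for the elements you actually consume.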

answered Jul 5 by Gitika
• 19,720 points
