What are the levels of parallelism in Spark Streaming?

Jul 26, 2018 in Apache Spark by shams

1 answer to this question.


To reduce the processing time, one needs to increase the parallelism. Spark Streaming provides three ways to do this (a combined sketch follows the list):
(1) Increase the number of receivers: if there are too many records for a single receiver (a single machine) to read in and distribute, that receiver becomes a bottleneck, so increase the number of receivers according to the scenario.
(2) Repartition the received data: if you are not in a position to increase the number of receivers, redistribute the data by repartitioning it.
(3) Increase parallelism in aggregation: give shuffle-based aggregations such as reduceByKey an explicit, higher partition count.
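
Here is a minimal Scala sketch showing all three techniques in one Spark Streaming job. The host name, port, receiver count, and partition counts are illustrative placeholders, not values from the question:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ParallelismSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("parallelism-sketch")
    val ssc  = new StreamingContext(conf, Seconds(2))

    // (1) More receivers: create several input streams and union them,
    // so ingestion is spread over several executors instead of one.
    val numReceivers = 3 // placeholder; tune for your ingest rate
    val streams = (1 to numReceivers).map(_ => ssc.socketTextStream("source-host", 9999))
    val unioned  = ssc.union(streams)

    // (2) Repartition the received data: if adding receivers is not an
    // option, spread the received blocks across more partitions instead.
    val repartitioned = unioned.repartition(12) // placeholder count

    // (3) Parallelism in aggregation: shuffle-based operations such as
    // reduceByKey accept an explicit number of partitions.
    val counts = repartitioned
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _, 12)

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```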

answered Jul 26, 2018 by zombie
Clusters will not be fully utilized unless the level of parallelism for each operation is high enough. Spark automatically sets the number of partitions of an input file according to its size, and does the same for distributed shuffles. By default, Spark creates one partition for each HDFS block of the file (64 MB per block by default in older Hadoop versions; 128 MB in Hadoop 2.x).
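
A small sketch of that default behavior (the path and partition counts are made-up placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("partition-count"))

// One partition per HDFS block: a 640 MB file stored with 64 MB blocks
// comes back as roughly 10 partitions.
val lines = sc.textFile("hdfs:///data/input.txt") // placeholder path
println(s"default partitions: ${lines.getNumPartitions}")

// A higher minimum can be requested for more parallelism:
val moreParallel = sc.textFile("hdfs:///data/input.txt", 40)
println(s"requested partitions: ${moreParallel.getNumPartitions}")
```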

