Passing condition dynamically to Spark application

0 votes

How to pass a condition like dF.filter(Id == 3000) dynamically to a Spark application?

Feb 19, 2019 in Apache Spark by Nanda
8,388 views
I need to know how to pass a filter condition such as dF.filter(Id == 3000) to a Spark application dynamically.

1 answer to this question.

0 votes

You can try this, using isin:

import org.apache.spark.sql.functions._  // for col and lit

d.filter(col("value").isin(desiredThings: _*))

And if you want to use foldLeft, you have to provide the base condition:

d.filter(desiredThings.foldLeft(lit(false))(
  (acc, x) => acc || col("value") === x
))
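
Here lit(false) is the base condition (the identity for ||): each fold step ORs in one equality test, so an empty desiredThings yields a condition that matches no rows.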

Alternatively, to use it with filter or where, you can generate a SQL expression string:

val filterExpr = desiredThings.map(v => s"value = $v").mkString(" or ")
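
For example, with desiredThings = Seq(3000, 4000), filterExpr evaluates to the string "value = 3000 or value = 4000".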

And then use it like this:

d.filter(filterExpr).show()
// or
d.where(filterExpr).show()
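
All of the above assumes the values are already in a Scala collection. To get them into a Spark application dynamically, one common approach is to read them from the command-line arguments passed to spark-submit. Here is a minimal sketch; the object name, input path, and column name are illustrative assumptions, not part of the original answer:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical application skeleton for illustration.
object DynamicFilterApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DynamicFilterApp").getOrCreate()

    // Filter values arrive as program arguments, e.g.
    //   spark-submit --class DynamicFilterApp app.jar 3000 4000
    val desiredThings = args.map(_.toInt).toSeq

    val d = spark.read.parquet("/path/to/data")  // assumed input source
    d.filter(col("value").isin(desiredThings: _*)).show()

    spark.stop()
  }
}

With this pattern, changing the condition only requires changing the arguments at submit time, not recompiling the application.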
answered Feb 19, 2019 by Omkar
• 69,210 points
