Passing a condition dynamically to a Spark application


How do I pass a condition like dF.filter(Id == 3000) dynamically to a Spark application?

Feb 19 in Apache Spark by Nanda

1 answer to this question.

You can try this (here desiredThings is the collection of values you want to keep):

import org.apache.spark.sql.functions.col

d.filter(col("value").isin(desiredThings: _*))
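For context, here is a minimal, self-contained sketch of the isin approach. The sample data, the column name value, and the desiredThings sequence are illustrative assumptions, not part of the original answer:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object IsinExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("isin-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Illustrative single-column DataFrame
    val d = Seq(1000, 2000, 3000, 4000).toDF("value")

    // Values to keep; in a real application these could come from args or a config file
    val desiredThings = Seq(2000, 3000)

    // isin expands the sequence into an IN (...) predicate
    d.filter(col("value").isin(desiredThings: _*)).show()

    spark.stop()
  }
}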

If you want to build the predicate with foldLeft instead, you have to provide the base condition:

import org.apache.spark.sql.functions.{col, lit}

d.filter(desiredThings.foldLeft(lit(false))(
  (acc, x) => acc || col("value") === x
))
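To see what the fold produces: starting from lit(false), each step ORs in one more equality, so for desiredThings = Seq(2000, 3000) the predicate is equivalent to the first line below. The second line is a common variant (an alternative sketch, not from the answer) that avoids the lit(false) seed by using map and reduce:

import org.apache.spark.sql.functions.{col, lit}

// What the foldLeft expands to for desiredThings = Seq(2000, 3000)
d.filter(lit(false) || col("value") === 2000 || col("value") === 3000)

// Equivalent variant without the lit(false) seed (requires a non-empty desiredThings)
d.filter(desiredThings.map(x => col("value") === x).reduce(_ || _))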

Alternatively, to use it with filter or where, you can generate a SQL expression string:

val filterExpr = desiredThings.map(v => s"value = $v").mkString(" or ")

and then use it like this:

d.filter(filterExpr).show()

// or

d.where(filterExpr).show()
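Coming back to the original question of passing the condition dynamically: since filter and where accept SQL expression strings, one minimal sketch is to pass the condition as a program argument at submit time. The argument handling, the app name, and the input path below are all illustrative assumptions:

import org.apache.spark.sql.SparkSession

object DynamicFilterApp {
  def main(args: Array[String]): Unit = {
    // e.g. spark-submit --class DynamicFilterApp app.jar "Id = 3000"
    val filterExpr = args(0)

    val spark = SparkSession.builder()
      .appName("dynamic-filter")
      .getOrCreate()

    // Illustrative input; replace with your actual source
    val dF = spark.read.parquet("/path/to/input")

    // The condition is decided entirely at submit time
    dF.where(filterExpr).show()

    spark.stop()
  }
}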
answered Feb 19 by Omkar
