How to find the number of null values in a dataframe?

asked May 3 in Apache Spark by anonymous


1 answer to this question.


Hey there!

You can use the select method of the DataFrame together with the count, when, and isnull functions to count the null values in each column:

from pyspark.sql.functions import count, when, isnull
df.select([count(when(isnull(c), c)).alias(c) for c in df.columns]).show()

This will display a table with column names and the number of Null values in each column.
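
For reference, here is a minimal, self-contained sketch; the SparkSession setup and the toy DataFrame (with made-up column names id, name, and age) are only assumptions for illustration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import count, when, isnull

spark = SparkSession.builder.appName("null-counts").getOrCreate()

# Hypothetical toy DataFrame containing a few null values
df = spark.createDataFrame(
    [(1, "Alice", 25), (2, None, None), (3, "Carol", None)],
    ["id", "name", "age"]
)

# Count the null values in every column
df.select([count(when(isnull(c), c)).alias(c) for c in df.columns]).show()

# Expected output:
# +---+----+---+
# | id|name|age|
# +---+----+---+
# |  0|   1|  2|
# +---+----+---+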

If you want to count the null values in a particular column, you can use the below code:

df.where(df["<Enter column name here>"].isNull()).count()
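
Applied to the hypothetical toy DataFrame from the sketch above, this would look like:

# Count the rows where the (assumed) "age" column is null
df.where(df["age"].isNull()).count()   # returns 2 for the toy data above
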
answered May 3 by Omkar
