Getting null values in Spark DataFrame while reading data from HBase

+1 vote

I am reading data from HBase using Spark SQL JDBC. One column contains XML data. When the XML is small, I am able to read correct data in all columns. But if the size of the XML in a given row grows too large, some of the columns in the DataFrame become null for that row, while the XML column itself is still read correctly.
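For reference, a minimal sketch of this kind of read path, assuming an Apache Phoenix JDBC driver in front of HBase (the URL, table and column names below are hypothetical placeholders, not the asker's actual setup):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hbase-jdbc-read")
  .getOrCreate()

// Read the HBase-backed table over JDBC. The URL, driver and table name
// below are placeholders for whatever the actual environment uses.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:phoenix:zk-host:2181")                // hypothetical Phoenix JDBC URL
  .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
  .option("dbtable", "MY_TABLE")                             // hypothetical table name
  .load()

// Hypothetical columns: the XML column plus the ones that reportedly turn null for large rows.
df.select("ROW_KEY", "XML_PAYLOAD", "OTHER_COL").show(truncate = false)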

Jul 17, 2018 in Apache Spark by Raj
• 130 points
454 views

1 answer to this question.

0 votes
Can you share screenshots of the issue?
answered Jul 31, 2018 by kurt_cobain
• 9,260 points
