What should be the choice of database and what data format is suitable for Spark and Hadoop?

0 votes


I am working on structured data (one value per field, the same fields for every row) that I have to put in a NoSQL environment with Spark (as the analysis tool) and Hadoop. However, I am wondering what format to use. I was thinking about JSON or CSV, but I'm not sure. What do you think, and why? I don't have enough experience in this field to decide properly.

2nd question: I have to analyse this data (stored in HDFS). As far as I know, I have two possibilities for querying it before the analysis:

  1. Direct reading and filtering, which can be done with Spark, for example:

    data = sqlCtxt.read.json(path_data)
    
  2. Use HBase/Hive to run a proper query first and then process the result with Spark (a rough sketch of this option follows below).
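For context, a minimal PySpark sketch of what option 2 could look like using Spark's built-in Hive support (the table name `my_table` and the filter are placeholders, assuming the data has already been registered as a Hive table):

    from pyspark.sql import SparkSession

    # Hive support lets Spark query tables registered in the Hive metastore
    spark = (SparkSession.builder
             .appName("hive-filter-example")
             .enableHiveSupport()
             .getOrCreate())

    # Push the filtering into the query, then analyse the smaller result with Spark
    filtered = spark.sql("SELECT * FROM my_table WHERE some_column > 100")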

So, I don't know what the standard way of doing all this is and, above all, which approach will be the fastest.

Sep 28, 2018 in Big Data Hadoop by Neha
• 6,300 points
775 views

1 answer to this question.

0 votes
Use Parquet. I'm not sure about CSV, but definitely don't use JSON. In my personal experience, reading JSON from storage with Spark was extremely slow; after switching to Parquet my read times were much faster (for example, some small files that took minutes to load as compressed JSON now load in less than a second as compressed Parquet).
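To illustrate, here is a minimal PySpark sketch of the switch (all paths are placeholders, assuming the data currently lives as JSON on HDFS):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

    # One-off conversion: read the JSON once, write it back out as Parquet
    df = spark.read.json("hdfs:///data/input_json")
    df.write.mode("overwrite").parquet("hdfs:///data/input_parquet")

    # All later reads hit the Parquet copy, which loads much faster
    df_parquet = spark.read.parquet("hdfs:///data/input_parquet")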

On top of improving read speeds, compressed Parquet can be split by Spark when reading, whereas compressed JSON cannot. This means Parquet can be loaded onto multiple cluster workers in parallel, whereas compressed JSON will just be read onto a single node as one partition. That isn't a good idea if your files are large: you'll get Out of Memory exceptions, and your computation won't be parallelized, so you'll be executing on one node. That isn't the 'Sparky' way of doing things.
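You can check this yourself by comparing partition counts after each read (a rough sketch; the paths and the gzipped file name are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # A gzipped JSON file is not splittable: Spark reads it as a single partition
    json_df = spark.read.json("hdfs:///data/input_json/events.json.gz")
    print(json_df.rdd.getNumPartitions())     # typically 1

    # Parquet is splittable, so the read is spread across many tasks/partitions
    parquet_df = spark.read.parquet("hdfs:///data/input_parquet")
    print(parquet_df.rdd.getNumPartitions())  # usually > 1, scales with data size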

Final point: you can use Spark SQL to run queries directly on stored Parquet files, without having to read them into DataFrames first. Very handy.
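For example (the path is a placeholder), Spark SQL can be pointed straight at a Parquet directory:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Query the Parquet files in place, no temp view or prior DataFrame needed
    result = spark.sql(
        "SELECT COUNT(*) AS n FROM parquet.`hdfs:///data/input_parquet`"
    )
    result.show()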

Hope this helps :)
answered Sep 28, 2018 by Frankie
• 9,830 points
