Bucketing in Hive


Could you please let me know how many buckets are created by default in the HDFS location when inserting data, if buckets are not defined in the CREATE statement?

Feb 11 in Big Data Hadoop by Dinesh

1 answer to this question.

By default, if bucketing is not defined in the CREATE statement, only 1 bucket is created, and that is not efficient. Each bucket is written out as a file in HDFS, so the number of buckets should be kept in line with the amount of data to be stored; if you create far more buckets than the data warrants (for example, one bucket per tiny file), you end up with many small files and storage becomes very inefficient. The optimal number of buckets should therefore be decided based on the number and size of the underlying data files.
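If you want more than the single default bucket, you declare the count explicitly in the CREATE statement with CLUSTERED BY ... INTO n BUCKETS. Here is a minimal sketch; the table and column names (employee_bucketed, employee_staging, id, name, salary) and the choice of 4 buckets are just for illustration:

-- Bucketed table: rows are hashed on "id" into 4 bucket files under the table's HDFS directory
CREATE TABLE employee_bucketed (
    id     INT,
    name   STRING,
    salary DOUBLE
)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC;

-- On Hive versions before 2.x, bucketing has to be enforced at insert time
SET hive.enforce.bucketing = true;

-- Populate the bucketed table from a hypothetical staging table
INSERT INTO TABLE employee_bucketed
SELECT id, name, salary FROM employee_staging;

Since each bucket becomes one file in HDFS, the bucket count you pick here is exactly what determines how many files the data is split into, which is why it should be matched to the size of the data.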
answered Feb 11 by Omkar
• 65,810 points

