What is the best way to merge multi-part HDFS files into a single file?


I am using Spark 2.2.1. My application code creates several 0-byte/very small part files like the ones below.

part-04498-f33fc4b5-47d9-4d14-b37e-8f670cb2c53c-c000.snappy.parquet
part-04499-f33fc4b5-47d9-4d14-b37e-8f670cb2c53c-c000.snappy.parquet

All of these files are either 0-byte files with no actual data or very small files.

1. What is the best way to merge all of these files into a single HDFS file?
2. If all of these are 0-byte files, I want to get rid of them. Can I achieve this with some setting in Spark?

I tried the below but the multi-part files were still there.

sc.textFile("hdfs://nameservice1/data/refined/lzcimp/mbr/part*").coalesce(1).saveAsTextFile("hdfs://nameservice1/data/refined/lzcimp/mbr/final.snappy.parquet")
Jul 29, 2019 in Big Data Hadoop by Karan

1 answer to this question.


1. To merge two or more files into a single file and store it in HDFS, you first need a folder in the HDFS path containing the files that you want to merge.

Here, I have a folder named merge_files that contains the files I want to merge. [screenshot of the merge_files listing]

Then you can execute the following command to merge the files and store the result in HDFS:

hadoop fs -cat /user/edureka_425640/merge_files/* | hadoop fs -put - /user/edureka_425640/merged_files

The merged_files file need not be created beforehand; it is created automatically to hold the output when you run the above command. You can view the result with the following command (here, merged_files holds my output). Note that this is a plain byte-level concatenation: it works for text files, but binary formats such as Parquet cannot be merged this way, because the concatenated bytes will not form a valid Parquet file. Alternatively, hadoop fs -getmerge <hdfs-dir> <local-file> performs the same concatenation into a single local file in one step.

hadoop fs -cat merged_files
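The same pipe pattern can be sketched locally with plain files, so it runs without a cluster; the folder and file contents below are made up for illustration:

```shell
# Local sketch of the "hadoop fs -cat ... | hadoop fs -put -" pipeline,
# using ordinary files instead of HDFS paths.
mkdir -p merge_files
printf 'alpha\n' > merge_files/file1.txt
printf 'beta\n'  > merge_files/file2.txt

# cat expands the glob and concatenates every file into one stream,
# which is redirected into a single output file.
cat merge_files/* > merged_files

cat merged_files    # prints: alpha, then beta
```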

Suppose we have a folder with multiple empty files and some non-empty files; to delete only the empty files, we can use the command below:

hdfs dfs -rm $(hdfs dfs -ls -R /user/A/ | grep -v "^d" | awk '{if ($5 == 0) print $8}')

Here I have a folder, temp_folder, with three files: two empty and one non-empty. [screenshots of the temp_folder listing before and after running the command]
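For comparison, on a local filesystem the same cleanup is a one-liner, because find has a built-in size filter while hdfs dfs does not (which is why the HDFS version has to parse ls -R output with awk). The folder and file names below are made up for illustration:

```shell
# Local analogue of the empty-file cleanup above.
mkdir -p temp_folder
: > temp_folder/empty1.txt          # ":" with redirection creates a 0-byte file
: > temp_folder/empty2.txt
printf 'data\n' > temp_folder/nonempty.txt

# -size 0 matches empty files; -delete removes each match.
find temp_folder -type f -size 0 -delete

ls temp_folder    # prints: nonempty.txt
```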

answered Jul 29, 2019 by Tina

ALTER TABLE table_name [PARTITION partition_spec] CONCATENATE

Hello @Pedros, what does this code do?

Hello, can I ask how do I add 100+ text files into your merge_files folder?

Hey,

This is technically what cat ("concatenate") is meant to do, even though most people just use it to print files to stdout. If you give it multiple filenames, it will output them all sequentially, and you can then redirect that into a new file. To include all the files, just use * (or /path/to/directory/* if you're not already in that directory) and your shell will expand it to all the filenames:

$ cat * > merged-file
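A runnable sketch of that glob expansion with a hundred small files (all names below are made up):

```shell
# Create 100 small text files, then merge them with a single glob.
mkdir -p parts
for i in $(seq -w 1 100); do
    printf 'file %s\n' "$i" > "parts/part-$i.txt"
done

# Writing the output OUTSIDE the globbed directory avoids the classic trap
# of the output file itself matching the glob on a re-run.
cat parts/* > merged-file

wc -l < merged-file    # prints: 100
```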

