What is the correct way to get a Hadoop FileSystem object for reading from and writing to HDFS?

Can anyone show me the right way to obtain a Hadoop FileSystem object so that I can read and write a file in HDFS? I tried the following:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));

final FileSystem fs = FileSystem.get(conf);

The Configuration class documentation says that core-site.xml is loaded automatically when a Configuration object is created, as long as the file is on the classpath, so there should be no need to add it again.

Even when hdfs-site.xml is not added, the code seems to work fine.

Is it recommended to configure only core-site.xml and skip hdfs-site.xml, or is it mandatory to add both? Please let me know.
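For reference, this is the small check I run to see what the default Configuration actually picks up from the classpath (the property names are the standard Hadoop ones, nothing specific to my cluster):

import org.apache.hadoop.conf.Configuration;

// With core-site.xml on the classpath, the default constructor should
// already have loaded the filesystem URI under fs.defaultFS
// (or the deprecated fs.default.name on older releases).
Configuration conf = new Configuration();
System.out.println(conf.get("fs.defaultFS"));
System.out.println(conf.get("fs.default.name"));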

Jun 7 in Big Data Hadoop by nitinrawat895
• 9,450 points

1 answer to this question.

FileSystem needs only one configuration key to connect to HDFS. In older releases it was fs.default.name; from Hadoop 2.x (the YARN releases) onward it is fs.defaultFS, and fs.default.name is deprecated. So the following snippet is sufficient to establish the connection.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// Use "fs.defaultFS" on Hadoop 2.x and later; older versions use "fs.default.name"
conf.set("fs.defaultFS", "hdfs://host:port");

FileSystem fs = FileSystem.get(conf);

I would suggest checking core-site.xml to see which of these keys is present and setting the same value in conf. If the machine you run the code from cannot resolve the NameNode hostname, use its IP address instead. On a MapR cluster, the value will have a prefix such as maprfs://.
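To round this off, here is a minimal sketch of using the resulting FileSystem for a write followed by a read; the hdfs://host:port address and the /tmp/example.txt path are placeholders, so substitute your own values:

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://host:port");   // placeholder address
FileSystem fs = FileSystem.get(conf);

Path file = new Path("/tmp/example.txt");       // placeholder path

// Write: create() returns an FSDataOutputStream
FSDataOutputStream out = fs.create(file, true); // true = overwrite if it exists
out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
out.close();

// Read: open() returns an FSDataInputStream
FSDataInputStream in = fs.open(file);
IOUtils.copyBytes(in, System.out, 4096, false); // copy file contents to stdout
in.close();

fs.close();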

answered Jun 7 by ravikiran
• 2,280 points
