Copy all files from local Windows to HDFS with Scala code

0 votes
I have a requirement to copy files from the local machine to a Hadoop environment using Scala.

1. We have to copy all the files from the SharePoint folder to the local machine. (This part I am already able to do: copying from the share folder to the local machine.)

2. Once the files are copied to the local machine, I have to move them from the local machine (Windows) to an HDFS location with Scala code.

Can you please help me with code to copy/move all the files from the local system to an HDFS path (point 2) using Scala?
May 22, 2019 in Apache Spark by Rishab
3,750 views

1 answer to this question.

0 votes

Please try the following Scala code:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Load the Hadoop configuration (picked up from HADOOP_CONF_DIR).
val hadoopConf = new Configuration()
val hdfs = FileSystem.get(hadoopConf)

// Example paths; replace them with your own source and destination.
val srcFilePath = "C:\\data\\report.csv"
val destFilePath = "/user/rishab/report.csv"

val srcPath = new Path(srcFilePath)
val destPath = new Path(destFilePath)

// Copies the local file to HDFS; the local copy is kept.
hdfs.copyFromLocalFile(srcPath, destPath)
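
Since you need all the files in the folder (point 2), here is a minimal sketch that loops over a local directory and copies each file to HDFS. The directory paths below are assumptions, so adjust them to your environment:

import java.io.File
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
val fs = FileSystem.get(conf)

// Assumed example locations; replace with your own folders.
val localDir = new File("C:\\data\\incoming")
val hdfsDir = new Path("/user/rishab/incoming")

// Copy every regular file in the local folder to the HDFS directory.
// (listFiles returns null if the folder does not exist, hence the Option.)
Option(localDir.listFiles()).getOrElse(Array.empty[File])
  .filter(_.isFile)
  .foreach { file =>
    fs.copyFromLocalFile(new Path(file.getAbsolutePath), hdfsDir)
  }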

You should also check that the HADOOP_CONF_DIR variable is set in the conf/spark-env.sh file, which makes sure that Spark can find the Hadoop configuration settings.
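
For example, a typical entry in conf/spark-env.sh looks like this (the path is an assumption; point it at your own Hadoop configuration directory):

export HADOOP_CONF_DIR=/etc/hadoop/conf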

The dependencies for the build.sbt file:

libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
libraryDependencies += "org.apache.commons" % "commons-io" % "1.3.2"
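
Since you mention moving as well as copying, note that FileSystem also provides moveFromLocalFile, which deletes the local copy once the transfer succeeds. A minimal sketch reusing the hdfs handle from above (the paths are placeholders):

hdfs.moveFromLocalFile(new Path("C:\\data\\incoming\\report.csv"), new Path("/user/rishab/incoming"))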


Hope it helps!


Thanks!!

answered May 22, 2019 by Karan
