Error while starting the daemon process in Windows 10

op-mapreduce-examples-3.1.3.jar
STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579; compiled by 'ztang' on 2019-09-12T02:47Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
2020-04-18 18:04:35,576 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/C:/hadoop/data/datanode
2020-04-18 18:04:35,747 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop/data/datanode
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:455)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:796)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:710)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
        at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
        at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2020-04-18 18:04:35,758 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
        at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:231)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2799)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2714)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2756)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2900)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2924)
2020-04-18 18:04:35,776 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2020-04-18 18:04:35,818 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at DESKTOP-AM73UNH/192.168.1.9
************************************************************/
Apr 18, 2020 in Big Data Hadoop by Arun

1 answer to this question.

Hi @Arun,

By default, the parameter dfs.datanode.failed.volumes.tolerated is set to 0. This parameter controls the number of volumes that are allowed to fail before a DataNode stops offering service, so with the default value any single volume failure causes the DataNode to shut down.

To avoid this, set the following property in your hdfs-site.xml file:

<property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
</property>
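
After editing the file, restart HDFS and confirm the value the DataNode actually reads. A quick sketch from the command line, assuming %HADOOP_HOME%\bin and %HADOOP_HOME%\sbin are on your PATH (hdfs getconf and the stop/start-dfs.cmd scripts ship with Hadoop):

REM Restart HDFS so the new setting takes effect
stop-dfs.cmd
start-dfs.cmd

REM Print the value the DataNode will use
hdfs getconf -confKey dfs.datanode.failed.volumes.tolerated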

Hope this will work.
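
One more thing worth checking: the UnsatisfiedLinkError on NativeIO$POSIX.stat in your log usually means Hadoop cannot load its Windows native binaries. Make sure winutils.exe and hadoop.dll built for your Hadoop version (3.1.3 here) are present in %HADOOP_HOME%\bin and that this directory is on your PATH. A quick check (assuming HADOOP_HOME is already set in your environment):

REM Both files should be listed; if either is missing, the native binaries are not installed
dir %HADOOP_HOME%\bin\winutils.exe
dir %HADOOP_HOME%\bin\hadoop.dll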

answered Apr 20, 2020 by MD
