Hadoop Administration

+1 vote

I am trying to format the NameNode, as I want to do the HA setup through NFS.

Steps: I took 3 systems (vm11, vm12 and vm13). I made vm13 the NFS server and NFS is working fine; all the systems are in sync with the NFS server. All good up to this point, but when trying to do the NameNode HA setup I am unable to format the NameNode and get the error below.

I am also providing the configuration files. Both configuration files are the same on vm11 and vm12, but formatting the NameNode gives the error. Could you please help me here?

[hduser@vm11 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop

[hduser@vm11 hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
<description>The name of default file system</description>
</property>

</configuration>
[hduser@vm11 hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>

<property>
<name>dfs.replication</name>
<value>2</value>
<description>To specify replication</description>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>file://hdfs/nameha</value>
<final>true</final>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>file://hdfs/dataha</value>
<final>true</final>
</property>

<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>vm11:9000</value>
</property>

<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>vm12:9000</value>
</property>

<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>vm11:50070</value>
</property>

<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>vm12:50070</value>
</property>

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>file://mnt/filer1</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hdfs.server.namenode.ha.configuredFailoverProxyProvider</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hduser/.ssh/id_rsa</value>
</property>

</configuration>

[hduser@vm11 hadoop]$

======================NN format Error===========================

[hduser@vm12 hadoop]$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

19/08/04 00:28:42 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = vm12/192.168.0.8
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.6
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... [long classpath omitted]
STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 085099c66cf28be31604560c376fa282e69282b8; compiled by 'kshvachk' on 2018-04-18T01:33Z
STARTUP_MSG:   java = 1.7.0_221
************************************************************/
19/08/04 00:28:42 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/08/04 00:28:42 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-c2ade9bd-cf95-460c-a5a1-c2cc9d95d7f1
19/08/04 00:28:45 INFO namenode.FSNamesystem: No KeyProvider found.
19/08/04 00:28:45 INFO namenode.FSNamesystem: fsLock is fair: true
19/08/04 00:28:45 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19/08/04 00:28:46 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
19/08/04 00:28:46 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/08/04 00:28:46 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/08/04 00:28:46 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Aug 04 00:28:46
19/08/04 00:28:46 INFO util.GSet: Computing capacity for map BlocksMap
19/08/04 00:28:46 INFO util.GSet: VM type       = 64-bit
19/08/04 00:28:46 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
19/08/04 00:28:46 INFO util.GSet: capacity      = 2^21 = 2097152 entries
19/08/04 00:28:46 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/08/04 00:28:46 INFO blockmanagement.BlockManager: defaultReplication         = 2
19/08/04 00:28:46 INFO blockmanagement.BlockManager: maxReplication             = 512
19/08/04 00:28:46 INFO blockmanagement.BlockManager: minReplication             = 1
19/08/04 00:28:46 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
19/08/04 00:28:46 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/08/04 00:28:46 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
19/08/04 00:28:46 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
19/08/04 00:28:46 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
19/08/04 00:28:46 INFO namenode.FSNamesystem: supergroup          = supergroup
19/08/04 00:28:46 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/08/04 00:28:46 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
19/08/04 00:28:46 INFO namenode.FSNamesystem: HA Enabled: true
19/08/04 00:28:46 INFO namenode.FSNamesystem: Append Enabled: true
19/08/04 00:28:46 INFO util.GSet: Computing capacity for map INodeMap
19/08/04 00:28:46 INFO util.GSet: VM type       = 64-bit
19/08/04 00:28:46 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
19/08/04 00:28:46 INFO util.GSet: capacity      = 2^20 = 1048576 entries
19/08/04 00:28:46 INFO namenode.FSDirectory: ACLs enabled? false
19/08/04 00:28:46 INFO namenode.FSDirectory: XAttrs enabled? true
19/08/04 00:28:46 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
19/08/04 00:28:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/08/04 00:28:46 INFO util.GSet: Computing capacity for map cachedBlocks
19/08/04 00:28:46 INFO util.GSet: VM type       = 64-bit
19/08/04 00:28:46 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
19/08/04 00:28:46 INFO util.GSet: capacity      = 2^18 = 262144 entries
19/08/04 00:28:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/08/04 00:28:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
19/08/04 00:28:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
19/08/04 00:28:46 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/08/04 00:28:46 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/08/04 00:28:46 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/08/04 00:28:46 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/08/04 00:28:46 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/08/04 00:28:46 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/08/04 00:28:46 INFO util.GSet: VM type       = 64-bit
19/08/04 00:28:46 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
19/08/04 00:28:46 INFO util.GSet: capacity      = 2^15 = 32768 entries
19/08/04 00:28:46 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
        at java.io.File.<init>(File.java:423)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:329)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:278)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:249)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:994)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1457)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
19/08/04 00:28:46 INFO util.ExitUtil: Exiting with status 1
19/08/04 00:28:46 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vm12/192.168.0.8
************************************************************/
[hduser@vm12 hadoop]$
Aug 4, 2019 in Big Data Hadoop by saroj

2 answers to this question.

+2 votes

The error "URI has an authority component" comes from values like file://hdfs/nameha: with only two slashes, hdfs is parsed as the URI authority (host) and the path becomes /nameha, which java.io.File rejects for a local storage directory. Use either a plain absolute path or a three-slash URI such as file:///hdfs/nameha. Also note that dfs.client.failover.proxy.provider.mycluster should be org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider; your value is missing "hadoop" and the correct capitalization. Make the following changes to your configuration files and see if it works.
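For illustration, here is a minimal sketch of how java.net.URI and java.io.File, which appear in the stack trace above, treat the two forms (UriCheck is just a hypothetical demo class, not part of Hadoop):

import java.io.File;
import java.net.URI;

public class UriCheck {
    public static void main(String[] args) {
        URI bad = URI.create("file://hdfs/nameha");
        // Two slashes: "hdfs" is parsed as the authority, the path is only "/nameha".
        System.out.println(bad.getAuthority() + " " + bad.getPath());   // hdfs /nameha

        URI good = URI.create("file:///hdfs/nameha");
        // Three slashes: no authority, the path is "/hdfs/nameha".
        System.out.println(good.getAuthority() + " " + good.getPath()); // null /hdfs/nameha

        new File(good); // fine
        new File(bad);  // throws IllegalArgumentException: URI has an authority component
    }
}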

core-site.xml

<configuration>

<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
<description>The name of default file system</description>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>Give a path for temp dir</value>
   </property>
</configuration>

hdfs-site.xml

<property>
<name>dfs.namenode.name.dir</name>
<value>/hdfs/nameha</value>
<final>true</final>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>/hdfs/dataha</value>
<final>true</final>
</property>
answered Aug 5, 2019 by Simran
0 votes

Hi,

In hdfs-site.xml, put the following property, where the value is the path to which the NameNode writes its metadata:

<property>
    <name>dfs.name.dir</name>
    <value>/<hadoop installation path>/hadoop-1.2.1/name/data</value>
</property>

Then run:

  • Stop all daemons:
~$ stop-all.sh
  • Change the directory permissions to give the user full access:
~$ sudo chmod 750 /<hadoop installation path>/hadoop-1.2.1/name/data
  • Format the NameNode:
~$ hadoop namenode -format
  • Start all Hadoop daemons:
~$ start-all.sh
  • Make sure all the daemons have started:
~$ jps
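
Note that dfs.name.dir and these paths come from Hadoop 1.x; on the asker's 2.7.6 the equivalent property is dfs.namenode.name.dir, and the non-deprecated commands would be:

~$ stop-dfs.sh
~$ hdfs namenode -format
~$ start-dfs.sh
~$ jps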
answered Aug 5, 2019 by Rashi
