Hadoop NameNode won't start

Published 2019-08-03 05:44

If you are visiting this link through my previous question, hadoop2.2.0 installation on linux ( NameNode not starting ), you probably already know that I have been trying to run hadoop-2.2.0 in single-node mode for a long time now; if not, that question has the background.

Finally, after following the tutorials, I can format the NameNode fine. However, when I start the NameNode I see the following error in the logs:

2014-05-31 15:44:20,587 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

I have googled for a solution; most results ask to check and double-check core-site.xml, mapred-site.xml, and hdfs-site.xml. I have done all that, and they look absolutely fine to me. Does anyone have any clues as to what might be going wrong?

UPDATE: the files are located in /usr/local/hadoop/etc/hadoop

core-site.xml

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/yarn_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/yarn_data/hdfs/datanode</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

1 answer

狗以群分
#2 · 2019-08-03 06:30

Remove the file: prefix from the values of the dfs.namenode.name.dir and dfs.datanode.data.dir properties, then re-format the NameNode and start the daemons. Also, make sure you have proper ownership and permissions on those directories.
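A minimal command sequence for those steps, assuming Hadoop is installed at /usr/local/hadoop as in the question, and using hduser:hadoop as a placeholder owner (substitute whichever user runs the daemons):

```shell
# Fix ownership and permissions on the HDFS data directories
# (hduser:hadoop is a placeholder; use the account that runs Hadoop)
sudo chown -R hduser:hadoop /usr/local/hadoop/yarn_data/hdfs
chmod -R 755 /usr/local/hadoop/yarn_data/hdfs

# Re-format the NameNode (warning: this wipes existing HDFS metadata)
/usr/local/hadoop/bin/hdfs namenode -format

# Start the HDFS daemons, then check that NameNode/DataNode appear in jps
/usr/local/hadoop/sbin/start-dfs.sh
jps
```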

If you really want to use the file: scheme, then use file://, so that the values look like:

file:///usr/local/hadoop/yarn_data/hdfs/namenode
file:///usr/local/hadoop/yarn_data/hdfs/datanode
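Putting that together, a corrected hdfs-site.xml would look something like this (paths and replication factor taken from the question; note there is only one opening <property> tag per property):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/yarn_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/yarn_data/hdfs/datanode</value>
  </property>
</configuration>
```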

HTH
