Hadoop NullPointerException

Posted 2019-07-14 06:20

I was trying to set up a multi-node Hadoop cluster across two computers, following Michael Noll's tutorial.

When I tried to bring up HDFS with bin/start-dfs.sh, it threw a NullPointerException:

hadoop@psycho-O:~/project/hadoop-0.20.2$ bin/start-dfs.sh
starting namenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-psycho-O.out
slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-psycho-O.out
master: starting secondarynamenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-psycho-O.out
master: Exception in thread "main" java.lang.NullPointerException
master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
hadoop@psycho-O:~/project/hadoop-0.20.2$ 

I don't know what is causing this. Please help me figure out the problem. I am not an expert in this topic, so please make your answer as non-technical as possible. :)

If more information is needed, kindly tell me.

5 Answers
Emotional °昔
#2 · 2019-07-14 07:07

Your bash scripts either don't have execute permission or don't exist at all on the slave:

slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
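
A minimal sketch of how to check, assuming passwordless ssh to the slave is already set up (the path is taken from the error message above):

    # On the slave, confirm the Hadoop directory actually exists:
    ssh slave 'ls -ld /home/hadoop/project/hadoop-0.20.2/bin'

    # If it does, make sure the daemon scripts are executable:
    ssh slave 'chmod +x /home/hadoop/project/hadoop-0.20.2/bin/*.sh'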

疯言疯语
#3 · 2019-07-14 07:10

You might have set up your Hadoop home directory incorrectly; the start scripts appear to be looking for your files in directories that don't exist on the slave (see the sketch below).
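
As a quick sketch (assuming passwordless ssh, and that conf/slaves lists your slave hostnames), you can verify that every slave has Hadoop at the path the scripts expect:

    # Run from ${HADOOP_HOME} on the master:
    for host in $(cat conf/slaves); do
      ssh "$host" "test -d /home/hadoop/project/hadoop-0.20.2" \
        || echo "$host: Hadoop directory missing"
    done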

孤傲高冷的网名
#4 · 2019-07-14 07:15
master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)

It seems that your secondary namenode has trouble connecting to the primary namenode. That connection is required for the whole system to run, since the secondary namenode performs checkpointing against the primary. So I suspect something is wrong with your network configuration, including:

  • ${HADOOP_HOME}/conf/core-site.xml, which should contain something like this:

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/app/hadoop/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>
    
        <property>
            <name>fs.default.name</name>
            <value>hdfs://master:54310</value>
            <description>The name of the default file system.  A URI whose
            scheme and authority determine the FileSystem implementation.  The
            uri's scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class.  The uri's authority is used to
            determine the host, port, etc. for a filesystem.</description>
        </property>
    </configuration>
    
  • and /etc/hosts. This file is a real slippery slope: you have to be careful that each IP's alias is consistent with the hostname of the machine at that IP (a quick consistency check is sketched after this list).

        127.0.0.1   localhost
        127.0.1.1   zac
    
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
    
        192.168.1.153 master     #pay attention to these two!!!
        192.168.99.146 slave1
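
To sanity-check both pieces, something along these lines can be run on each node. These are stock commands; the hostname "master" and port 54310 are just the example values above, so substitute your own:

    # This machine's hostname should match the alias given to its IP
    # in /etc/hosts (e.g. "master" or "slave1"):
    hostname

    # The master alias should resolve and be reachable from every node:
    ping -c 3 master

    # Once the namenode is up, it should be listening on the port
    # configured in fs.default.name (54310 in the core-site.xml above):
    telnet master 54310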
    
够拽才男人
#5 · 2019-07-14 07:17

Apparently the defaults are not correct, so you have to add the properties yourself, as described in this post.

It worked for me.
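
For what it's worth, an NPE at NameNode.getAddress usually means fs.default.name never made it into the configuration of the node that launches the process. A quick way to confirm (paths and hostnames taken from this thread) is to grep the config on every node:

    grep -A1 "fs.default.name" /home/hadoop/project/hadoop-0.20.2/conf/core-site.xml
    ssh slave 'grep -A1 "fs.default.name" /home/hadoop/project/hadoop-0.20.2/conf/core-site.xml'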

Summer. ? 凉城
#6 · 2019-07-14 07:18

It seems you have not installed Hadoop on your datanode (slave) at all, or you have installed it at the wrong path. The correct path in your case should be /home/hadoop/project/hadoop-0.20.2/.
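
If that's the case, one way to fix it (assuming passwordless ssh as user hadoop; the hostname "slave" comes from the error output) is to mirror the master's install to the same absolute path on the slave:

    # Create the parent directory on the slave, then copy the install:
    ssh hadoop@slave 'mkdir -p /home/hadoop/project'
    rsync -av /home/hadoop/project/hadoop-0.20.2/ \
          hadoop@slave:/home/hadoop/project/hadoop-0.20.2/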
