I was trying to set up a multi-node Hadoop cluster across two computers, following Michael Noll's tutorial. When I tried to start HDFS, it threw a NullPointerException:
hadoop@psycho-O:~/project/hadoop-0.20.2$ bin/start-dfs.sh
starting namenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-psycho-O.out
slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-psycho-O.out
master: starting secondarynamenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-psycho-O.out
master: Exception in thread "main" java.lang.NullPointerException
master: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
master: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
master: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
master: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
hadoop@psycho-O:~/project/hadoop-0.20.2$
I don't know what is causing this. Please help me figure out the problem. I'm not an expert in this area, so please keep your answer as non-technical as possible. :)
If more information is needed, just ask.
Your bash scripts on the slave either lack execute permission or don't exist at all.
You may have set your Hadoop directory wrong, or something similar; the start scripts appear to be looking in the wrong directories for your files.
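A quick way to narrow this down is to check, on each node, whether the install directory and the daemon script actually exist and are executable. This is a minimal sketch; the path is the one from your error output, and `check_install` is just a hypothetical helper name:

```shell
# Check whether a Hadoop install directory contains an executable
# hadoop-daemon.sh -- the "No such file or directory" errors suggest
# it is missing (or not executable) on the slave.
check_install() {
  dir="$1"
  if [ ! -d "$dir/bin" ]; then
    echo "missing: $dir/bin"
  elif [ ! -x "$dir/bin/hadoop-daemon.sh" ]; then
    echo "not executable: $dir/bin/hadoop-daemon.sh"
  else
    echo "ok: $dir"
  fi
}

# Run this on the master, then log in to the slave and run it there too
# (the path below is the one from the error output):
check_install /home/hadoop/project/hadoop-0.20.2
```

If it reports "not executable", `chmod +x` on the `bin/*.sh` scripts should fix it; if it reports "missing", Hadoop is not installed at that path on that node.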
It seems that your secondary namenode is having trouble connecting to the primary namenode. That connection is required for the whole system to work, because the secondary namenode performs checkpointing for the primary. So I would guess there is something wrong with your network configuration, including:
${HADOOP_HOME}/conf/core-site.xml, which should contain something like this:
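A minimal core-site.xml for Hadoop 0.20.x looks like the following; the host name `master` and port `54310` are the values used in Michael Noll's tutorial, so substitute your own master's hostname if it differs:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

This file must be identical on every node of the cluster; if `fs.default.name` is missing or unresolvable, NameNode.getAddress returns nothing and you get exactly the NullPointerException shown above.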
and /etc/hosts. This file is a real slippery slope: you have to be careful that each IP's alias is consistent with the hostname of the machine that actually has that IP.
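For example, if the master's IP were 192.168.0.1 and the slave's 192.168.0.2 (both hypothetical; use your real addresses), every node's /etc/hosts would need entries like these, with each name matching the output of `hostname` on that machine:

```
# /etc/hosts on every node (IP addresses are placeholders)
192.168.0.1    master
192.168.0.2    slave
```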
Apparently the defaults are not correct, so you have to add them yourself, as described in this post. It worked for me.
It seems you have not installed Hadoop on your datanode (slave) at all, or you installed it in the wrong path. In your case the correct path on the slave should be /home/hadoop/project/hadoop-0.20.2/, the same as on the master.
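One way to confirm this from the master is to probe the slave over SSH. This is only a sketch: `check_node` is a hypothetical helper, and `slave` is the host name taken from your error output.

```shell
# Check whether a given node has a directory at the expected path.
# Requires passwordless SSH to the node (which the start scripts
# need anyway).
check_node() {
  node="$1"; dir="$2"
  if ssh "$node" test -d "$dir"; then
    echo "$node: $dir present"
  else
    echo "$node: $dir MISSING"
  fi
}

# Usage (not run here):
#   check_node slave /home/hadoop/project/hadoop-0.20.2
```

If the directory is missing, copying the master's install tree to the same absolute path on the slave (e.g. with scp or rsync) and re-running bin/start-dfs.sh should get past the "No such file or directory" errors.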