At first I could not get the JobTracker and TaskTrackers to run, so I replaced all IPs such as 10.112.57.243 with hdmaster in the XML files and changed mapred.job.tracker to an hdfs:// URI. Later I formatted the NameNode while Hadoop was still running, and that turned into a disaster. I found the error message shown in the title in the logs. I then tried removing everything in /tmp and the HDFS tmp directory and restarting, but it is still the same. How can I get rid of this error and get the NameNode running again? Thanks a lot.
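To make the sequence clearer, this is roughly what I did (a sketch only; it assumes the standard Hadoop 1.x scripts under $HADOOP_HOME/bin and the hadoop.tmp.dir from my config below, and the exact rm paths are approximate):

# format run while the HDFS daemons were still up (this is where things broke)
$HADOOP_HOME/bin/hadoop namenode -format

# then my attempted cleanup: stop everything, wipe the tmp dirs, restart
$HADOOP_HOME/bin/stop-all.sh
rm -rf /tmp/hadoop-*                 # leftovers in the system /tmp
rm -rf /home/ubuntu/hadoop/tmp/*     # my hadoop.tmp.dir (see core-site.xml)
$HADOOP_HOME/bin/start-all.sh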
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hdmaster:50000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/ubuntu/hadoop/tmp</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
  </property>
</configuration>
hadoop-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/ubuntu/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hdmaster:50000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://hdmaster:50001</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://hdmaster:50001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/ubuntu/hadoop/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/ubuntu/hadoop/var</value>
  </property>
</configuration>