Incompatible clusterIDs in datanode and namenode

Posted 2019-06-06 06:44

I have checked the solutions on this site.

I went to (hadoop folder)/data/dfs/datanode to change the clusterID,

but there is nothing in the datanode folder.

What can I do?

Thanks for reading.

Any help would be greatly appreciated.

PS

2017-04-11 20:24:05,507 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-knu/dfs/data/

java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-knu/dfs/data: namenode clusterID = CID-4491e2ea-b0dd-4e54-a37a-b18aaaf5383b; datanode clusterID = CID-13a3b8e1-2f8e-4dd2-bcf9-c602420c1d3d

2017-04-11 20:24:05,509 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9010. Exiting.

java.io.IOException: All specified directories are failed to load.

2017-04-11 20:24:05,509 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9010

core-site.xml

<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9010</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
   <property>
            <name>dfs.replication</name>
            <value>1</value>
   </property>
   <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/namenode</value>
    </property>
    <property>
            <name>dfs.namenode.checkpoint.dir</name>
            <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/namesecondary</value>
    </property>
    <property>
            <name>dfs.dataode.data.dir</name>
            <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/datanode</value>
    </property>
    <property>
            <name>dfs.http.address</name>
            <value>localhost:50070</value>
    </property>
    <property>
           <name>dfs.secondary.http.address</name>
            <value>localhost:50090</value>
    </property>
</configuration>

PS2

[knu@localhost ~]$ ls -l /home/knu/hadoop/hadoop-2.7.3/data/dfs/
drwxrwxr-x. 2 knu knu  6  4월 11 21:28 datanode
drwxrwxr-x. 3 knu knu 40  4월 11 22:15 namenode
drwxrwxr-x. 3 knu knu 40  4월 11 22:15 namesecondary

2 Answers
一纸荒年 Trace。 · 2019-06-06 06:51

The problem is with the property name dfs.datanode.data.dir: it is misspelt as dfs.dataode.data.dir. Because the name is not recognised, the property is ignored and the default location, ${hadoop.tmp.dir}/dfs/data, is used as the data directory instead.

hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, so the datanode's data ends up under /tmp (here /tmp/hadoop-knu/dfs/data, as the log shows). The contents of /tmp are typically cleared on every reboot, which forces the datanode to recreate this directory on startup, and the clusterID stored there can easily fall out of sync with the one the namenode keeps in its persistent metadata directory. Hence the Incompatible clusterIDs error.
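To confirm which clusterID each side is actually using, you can compare the VERSION files. The paths below assume the dfs.namenode.name.dir from your hdfs-site.xml and the /tmp data directory shown in your log; adjust them for your setup:

    # clusterID the namenode was formatted with
    grep clusterID /home/knu/hadoop/hadoop-2.7.3/data/dfs/namenode/current/VERSION

    # clusterID the datanode is holding on to (path taken from the log above)
    grep clusterID /tmp/hadoop-knu/dfs/data/current/VERSION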

Edit this property name in hdfs-site.xml before formatting the namenode and starting the services.
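As a rough outline, the fix could look like the following on a single-node setup. Note that hdfs namenode -format erases existing HDFS metadata, so only run it if you can afford to lose the data; the rm path is the stale directory from your log:

    stop-dfs.sh                       # stop HDFS before touching configuration or metadata

    # edit hdfs-site.xml: dfs.dataode.data.dir -> dfs.datanode.data.dir

    rm -rf /tmp/hadoop-knu/dfs/data   # remove the stale datanode directory left under /tmp

    hdfs namenode -format             # assigns a fresh clusterID and ERASES existing metadata

    start-dfs.sh                      # the datanode now registers with the namenode's clusterID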

兄弟一词,经得起流年. · 2019-06-06 07:02

Try formatting the namenode and then restarting HDFS.
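For example (a sketch only; formatting the namenode wipes existing HDFS metadata, so only do this if the data is expendable):

    stop-dfs.sh
    hdfs namenode -format
    start-dfs.sh
    hdfs dfsadmin -report   # verify the datanode has registered again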
