Writing to HDFS: file could only be replicated to 0 nodes

Posted 2019-01-11 03:19

I have 3 DataNodes running, and while running a job I am getting the error given below:

java.io.IOException: File /user/ashsshar/olhcache/loaderMap9b663bd9 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1325)

This error mainly comes when the DataNode instances have run out of space or when the DataNodes are not running. I tried restarting the DataNodes but I am still getting the same error.

dfsadmin -report on my cluster nodes clearly shows that a lot of space is available.
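(For reference, the report comes from the command below; what I checked is the number of live DataNodes and each node's "DFS Remaining".)

    bin/hdfs dfsadmin -report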

I am not sure why this is happening.

6 answers
祖国的老花朵 · Answer 1 · 2019-01-11 04:02

What I usually do when this happens is go to the tmp/hadoop-username/dfs/ directory and manually delete the data and name folders (assuming you are running in a Linux environment).

Then format the DFS by calling bin/hadoop namenode -format (make sure you answer with a capital Y when you are asked whether you want to format; if you are not asked, re-run the command).

You can then start Hadoop again by calling bin/start-all.sh.
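Put together, a rough sketch of these steps, assuming a single-node setup with the default hadoop.tmp.dir under /tmp and HADOOP_HOME as the current directory (note that this wipes everything currently stored in HDFS):

    rm -r /tmp/hadoop-<username>/dfs/data /tmp/hadoop-<username>/dfs/name   # delete DataNode blocks and NameNode metadata
    bin/hadoop namenode -format                                             # answer with a capital Y when prompted
    bin/start-all.sh                                                        # restart all daemons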

对你真心纯属浪费 · Answer 2 · 2019-01-11 04:13

A very simple fix for the same issue on Windows 8.1.
I was using Windows 8.1 and Hadoop 2.7.2, and did the following to overcome the issue:

  1. When I ran hdfs namenode -format, I noticed there was a lock on my NameNode directory (screenshot omitted).
  2. I deleted that folder completely and ran hdfs namenode -format again (screenshots of the folder location and the deletion omitted).
  3. After performing the above two steps, I could successfully place my required files in HDFS. I used the start-all.cmd command to start YARN and the NameNode; a rough command-line sketch follows.
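From a command prompt, the steps look roughly like this; the path below is only an example of where the NameNode/DataNode directories may end up, so check hadoop.tmp.dir and dfs.namenode.name.dir in your core-site.xml / hdfs-site.xml first:

    rem WARNING: this deletes all existing HDFS data and metadata
    rmdir /s /q C:\tmp\hadoop-%USERNAME%
    hdfs namenode -format
    start-all.cmd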
别忘想泡老子 · Answer 3 · 2019-01-11 04:15

I had this problem and I solved it as follows:

  1. Find where your DataNode and NameNode metadata/data are saved; if you cannot find them, run this command on a Mac (they are located in a folder called "tmp"; if this does not work, see the alternative after this list):

    find /usr/local/Cellar/ -name "tmp";

    The find command works like this: find <directory> -name <any string clue for that directory or file>

  2. After finding that folder, cd into it: /usr/local/Cellar//hadoop/hdfs/tmp

    then cd to dfs

    then use the ls command to see that the data and name directories are located there.

  3. Using the remove command, remove them both:

    rm -R data and rm -R name

  4. Go to the Hadoop home folder and stop everything if you have not already done so:

    sbin/stop-dfs.sh

  5. Exit from the server or localhost.

  6. Log into the server again: ssh <"server name">

  7. Start the DFS:

    sbin/start-dfs.sh

  8. Format the NameNode to be sure:

    bin/hdfs namenode -format

  9. You can now use HDFS commands to upload your data into DFS and run MapReduce jobs.
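If the find command in step 1 does not turn anything up, an alternative (assuming the Hadoop 2.x hdfs client is on your PATH) is to ask Hadoop directly for the configured directories:

    bin/hdfs getconf -confKey hadoop.tmp.dir           # base temporary directory
    bin/hdfs getconf -confKey dfs.namenode.name.dir    # NameNode metadata ("name")
    bin/hdfs getconf -confKey dfs.datanode.data.dir    # DataNode block storage ("data")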

Answer 4 · 2019-01-11 04:18
  1. Check whether your DataNode is running, using the command jps (see the example after this list).
  2. If it is not running, wait some time and retry.
  3. If it is running, I think you have to re-format your DataNode.
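For example, a quick check (and, as an assumption on my part, a manual restart of a missing DataNode using the Hadoop 2.x sbin scripts):

    jps                                     # should list NameNode, DataNode, SecondaryNameNode, ...
    sbin/hadoop-daemon.sh start datanode    # start the DataNode by hand if it is not listed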
我只想做你的唯一 · Answer 5 · 2019-01-11 04:24

1. Stop all Hadoop daemons

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done

2. Remove all files from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name

E.g.: devan@Devan-PC:~$ sudo rm -r /var/lib/hadoop-hdfs/cache/

3. Format the NameNode

sudo -u hdfs hdfs namenode -format

4. Start all Hadoop daemons

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
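After the daemons are back up, it is worth confirming that the DataNodes have re-registered before re-running the job; a quick check (assuming, as above, that the HDFS superuser is hdfs):

    sudo -u hdfs hdfs dfsadmin -report   # live DataNodes and their remaining space
    sudo jps                             # DataNode / NameNode processes on this host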


聊天终结者 · Answer 6 · 2019-01-11 04:24

I had the same issue; I was running very low on disk space, and freeing up disk space solved it.
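A quick way to confirm this is the cause (a sketch; run on each DataNode host):

    df -h                     # local disk usage on the DataNode's data directories
    hdfs dfsadmin -report     # per-DataNode "DFS Remaining" as seen by the NameNode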
