Error message while copying a file from the local filesystem to HDFS

Posted 2019-09-07 13:23

I tried to copy a file from the local filesystem to HDFS using the command hadoop dfs -copyFromLocal in/ /user/hduser/hadoop

The following error message was shown. Please help me find the problem.

DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.

15/02/02 19:22:23 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/hadoop._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
copyFromLocal: File /user/hduser/hadoop._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
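
As the DEPRECATED line at the top of the output notes, the hadoop dfs wrapper is deprecated in favor of the hdfs command. The equivalent invocation (a cosmetic change only; it will fail the same way until a DataNode is running) would be:

hdfs dfs -copyFromLocal in/ /user/hduser/hadoop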

My hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
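
As a side note (a suggested check, not part of the original post): the DataNode will not start if its configured data directory is missing or not writable by the hduser account, so it is worth verifying the local paths above:

ls -ld /home/hduser/mydata/hdfs/namenode /home/hduser/mydata/hdfs/datanode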

jps

12805 NameNode
13276 ResourceManager
13398 NodeManager
13081 SecondaryNameNode
14129 Jps

2 Answers
Emotional °昔 · 2019-09-07 13:42

We can see from your jps output that the DataNode is not running (the NameNode is there, but no DataNode), which is why the error says there are 0 datanode(s).

First, stop Hadoop from the Hadoop sbin directory:

cd /usr/local/hadoop/sbin
stop-all.sh

Then remove the contents of the NameNode and DataNode storage directories (adjust these paths if yours differ; your hdfs-site.xml puts them under /home/hduser/mydata/hdfs):

rm -rf /usr/local/hadoop_store/hdfs/namenode/*
rm -rf /usr/local/hadoop_store/hdfs/datanode/*

Format the NameNode (hdfs namenode -format is the non-deprecated form of this command):

hadoop namenode -format

Then start everything again:

start-all.sh
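
After start-all.sh completes, a DataNode entry should appear in the jps listing alongside the NameNode. As an additional check (my suggestion, not part of the original answer), the cluster report should now show at least one live datanode:

jps
hdfs dfsadmin -report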
神经病院院长 · 2019-09-07 13:43

The following steps should fix the issue. Note, however, that you may lose any data already stored in HDFS.

  • Stop Hadoop:

    • sbin/stop-dfs.sh
    • sbin/stop-yarn.sh
  • Delete the NameNode & DataNode storage directories (these are local filesystem paths, so use rm rather than hdfs dfs):

    • rm -rf /home/hduser/mydata/hdfs/namenode
    • rm -rf /home/hduser/mydata/hdfs/datanode
  • Format the NameNode:

    • hdfs namenode -format
  • Start the NameNode, DataNodes & YARN:

    • sbin/start-dfs.sh
    • sbin/start-yarn.sh
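
For background (not stated in the original answer): a common reason a DataNode refuses to start is that the NameNode was reformatted at some point, giving it a new clusterID that no longer matches the one recorded in the DataNode's data directory; the DataNode then aborts at startup with an "Incompatible clusterIDs" error. You can check for this in the DataNode log before deleting anything (the path below assumes a default $HADOOP_HOME/logs layout):

grep -i "Incompatible clusterIDs" $HADOOP_HOME/logs/hadoop-*-datanode-*.log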