“hadoop namenode -format” returns a java.net.UnknownHostException

Published 2019-01-23 20:00

I'm currently learning Hadoop and I'm trying to set up a single-node test as described in http://hadoop.apache.org/common/docs/current/single_node_setup.html

I've configured ssh (I can log in without a password).

My server is on our intranet, behind a proxy.

When I'm trying to run

bin/hadoop namenode -format

I get the following java.net.UnknownHostException:

$ bin/hadoop namenode -format
11/06/10 15:36:47 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
Re-format filesystem in /home/lindenb/tmp/HADOOP/dfs/name ? (Y or N) Y
11/06/10 15:36:50 INFO util.GSet: VM type       = 64-bit
11/06/10 15:36:50 INFO util.GSet: 2% max memory = 19.1675 MB
11/06/10 15:36:50 INFO util.GSet: capacity      = 2^21 = 2097152 entries
11/06/10 15:36:50 INFO util.GSet: recommended=2097152, actual=2097152
11/06/10 15:36:50 INFO namenode.FSNamesystem: fsOwner=lindenb
11/06/10 15:36:50 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/10 15:36:50 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/10 15:36:50 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/06/10 15:36:50 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/06/10 15:36:50 INFO namenode.NameNode: Caching file names occuring more than 10 times 
11/06/10 15:36:50 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/06/10 15:36:50 INFO common.Storage: Storage directory /home/lindenb/tmp/HADOOP/dfs/name has been successfully formatted.
11/06/10 15:36:50 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
************************************************************/

After that, I started Hadoop:

./bin/start-all.sh

but I got another exception when I tried to copy a local file:

 bin/hadoop fs  -copyFromLocal ~/file.txt  file.txt

DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lindenb/file.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)

How can I fix this problem, please?

Thanks

4 Answers
三岁会撩人
#2 · 2019-01-23 20:41

The tmp directory you created probably has ownership issues; that is why Hadoop is unable to write to it. To fix it, run the following command:

sudo chown hduser:hadoop /app/<your hadoop tmp dir>
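
A quick way to check is a sketch like the following, assuming the hadoop.tmp.dir configured in conf/core-site.xml is /app/hadoop/tmp and your Hadoop user is hduser in group hadoop (both are assumptions taken from this answer's placeholder; substitute your own values):

# Show which directory Hadoop uses as its tmp dir
grep -A 1 hadoop.tmp.dir conf/core-site.xml

# Take ownership of it recursively, then verify
sudo chown -R hduser:hadoop /app/hadoop/tmp
ls -ld /app/hadoop/tmp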
做自己的国王
#3 · 2019-01-23 20:46

The UnknownHostException is thrown when Hadoop tries to resolve the DNS name (srv-clc-04.univ-nantes.prive3) to an IP address, and the lookup fails.

Look for the domain name in the configuration files and replace it with "localhost". (Or fix your DNS so that the name resolves to an IP address.)
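
For example, in Hadoop 0.20.x the default filesystem is set through fs.default.name in conf/core-site.xml. A minimal sketch (hdfs://localhost:9000 is the value used in the single-node setup guide; adjust the port if yours differs):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>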

倾城 Initia
#4 · 2019-01-23 20:56

First, get the hostname of your machine by running the hostname command. Then add a line of the form "127.0.0.1 localhost <your hostname>" to the /etc/hosts file, as in the sketch below. That should solve the problem.
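
For example, using the hostname from the question's log (substitute your own):

$ hostname
srv-clc-04.univ-nantes.prive3

$ echo "127.0.0.1   localhost   srv-clc-04.univ-nantes.prive3" | sudo tee -a /etc/hosts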

Evening l夕情丶
#5 · 2019-01-23 20:56

Appending the line below to /etc/hosts may help:

127.0.0.1   localhost   yourhostname
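
You can then confirm that the name resolves before re-running bin/hadoop namenode -format (getent is available on most Linux systems):

getent hosts yourhostname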