Unable to start daemons using start-dfs.sh

Posted 2019-02-17 19:49

Question:

We are using the CDH 4.0.0 distribution from Cloudera. We are unable to start the daemons using the command below.

>start-dfs.sh
Starting namenodes on [localhost]
hduser@localhost's password: 
localhost: mkdir: cannot create directory `/hduser': Permission denied
localhost: chown: cannot access `/hduser/hduser': No such file or directory
localhost: starting namenode, logging to /hduser/hduser/hadoop-hduser-namenode-canberra.out
localhost: /home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0/sbin/hadoop-daemon.sh: line 150: /hduser/hduser/hadoop-hduser-namenode-canberra.out: No such file or directory
localhost: head: cannot open `/hduser/hduser/hadoop-hduser-namenode-canberra.out' for reading: No such file or directory

Answer 1:

Looks like you're using tarballs?

Try overriding the default HADOOP_LOG_DIR location in your etc/hadoop/hadoop-env.sh config file, like so:

export HADOOP_LOG_DIR=/path/to/hadoop/extract/logs/

And then retry sbin/start-dfs.sh, and it should work.
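For example, a minimal end-to-end sketch of the fix, assuming the tarball was extracted to /home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0 as in your output (adjust HADOOP_HOME to your own extract location):

HADOOP_HOME=/home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0

# create a writable log directory inside the extract
mkdir -p "$HADOOP_HOME/logs"

# point the daemons at it
echo "export HADOOP_LOG_DIR=$HADOOP_HOME/logs" >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"

# retry
"$HADOOP_HOME/sbin/start-dfs.sh"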

In packaged environments, the start/stop scripts are tuned to give each type of service its own log location via the same HADOOP_LOG_DIR env-var, so they don't hit the issue you're seeing.
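For reference only (a sketch; exact file names and contents vary between CDH releases, so check your own system), the packaged daemons typically read their log location from per-service defaults files such as /etc/default/hadoop-hdfs-namenode:

# /etc/default/hadoop-hdfs-namenode (illustrative)
export HADOOP_LOG_DIR=/var/log/hadoop-hdfs
export HADOOP_PID_DIR=/var/run/hadoop-hdfs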

If you are using packages, don't use these scripts; just do:

service hadoop-hdfs-namenode start
service hadoop-hdfs-datanode start
service hadoop-hdfs-secondarynamenode start
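
Once they're up, a quick sanity check (the log path below is the usual packaged default and may differ on your box):

sudo jps                    # should list NameNode, DataNode and SecondaryNameNode
hdfs dfsadmin -report       # confirms the NameNode answers and DataNodes have registered
ls /var/log/hadoop-hdfs/    # packaged installs normally write their logs here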