I've tried numerous ways of setting the logging level in Hadoop to WARN, but have failed each time. First, I tried configuring the log4j.properties file by simply replacing "INFO" with "WARN" everywhere. That had no effect.
Next, I tried Hadoop's own command-line utilities (in accordance with http://hadoop.apache.org/common/docs/current/commands_manual.html#daemonlog):
$ hadoop daemonlog -setlevel
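(If I read the manual correctly, the full syntax should be something along these lines, with the host, HTTP port, logger/class name, and level filled in:)
$ hadoop daemonlog -getlevel <host:httpport> <classname>
$ hadoop daemonlog -setlevel <host:httpport> <classname> <level>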
Is it possible that one actually has to alter the SOURCE CODE to make this work? Logging is usually simple to control; in most cases a slight adjustment of the logging properties does it...
The Apache Hadoop documentation is a bit misleading here. If you are debugging issues, you can change the log level on the fly using the steps below. You should pass the package name rather than the file name.
Example for the NameNode:
hadoop daemonlog -setlevel lxv-centos-01:50070 org.apache.hadoop.hdfs.server.namenode DEBUG
For the ResourceManager:
yarn daemonlog -setlevel lxv-centos-01:8088 org.apache.hadoop.yarn.server.resourcemanager DEBUG
The above setting goes away when you restart the processes. This is a temporary solution for debugging issues.
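If it helps, you can also check the current level with -getlevel before changing it; the host name and ports below are just the ones from the examples above, so substitute your own:
hadoop daemonlog -getlevel lxv-centos-01:50070 org.apache.hadoop.hdfs.server.namenode
yarn daemonlog -getlevel lxv-centos-01:8088 org.apache.hadoop.yarn.server.resourcemanager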
To change the log levels dynamically, so that a restart of the daemon is not required, use the hadoop daemonlog utility. For example, to change the log level of the DataNode logs to WARN:
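Something along these lines should work; the host name is a placeholder and 50075 is the default DataNode HTTP port on Hadoop 2.x (it differs on other versions), so adjust both for your cluster:
hadoop daemonlog -setlevel <datanode-host>:50075 org.apache.hadoop.hdfs.server.datanode WARN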
I would rather set the HADOOP_ROOT_LOGGER environment variable in hadoop-env.sh, or alternatively set the hadoop.root.logger property in log4j.properties; see the sketch below. Using the DRFA appender sends the logs to a file appender (the daily rolling file appender) rather than to the console (System.err/out).
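A minimal sketch of both options, assuming the stock Hadoop log4j.properties where DRFA is already defined as the daily rolling file appender (WARN is used here to match the question):

In hadoop-env.sh:
export HADOOP_ROOT_LOGGER="WARN,DRFA"

Or in log4j.properties:
hadoop.root.logger=WARN,DRFA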
The default log level can be adjusted by modifying the hadoop.root.logger property in your conf/log4j.properties configuration file. Note that you'll have to do that for every node in your cluster. Example line in conf/log4j.properties:
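A line like the following should do it; WARN is the level you are after, and console is the appender name from the stock Hadoop log4j.properties (swap in DRFA if you want file output instead):
hadoop.root.logger=WARN,console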