Running wordcount sample using MRv1 on CDH4.0.1 VM

Posted 2019-06-16 02:03

Question:

I downloaded the VM from https://downloads.cloudera.com/demo_vm/vmware/cloudera-demo-vm-cdh4.0.0-vmware.tar.gz

I found that the services listed below are running after the system boots.

  • MRv1 services

hadoop-0.20-mapreduce-jobtracker
hadoop-0.20-mapreduce-tasktracker

  • MRv2 services

hadoop-yarn-nodemanager
hadoop-yarn-resourcemanager
hadoop-mapreduce-historyserver

  • HDFS services

hadoop-hdfs-namenode
hadoop-hdfs-datanode

The word count example runs fine and generates the expected output:

/usr/bin/hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount input output
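
For reference, a minimal sketch of preparing the input and inspecting the output with standard hadoop fs commands (the input files here are just an illustration):

$ hadoop fs -mkdir input                       # create the input directory in HDFS
$ hadoop fs -put /etc/hadoop/conf/*.xml input  # upload some sample text files
$ hadoop fs -cat 'output/part-r-*' | head      # inspect the result after the job completes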

However, the wordcount job runs using the MRv2 (YARN) framework.

My goal is to run it using MRv1. As suggested in the Cloudera documentation, I stopped the MRv2 services and edited /etc/hadoop/conf/mapred-site.xml, changing

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

to "classic" (also tried "local")

  <property>
    <name>mapreduce.framework.name</name>
    <value>classic</value>
  </property>
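
For completeness, the MRv2 services mentioned above can be stopped with their init scripts before rerunning the job (service names taken from the list at the top; the exact command form assumes the VM's standard init system):

$ sudo service hadoop-yarn-resourcemanager stop
$ sudo service hadoop-yarn-nodemanager stop
$ sudo service hadoop-mapreduce-historyserver stop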

I expected it to run using MRv1 (jobtracker and tasktracker). However, I got the following error:

12/10/10 21:48:39 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "172.30.5.21:8021"
12/10/10 21:48:39 ERROR security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
        at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
        at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
        ......

Can someone suggest what could be wrong? Why is the error pointing to an invalid configuration?

Answer 1:

I think your cluster still points to the MRv2 configuration directory instead of the MRv1 one.

Install (or update) the hadoop-conf alternative on each node in the cluster so that it points to the MRv1 configuration directory with higher priority.

Then restart all your services.

E.g.:

$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.mrv1 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.mrv1
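
You can then confirm which configuration directory the alternative points to with --display, and restart the MRv1 daemons (service names as listed in the question):

$ sudo update-alternatives --display hadoop-conf
$ sudo service hadoop-0.20-mapreduce-jobtracker restart
$ sudo service hadoop-0.20-mapreduce-tasktracker restart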


Answer 2:

The following answer is not mine but the OP's; it was originally posted in the question itself.


I had been missing one thing that caused the above failure: make sure that in hadoop-env.sh you change "export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce" to "export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce".
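
In other words, the relevant line in hadoop-env.sh (presumably under /etc/hadoop/conf on this VM) should read:

# point MapReduce at the MRv1 (0.20) install rather than the MRv2 one
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce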

The error message was a bit misleading. Also, I had exported the variable in the shell, but I believe the setting in hadoop-env.sh overrides it (this still needs verification).