Conflicting API when trying to run MRUnit example

Posted 2019-10-20 05:50

I have been playing around with MRUnit and trying to run it on a Hadoop wordcount example, following the word count and unit testing tutorial.

Although I am not a fan of it, I have been using Eclipse to run the code, and I keep getting an error on the setMapper function.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;


import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

import org.junit.Before;
import org.junit.Test;

public class TestWordCount {
  MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable> mapReduceDriver;
  MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;
  ReduceDriver<Text, IntWritable, Text, IntWritable> reduceDriver;

  @Before
  public void setUp() throws IOException
  {
      WordCountMapper mapper = new WordCountMapper();
      mapDriver = new MapDriver<LongWritable, Text, Text, IntWritable>();
      mapDriver.setMapper(mapper);  //<--Issue here

      WordCountReducer reducer = new WordCountReducer();
      reduceDriver = new ReduceDriver<Text, IntWritable, Text, IntWritable>();
      reduceDriver.setReducer(reducer);

      mapReduceDriver = new MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable>();
      mapReduceDriver.setMapper(mapper); //<--Issue here
      mapReduceDriver.setReducer(reducer);
  }
}

Error message:

java.lang.Error: Unresolved compilation problems: 
    The method setMapper(Mapper<LongWritable,Text,Text,IntWritable>) in the type MapDriver<LongWritable,Text,Text,IntWritable> is not applicable for the arguments (WordCountMapper)
    The method setMapper(Mapper<LongWritable,Text,Text,IntWritable>) in the type MapReduceDriver<LongWritable,Text,Text,IntWritable,Text,IntWritable> is not applicable for the arguments (WordCountMapper)

Looking this problem up, I think it might be an API conflict, but I am not sure where to look for it. Has anyone had this problem before?

Edit: I am using the hadoop2 jar and the latest JUnit (4.10) jar in my user-defined library.

Edit 2: Here is the WordCountMapper code:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> 
{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();


    public void map(Object key, Text value, Context context) throws IOException, InterruptedException 
    {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) 
        {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

Final edit / IT WORKS

It turns out I needed to change

WordCountMapper mapper = new WordCountMapper();

to

Mapper mapper = new WordCountMapper();

because of a generics issue. I also had to import the Mockito library into my user-defined library.
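Why does declaring the variable with the raw Mapper supertype make the call compile? A minimal, self-contained sketch shows the mechanism; the classes below are hypothetical stand-ins for the Hadoop/MRUnit types, not the real API. A raw type opts out of generic checking, so the call is accepted with only an "unchecked" warning:

```java
// Hypothetical stand-ins for Mapper and MapDriver, only to show the generics behavior.
class FakeMapper<KEYIN> { }

class FakeDriver<KEYIN> {
    private FakeMapper<KEYIN> mapper;
    void setMapper(FakeMapper<KEYIN> m) { this.mapper = m; }
    FakeMapper<KEYIN> getMapper() { return mapper; }
}

// Mirrors "WordCountMapper extends Mapper<Object, ...>"
class ObjectKeyMapper extends FakeMapper<Object> { }

public class RawTypeDemo {
    public static void main(String[] args) {
        FakeDriver<Long> driver = new FakeDriver<>();

        // driver.setMapper(new ObjectKeyMapper());
        // ^ rejected by the compiler: setMapper(FakeMapper<Long>) is not
        //   applicable for the argument (ObjectKeyMapper) -- the same shape
        //   as the MRUnit error above.

        // Declaring the variable with the raw supertype disables the generic
        // check, so this compiles, with an "unchecked" warning:
        @SuppressWarnings({"rawtypes", "unchecked"})
        FakeMapper raw = new ObjectKeyMapper();
        driver.setMapper(raw);

        System.out.println(driver.getMapper().getClass().getSimpleName());
    }
}
```

Note that this only silences the compiler; the cleaner fix is the one in Answer 1, making the mapper's key type match the driver's.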

Answer 1:

Here is your problem:

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable>
....
MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

The input key type of WordCountMapper (Object) is incompatible with the input key type of MapDriver (LongWritable). Change your Mapper definition to

class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>

You will probably want to change your map method's parameter from Object key to LongWritable key as well.
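The compile error is ordinary Java generic invariance: a Mapper<Object, ...> is not a Mapper<LongWritable, ...>, even though Object is a supertype of LongWritable. A minimal sketch, using hypothetical stand-in classes rather than the real Hadoop types:

```java
// Hypothetical stand-ins for the Hadoop/MRUnit types.
class DemoMapper<KEYIN> { }

class DemoMapDriver<KEYIN> {
    private DemoMapper<KEYIN> mapper;
    void setMapper(DemoMapper<KEYIN> m) { this.mapper = m; }
    DemoMapper<KEYIN> getMapper() { return mapper; }
}

// Like the original: WordCountMapper extends Mapper<Object, ...>
class BrokenMapper extends DemoMapper<Object> { }
// Like the fix: WordCountMapper extends Mapper<LongWritable, ...>
class FixedMapper extends DemoMapper<Long> { }

public class InvarianceDemo {
    public static void main(String[] args) {
        DemoMapDriver<Long> driver = new DemoMapDriver<>();

        // driver.setMapper(new BrokenMapper());
        // ^ rejected: setMapper(DemoMapper<Long>) is not applicable for the
        //   argument (BrokenMapper), because generic types are invariant.

        driver.setMapper(new FixedMapper()); // key types match: compiles
        System.out.println("mapper set: "
                + driver.getMapper().getClass().getSimpleName());
    }
}
```

After the same change to WordCountMapper, the setMapper calls in the test accept it without complaint.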



Answer 2:

Make sure you have imported the correct classes. I faced the same error: unlike the program above, my reducer class and reduce test had the correct parameters, but because I had imported the wrong class I got the same error message reported above.

Wrongly imported class -

import org.apache.hadoop.mrunit.ReduceDriver;

Correct class -

import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;

The same solution applies to the mapper test, if you are sure that the parameters in your Mapper class and mapper test are the same.



Source: Conflicting API when trying to run MRUnit example