How to write Avro output in Hadoop MapReduce?


Question:

I wrote a Hadoop word-count program that takes TextInputFormat input and is supposed to write the word counts in Avro format.

The MapReduce job runs fine, but its output is readable with Unix commands such as more or vi. I expected the output to be unreadable, since Avro output is in a binary format.

I use only a mapper; there is no reducer. I just want to experiment with Avro, so I am not worried about memory or overflow issues. The mapper code follows:

public class WordCountMapper extends Mapper<LongWritable, Text, AvroKey<String>, AvroValue<Integer>> {

    // In-mapper aggregation: counts are accumulated here and only emitted in cleanup().
    private Map<String, Integer> wordCountMap = new HashMap<String, Integer>();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Tokenize the line on whitespace and a few punctuation characters.
        String[] keys = value.toString().split("[\\s-*,\":]");
        for (String currentKey : keys) {
            int currentCount = 1;
            String currentToken = currentKey.trim().toLowerCase();
            if (wordCountMap.containsKey(currentToken)) {
                currentCount = wordCountMap.get(currentToken);
                currentCount++;
            }
            wordCountMap.put(currentToken, currentCount);
        }
        System.out.println("DEBUG : total number of unique words = " + wordCountMap.size());
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Emit the aggregated counts as AvroKey/AvroValue pairs at the end of the map task.
        for (Map.Entry<String, Integer> currentKeyValue : wordCountMap.entrySet()) {
            AvroKey<String> currentKey = new AvroKey<String>(currentKeyValue.getKey());
            AvroValue<Integer> currentValue = new AvroValue<Integer>(currentKeyValue.getValue());
            context.write(currentKey, currentValue);
        }
    }
}

and the driver code is as follows:

public int run(String[] args) throws Exception {

    Job avroJob = new Job(getConf());
    avroJob.setJarByClass(AvroWordCount.class);
    avroJob.setJobName("Avro word count");

    avroJob.setInputFormatClass(TextInputFormat.class);
    avroJob.setMapperClass(WordCountMapper.class);

    AvroJob.setInputKeySchema(avroJob, Schema.create(Type.INT));
    AvroJob.setInputValueSchema(avroJob, Schema.create(Type.STRING));

    AvroJob.setMapOutputKeySchema(avroJob, Schema.create(Type.STRING));
    AvroJob.setMapOutputValueSchema(avroJob, Schema.create(Type.INT));

    AvroJob.setOutputKeySchema(avroJob, Schema.create(Type.STRING));
    AvroJob.setOutputValueSchema(avroJob, Schema.create(Type.INT));


    FileInputFormat.addInputPath(avroJob, new Path(args[0]));
    FileOutputFormat.setOutputPath(avroJob, new Path(args[1]));

    return avroJob.waitForCompletion(true) ? 0 : 1;
}

I would like to know what Avro output should look like and what I am doing wrong in this program.

Answer 1:

The latest release of the Avro library includes an updated version of the ColorCount example, adapted for MRv2. I suggest you look at it and use the same pattern as its Reduce class, or just extend AvroMapper. Please also note that using the Pair class instead of AvroKey+AvroValue is essential for running Avro on Hadoop.
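
For reference, below is a minimal sketch of a driver wired the way the MRv2 ColorCount example does it (that example keeps AvroKey/AvroValue). Treat it as an illustration, not a verified fix: the Tool/Configured wrapper, Job.getInstance, and the map-only setting are my additions, not part of the question. The key point is the output format: if no output format class is set, Hadoop falls back to TextOutputFormat, which just calls toString() on the keys and values, and that is why the original job produces readable plain text.

import org.apache.avro.Schema;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.mapreduce.AvroKeyValueOutputFormat;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class AvroWordCount extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Job avroJob = Job.getInstance(getConf(), "Avro word count");
        avroJob.setJarByClass(AvroWordCount.class);

        avroJob.setInputFormatClass(TextInputFormat.class);
        avroJob.setMapperClass(WordCountMapper.class);
        avroJob.setNumReduceTasks(0); // map-only job, as in the question

        // Schemas for the records the job writes out.
        AvroJob.setOutputKeySchema(avroJob, Schema.create(Schema.Type.STRING));
        AvroJob.setOutputValueSchema(avroJob, Schema.create(Schema.Type.INT));

        // Without this line the default TextOutputFormat is used and the result stays plain text.
        avroJob.setOutputFormatClass(AvroKeyValueOutputFormat.class);

        FileInputFormat.addInputPath(avroJob, new Path(args[0]));
        FileOutputFormat.setOutputPath(avroJob, new Path(args[1]));

        return avroJob.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new AvroWordCount(), args));
    }
}

Once the job writes through an Avro output format, the part files are Avro container files (binary, with the schema embedded in the header); instead of opening them with more or vi, you can dump them to JSON with the tojson command of the avro-tools jar.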