I would like to write multiple output files. How do I do this using Job instead of JobConf?
Answer 1:
An easy way is to create key-based output file names.

Input data (key, then value):

cupertino    apple
sunnyvale    banana
cupertino    pear
MultipleTextOutputFormat class:

static class KeyBasedMultipleTextOutputFormat extends MultipleTextOutputFormat<Text, Text> {
    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        return key.toString() + "/" + name;
    }
}
Job config (note that MultipleTextOutputFormat belongs to the old org.apache.hadoop.mapred API, so it is configured through JobConf rather than the new Job class):

job.setOutputFormat(KeyBasedMultipleTextOutputFormat.class);
Run this code and you’ll see the following files in HDFS, where /output is the job output directory:
$ hadoop fs -ls /output
/output/cupertino/part-00000
/output/sunnyvale/part-00000
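The directory-per-key layout comes straight from the overridden method, which prefixes the framework-supplied leaf name with the key. As a standalone illustration of just that string logic (no Hadoop dependencies; the class name `PathDemo` is mine, not part of the original answer):

```java
// Standalone sketch of the path logic in generateFileNameForKeyValue
// above: the key becomes a subdirectory, and the framework-supplied
// leaf name (e.g. "part-00000") is kept as the file name inside it.
public class PathDemo {
    static String generateFileNameForKeyValue(String key, String name) {
        return key + "/" + name;
    }

    public static void main(String[] args) {
        System.out.println(generateFileNameForKeyValue("cupertino", "part-00000"));
        System.out.println(generateFileNameForKeyValue("sunnyvale", "part-00000"));
    }
}
```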
Hope it helps.
Answer 2:
The docs say to use org.apache.hadoop.mapreduce.lib.output.MultipleOutputs instead.
Below is a snippet of code that uses MultipleOutputs. Unfortunately, I didn't write it and haven't spent much time with it, so I can't explain every design choice. I'm sharing it in the hope that it helps. :)
Job Setup
job.setJobName("Job Name");
job.setJarByClass(ETLManager.class);
job.setMapOutputKeyClass(Text.class);
job.setOutputKeyClass(NullWritable.class);
job.setMapOutputValueClass(MyThing.class);
job.setMapperClass(MyThingMapper.class);
job.setReducerClass(MyThingReducer.class);
MultipleOutputs.addNamedOutput(job, Constants.MyThing_NAMED_OUTPUT, TextOutputFormat.class, NullWritable.class, Text.class);
job.setInputFormatClass(MyInputFormat.class);
FileInputFormat.addInputPath(job, new Path(conf.get("input")));
FileOutputFormat.setOutputPath(job, new Path(String.format("%s/%s", conf.get("output"), Constants.MyThing_NAMED_OUTPUT)));
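One caveat worth adding (my own note, not from the original author): when every record is written through a named output, the job's default output can still produce empty part files. Hadoop's LazyOutputFormat avoids that by only creating the default output file on first write; a one-line addition to the job setup above:

```java
// Optional: create the default output file lazily, so reducers that
// only write via MultipleOutputs don't leave empty part-r-xxxxx files.
LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
```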
Reducer Setup
public class MyThingReducer extends
        Reducer<Text, MyThing, NullWritable, NullWritable> {

    private MultipleOutputs m_multipleOutputs;

    @Override
    public void setup(Context context) {
        m_multipleOutputs = new MultipleOutputs(context);
    }

    @Override
    public void cleanup(Context context) throws IOException,
            InterruptedException {
        if (m_multipleOutputs != null) {
            m_multipleOutputs.close();
        }
    }

    @Override
    public void reduce(Text key, Iterable<MyThing> values, Context context)
            throws IOException, InterruptedException {
        for (MyThing myThing : values) {
            m_multipleOutputs.write(Constants.MyThing_NAMED_OUTPUT, EMPTY_KEY,
                    generateData(context, myThing),
                    generateFileName(context, myThing));
            context.progress();
        }
    }
}
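The generateFileName helper above isn't shown in the snippet. One plausible implementation (hypothetical, mirroring answer 1's directory-per-key idea) builds a base output path from the record's key; MultipleOutputs then appends the task suffix (e.g. "-r-00000") to whatever base path it is given, so each key ends up with its own subdirectory:

```java
// Hypothetical sketch of a generateFileName-style helper: derive a
// base output path from the key. MultipleOutputs appends the task
// suffix (e.g. "-r-00000") to this base, so a reducer writing key
// "cupertino" produces files like "cupertino/part-r-00000".
public class FileNameDemo {
    static String generateFileName(String key) {
        return key + "/part";
    }

    public static void main(String[] args) {
        System.out.println(generateFileName("cupertino"));
    }
}
```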
EDIT: Added link to MultipleOutputs.