Why is the number of combiner input records more than the number of map output records?

Posted 2019-05-05 14:23

A Combiner runs after the Mapper and before the Reducer; it receives as input all data emitted by the Mapper instances on a given node, and it emits its output to the Reducers. So the number of combiner input records should be no more than the number of map output records. Yet in the counters below, Combine input records (26,154,506) exceeds Map output records (25,312,392):

12/08/29 13:38:49 INFO mapred.JobClient:   Map-Reduce Framework

12/08/29 13:38:49 INFO mapred.JobClient:     Reduce input groups=8649

12/08/29 13:38:49 INFO mapred.JobClient:     Map output materialized bytes=306210

12/08/29 13:38:49 INFO mapred.JobClient:     Combine output records=859412

12/08/29 13:38:49 INFO mapred.JobClient:     Map input records=457272

12/08/29 13:38:49 INFO mapred.JobClient:     Reduce shuffle bytes=0

12/08/29 13:38:49 INFO mapred.JobClient:     Reduce output records=8649

12/08/29 13:38:49 INFO mapred.JobClient:     Spilled Records=1632334

12/08/29 13:38:49 INFO mapred.JobClient:     Map output bytes=331837344

12/08/29 13:38:49 INFO mapred.JobClient:     **Combine input records=26154506**

12/08/29 13:38:49 INFO mapred.JobClient:     **Map output records=25312392**

12/08/29 13:38:49 INFO mapred.JobClient:     SPLIT_RAW_BYTES=218

12/08/29 13:38:49 INFO mapred.JobClient:     Reduce input records=17298

1 Answer
叼着烟拽天下
Answered 2019-05-05 15:11

I think it's because the Combiner can also run on the output of previous combine steps: Hadoop may invoke the combiner once per spill and again while spill files are merged, so records that your Combiner produced in an earlier pass are counted as combiner input again when they are re-combined with other records coming out of your Mappers. It may also be that Map output records is calculated after the Combiner runs, meaning there are fewer map output records because some have already been combined.
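The multi-pass effect can be illustrated with a small simulation (a sketch, not Hadoop code; the spill sizes and keys are made up). A word-count-style combiner runs once per spill and once more over the merged spills, so its own earlier output is counted again as combiner input, pushing the combine-input counter above the map-output counter:

```python
from collections import Counter

def combine(records, counters):
    """Sum values per key, like a word-count combiner, and track counters."""
    counters["combine_input"] += len(records)
    summed = Counter()
    for key, value in records:
        summed[key] += value
    out = list(summed.items())
    counters["combine_output"] += len(out)
    return out

counters = Counter()
# Simulated map output: 3 spills of (word, 1) pairs with repeated keys.
spills = [[("a", 1), ("b", 1), ("a", 1), ("a", 1)] for _ in range(3)]
counters["map_output"] = sum(len(s) for s in spills)      # 12 records

# Pass 1: the combiner runs on each spill before it is written to disk.
combined_spills = [combine(s, counters) for s in spills]  # input 12, output 6

# Pass 2: the spills are merged into one map output file, and the
# combiner runs again -- its pass-1 output is re-counted as input.
merged = [rec for spill in combined_spills for rec in spill]
final = combine(merged, counters)                         # input 6, output 2

print(counters["map_output"])     # 12
print(counters["combine_input"])  # 18 -- exceeds map_output, as in the question
```

Here combine input (18) is larger than map output (12) even though each individual combiner pass shrinks the data, which matches the counters in the question.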
