Can anyone explain to me how secondary sorting works in Hadoop?
Why must one use a GroupingComparator, and how does it work in Hadoop?
I was going through the link given below and got a doubt about how the grouping comparator works.
Can anyone explain how the grouping comparator works?
http://www.bigdataspeak.com/2013/02/hadoop-how-to-do-secondary-sort-on_25.html
A partitioner just ensures that one reducer receives all the records belonging to a key, but it doesn't change the fact that the reducer groups by key within the partition.
In the case of a secondary sort we form composite keys, and if we let the default behavior continue, the grouping logic will consider the keys to be different.
So we need to control the grouping: we have to tell the framework to group on the natural part of the key rather than on the whole composite key. That is exactly what the grouping comparator is for.
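As a minimal sketch, a grouping comparator along those lines could look like the following. Note that CompositeKey and its getNaturalKey() getter are hypothetical names, standing in for a WritableComparable that carries both the natural and the secondary part of the key:

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Groups records by the natural part of the key only, so all composite
// keys that share a natural key end up in a single reduce() call.
public class NaturalKeyGroupingComparator extends WritableComparator {

    protected NaturalKeyGroupingComparator() {
        // "true" asks the framework to instantiate key objects for compare()
        super(CompositeKey.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        CompositeKey k1 = (CompositeKey) a;
        CompositeKey k2 = (CompositeKey) b;
        // Compare only the natural key; the secondary part is ignored.
        return k1.getNaturalKey().compareTo(k2.getNaturalKey());
    }
}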
The examples mentioned above have good explanations; let me simplify it. We need to perform three major steps: build a composite key, partition on the natural key, and group on the natural key.
I find it easy to understand certain concepts with the help of diagrams, and this is certainly one of them.
Let's assume that our secondary sorting is on a composite key made out of Last Name and First Name.
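As a rough sketch, such a composite key could look like the following (PersonKey and its field names are my own placeholders, not from the linked article):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical composite key: lastName is the natural key,
// firstName is the secondary-sort field.
public class PersonKey implements WritableComparable<PersonKey> {
    private String lastName;
    private String firstName;

    public PersonKey() {}   // no-arg constructor required by Hadoop

    public PersonKey(String lastName, String firstName) {
        this.lastName = lastName;
        this.firstName = firstName;
    }

    public String getLastName() { return lastName; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(lastName);
        out.writeUTF(firstName);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        lastName = in.readUTF();
        firstName = in.readUTF();
    }

    @Override
    public int compareTo(PersonKey other) {
        // Full sort order: last name first, then first name.
        int cmp = lastName.compareTo(other.lastName);
        return cmp != 0 ? cmp : firstName.compareTo(other.firstName);
    }
}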
With the composite key out of the way, let's now look at the secondary sorting mechanism.
The partitioner and the group comparator use only the natural key; the partitioner uses it to channel all records with the same natural key to a single reducer. This partitioning happens in the Map phase: data from the various map tasks is received by the reducers, where it is grouped and then sent on to the reduce method. This grouping is where the group comparator comes into the picture. If we had not specified a custom group comparator, Hadoop would have used its default implementation, which considers the entire composite key and would have led to incorrect results.
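A minimal partitioner along those lines might look like this (reusing the hypothetical PersonKey above; the Text value type is also an assumption):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes records by the natural key only, so every composite key with the
// same last name reaches the same reducer regardless of first name.
public class NaturalKeyPartitioner extends Partitioner<PersonKey, Text> {

    @Override
    public int getPartition(PersonKey key, Text value, int numPartitions) {
        // Mask the sign bit so the partition index is never negative.
        return (key.getLastName().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}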
[Diagram: Overview of MR steps]
Grouping Comparator
Once the data reaches a reducer, all data is grouped by key. Since we have a composite key, we need to make sure records are grouped solely by the natural key. This is accomplished by writing a custom grouping comparator: a Comparator object that considers only the yearMonth field of the TemperaturePair class for the purposes of grouping the records together.
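Such a comparator could be sketched as follows. I am assuming TemperaturePair exposes its yearMonth field through a getYearMonth() getter returning a comparable type; the exact accessor is not shown in the excerpt above:

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Compares TemperaturePair keys by yearMonth only, so all readings for a
// given month are grouped into one reduce() call, whatever their temperature.
public class YearMonthGroupingComparator extends WritableComparator {

    public YearMonthGroupingComparator() {
        super(TemperaturePair.class, true);
    }

    @Override
    public int compare(WritableComparable w1, WritableComparable w2) {
        TemperaturePair p1 = (TemperaturePair) w1;
        TemperaturePair p2 = (TemperaturePair) w2;
        return p1.getYearMonth().compareTo(p2.getYearMonth());
    }
}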
Here are the results of running our secondary sort job:
190101 -206
190102 -333
190103 -272
190104 -61
190105 -33
190106 44
190107 72
190108 44
190109 17
190110 -33
190111 -217
190112 -300
While sorting data by value may not be a common need, it's a nice tool to have in your back pocket when needed. We have also been able to take a deeper look at the inner workings of Hadoop by working with custom partitioners and grouping comparators. Also refer to this question: What is the use of grouping comparator in hadoop map reduce
Here is an example of grouping. Consider a composite key (a, b) and its value v. And let's assume that after sorting you end up, among others, with the following group of (key, value) pairs:

(a1, b11) -> v1
(a1, b12) -> v2
(a1, b13) -> v3

With the default group comparator the framework will call the reduce function 3 times with the respective (key, value) pairs, since all keys are different. However, if you provide your own custom group comparator, and define it so that it depends only on a, ignoring b, then the framework concludes that all keys in this group are equal and calls the reduce function only once, using the following key and list of values:

(a1, b11) -> <v1, v2, v3>

Note that only the first composite key is used, and that b12 and b13 are "lost", i.e., not passed to the reducer.
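In code, such a group comparator might be sketched like this (AbKey and its getA() getter are placeholders for a WritableComparable holding the (a, b) pair):

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Treats two composite keys as equal for grouping whenever their a-parts
// match, so (a1, b11), (a1, b12) and (a1, b13) land in one reduce() call.
public class AOnlyGroupingComparator extends WritableComparator {

    protected AOnlyGroupingComparator() {
        super(AbKey.class, true);
    }

    @Override
    public int compare(WritableComparable k1, WritableComparable k2) {
        // Only a is compared; b is deliberately ignored.
        return ((AbKey) k1).getA().compareTo(((AbKey) k2).getA());
    }
}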
In the well-known example from the "Hadoop" book computing the max temperature by year, a is the year, and the b's are temperatures sorted in descending order; thus b11 is the desired max temperature and you don't care about the other b's. The reduce function just writes the received (a1, b11) as the solution for that year.

In your example from "bigdataspeak.com" all the b's are required in the reducer, but they are available as parts of the respective values (objects) v.

In this way, by including your value or part of it in the key, you can use Hadoop to sort not only your keys, but also your values.
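As a usage note, here is roughly where these pieces plug into the Job API (mapper, reducer and I/O paths are elided; AbKey and the grouping comparator are the placeholders from the sketch above, and AOnlyPartitioner is an analogous hypothetical partitioner on a):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class SecondarySortDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "secondary sort");
        job.setJarByClass(SecondarySortDriver.class);

        job.setMapOutputKeyClass(AbKey.class);   // composite key (a, b)
        job.setMapOutputValueClass(Text.class);  // value v, assumed Text here

        job.setPartitionerClass(AOnlyPartitioner.class);               // route by a
        job.setGroupingComparatorClass(AOnlyGroupingComparator.class); // group by a
        // A sort comparator could order b descending so b11 arrives first:
        // job.setSortComparatorClass(DescendingBSortComparator.class);

        // ... set the mapper, reducer, and input/output paths here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}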
Hope this helps.