Change to HashMap hash function in Java 8

Published 2019-03-13 07:08

In Java 8's java.util.HashMap I noticed a change from:

static int hash(int h) {
    // Java 7: several shifts and XORs spread entropy across the low-order bits
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

to:

static final int hash(Object key) {
    // Java 8: XOR the top 16 bits of hashCode() into the bottom 16
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

From the code, the new function appears to be a simple XOR of the upper 16 bits into the lower 16, leaving the upper 16 bits unchanged, as opposed to the several different shifts in the previous implementation. From the comments, it is less effective at distributing the results of hash functions with many collisions in the lower bits across different buckets, but it saves CPU cycles by performing fewer operations.
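To make the bit mixing concrete, here is a small standalone demo (hypothetical code, not from the JDK, apart from the two hash formulas quoted above and the standard (n - 1) & hash bucket-index computation HashMap uses with its power-of-two table sizes). It takes two hash codes that differ only in their upper 16 bits: the raw codes land in the same bucket, while both spreading functions separate them:

public class HashSpreadDemo {

    // Java 7 supplemental hash, as quoted above
    static int hash7(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Java 8 spread: XOR the top 16 bits into the bottom 16
    static int hash8(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table size (a power of two), so index = (n - 1) & hash
        int a = 0x0001_0005; // differs from b only in the upper 16 bits
        int b = 0x0002_0005;
        System.out.println("raw:    " + ((n - 1) & a) + " vs " + ((n - 1) & b));               // 5 vs 5 (collision)
        System.out.println("java 8: " + ((n - 1) & hash8(a)) + " vs " + ((n - 1) & hash8(b))); // 4 vs 7
        System.out.println("java 7: " + ((n - 1) & hash7(a)) + " vs " + ((n - 1) & hash7(b))); // 4 vs 7
    }
}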

The only related change I saw in the release notes was the switch from linked lists to balanced trees for storing colliding keys (which I thought might have changed how much time it makes sense to spend computing a good hash). I am specifically interested in whether any performance impact is expected from this change on large hash maps. Is there any information about this change? Does anyone with better knowledge of hash functions have an idea of its implications (if any; perhaps I have just misunderstood the code), and whether hash codes need to be generated differently to maintain performance when moving to Java 8?

3 Answers
Luminary・发光体
#2 · 2019-03-13 07:52

As you noted, there is a significant performance improvement in HashMap in Java 8, as described in JEP-180. Basically, if a hash chain goes over a certain size, the HashMap will (where possible) replace it with a balanced binary tree. This makes the "worst case" behaviour of various operations O(log N) instead of O(N).
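To illustrate the worst case JEP-180 targets, here is a hypothetical key type (not from the JDK) whose hashCode sends every instance to the same bucket. On Java 7, lookups in such a map degrade to a linear scan of the bucket's list; on Java 8, the bucket is converted to a balanced tree, and implementing Comparable lets the tree order the colliding keys:

import java.util.HashMap;
import java.util.Map;

// Hypothetical worst-case key: every instance has the same hash code.
final class BadKey implements Comparable<BadKey> {
    final int id;
    BadKey(int id) { this.id = id; }

    @Override public int hashCode() { return 42; } // all keys collide
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
    @Override public int compareTo(BadKey other) {
        return Integer.compare(id, other.id); // lets a treeified bin order the keys
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            map.put(new BadKey(i), i);
        }
        // Java 7: each get() walks a 100,000-entry linked list, O(n).
        // Java 8: the bucket is a balanced tree, so get() is O(log n).
        System.out.println(map.get(new BadKey(99_999))); // prints 99999
    }
}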

This doesn't directly explain the change to hash. However, I would hypothesize that the optimization in JEP-180 means that the performance hit from a poorly distributed hash function matters less, so the cost-benefit analysis for the hash method changes; i.e. the more complex version is less beneficial on average. (Bear in mind that when the key type's hashCode method generates high-quality codes, the gymnastics in the complex version of the hash method are a waste of time.)

But this is only a theory. The real rationale for the hash change is most likely Oracle confidential.

Melony?
#3 · 2019-03-13 08:04

When I benchmarked the two hash implementations, I saw the following timing difference in nanoseconds (not huge, but it can have some effect when the size is very large, ~1 million+ entries):

7473 ns – Java 7
3981 ns – Java 8

If we are talking about well-formed keys and a HashMap of large size (~1 million entries), this might have some impact, and it comes from the simplified logic.
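The answer does not say how these numbers were measured; for reproducibility, here is a rough sketch of one way to compare the two spreading functions in isolation (a naive System.nanoTime loop; a harness such as JMH would give far more trustworthy figures, since JIT warm-up and dead-code elimination easily distort loops like this):

import java.util.Random;

// Naive micro-benchmark sketch; results are indicative only.
public class HashBench {
    static int hash7(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    static int hash8(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int[] codes = new Random(42).ints(1_000_000).toArray();
        int sink = 0;

        long t0 = System.nanoTime();
        for (int c : codes) sink += hash7(c);
        long t1 = System.nanoTime();
        for (int c : codes) sink += hash8(c);
        long t2 = System.nanoTime();

        System.out.println("java 7 style: " + (t1 - t0) + " ns");
        System.out.println("java 8 style: " + (t2 - t1) + " ns");
        System.out.println(sink); // keeps the JIT from eliminating the loops
    }
}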

在下西门庆
#4 · 2019-03-13 08:04

As the Java documentation says, the idea is to handle the situation where the old linked-list implementation performs in O(n) instead of O(1). This happens when the same hash code is generated for a large set of entries being inserted into the HashMap.

This is not the normal scenario, though. To handle it, once the number of items in a hash bucket grows beyond a certain threshold, that bucket switches from a linked list of entries to a balanced tree. In the case of heavy hash collisions, this improves search performance from O(n) to O(log n), which is much better and solves the performance problem.
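The thresholds the answer refers to appear as constants in the OpenJDK 8 java.util.HashMap source:

// From the OpenJDK 8 java.util.HashMap source:
static final int TREEIFY_THRESHOLD = 8;     // a bin is treeified once it holds this many entries
static final int UNTREEIFY_THRESHOLD = 6;   // a tree bin converts back to a list below this size
static final int MIN_TREEIFY_CAPACITY = 64; // tables smaller than this resize instead of treeifying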
