I wrote a simple benchmark in order to find out whether the bounds check can be eliminated when the array index gets computed via a bitwise AND. This is basically what nearly all hash tables do: they compute

h & (table.length - 1)

as an index into the table, where h is the hashCode or a derived value. The results show that the bounds check doesn't get eliminated.
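For illustration, a lookup in a power-of-two-sized table might look roughly like this (a sketch with made-up names, not taken from any particular implementation):

static Object lookup(Object[] table, Object key) {
    // table.length is assumed to be a power of two, so (table.length - 1)
    // is a mask that keeps only the low bits of the hash.
    int h = key.hashCode();
    int index = h & (table.length - 1);  // always in range if table.length > 0
    return table[index];                 // yet, per the results below, the JIT still emits a bounds check here
}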
The idea of my benchmark is pretty simple: compute two values i and j, both of which are guaranteed to be valid array indexes.

i is the loop counter. When it gets used as the array index, the bounds check gets eliminated.

j gets computed as x & (table.length - 1), where x is some value changing on each iteration. When it gets used as the array index, the bounds check does not get eliminated.
The relevant part is as follows:
for (int i=0; i<=table.length-1; ++i) {
    x += result;                            // x changes on every iteration
    final int j = x & (table.length-1);     // the "masked" index, in bounds for any non-empty table
    result ^= i + table[j];                 // the array access uses the masked index j
}
The other experiment uses

result ^= table[i] + j;

instead. The difference in timing is maybe 15% (pretty consistently across different variants I've tried). My questions:
- Are there other possible reasons for this besides bound check elimination?
- Is there some complicated reason I can't see why there's no bound check elimination for j?
A summary of the answers
Marko Topolnik's answer shows that it's all more complicated and that eliminating the bounds check is not guaranteed to be a win; in particular, on his computer the "normal" code is slower than the "masked" code. I guess that's because the elimination allows some additional optimization which turns out to be detrimental in this case (given the complexity of current CPUs, the compiler can hardly ever know for sure).
leventov's answer shows clearly that the array bounds check does get performed in the "masked" case and that its elimination makes the code as fast as the "normal" one.
Donal Fellows points out that the masking doesn't work for a zero-length table, as x & (0 - 1) equals x. So the best the compiler can do is replace the bounds check by a zero-length check. But this is IMHO still worth it, as the zero-length check can easily be moved out of the loop.
Proposed optimization
Because a[x & (a.length - 1)] throws if and only if a.length == 0, the compiler can do the following:
- For each array access, check if the index has been computed via a bitwise and.
- If so, check if either of the operands was computed as length minus one.
- If so, replace the bounds check by a zero-length check.
- Let the existing optimizations take care of it.
Such an optimization should be pretty simple and cheap as it only looks at the parent nodes in the SSA graph. Unlike many complex optimizations, it can never be detrimental, as it only replaces one check by a slightly simpler one; so there's no problem, not even if it can't be moved out of the loop.
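To illustrate the intended effect on a masked access (just a conceptual sketch of the transformation, not actual HotSpot output):

// Before (conceptually): every masked access carries the full bounds check.
final int j = x & (table.length - 1);
// implicit: if (j < 0 || j >= table.length) throw new ArrayIndexOutOfBoundsException(j);
result ^= i + table[j];

// After: j = x & (table.length - 1) can only be out of bounds when table.length == 0,
// so the full check degenerates into a zero-length check ...
final int j = x & (table.length - 1);
// implicit: if (table.length == 0) throw new ArrayIndexOutOfBoundsException(j);
result ^= i + table[j];

// ... which is loop-invariant and can be hoisted out of the loop by the
// existing optimizations (guarded so it only fires if the loop is entered).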
I'll post this to the hotspot-dev mailing lists.
News
John Rose filed an RFE and there's already a "quick-and-dirty" patch.
I've extended a benchmark by Marko Topolnik:
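As a rough idea of the setup, a minimal JMH skeleton for such a comparison could look like this (names and parameters are made up for illustration and are not the actual extended benchmark):

import java.util.Random;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class MaskedIndexBench {
    @Param("1024")          // power-of-two table size
    int size;

    int[] table;
    int x;

    @Setup
    public void setup() {
        table = new int[size];
        Random r = new Random(42);
        for (int i = 0; i < size; i++) {
            table[i] = r.nextInt();
        }
    }

    @Benchmark
    public int normalIndex() {
        int result = 0;
        for (int i = 0; i <= table.length - 1; ++i) {
            x += result;
            final int j = x & (table.length - 1);
            result ^= table[i] + j;   // array access via the loop counter i
        }
        return result;
    }

    @Benchmark
    public int maskedIndex() {
        int result = 0;
        for (int i = 0; i <= table.length - 1; ++i) {
            x += result;
            final int j = x & (table.length - 1);
            result ^= i + table[j];   // array access via the masked value j
        }
        return result;
    }
}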
Results:
2. The second question is for hotspot-dev mailing lists rather than StackOverflow, IMHO.
In order to safely eliminate that bounds check, it is necessary to prove that x & (table.length - 1) is guaranteed to produce a valid index into table. It won't if table.length is zero (as you'll end up with & -1, an effective no-op). It also won't usefully do it if table.length is not a power of 2 (you'll lose information; consider the case where table.length is 17).

How can the HotSpot compiler know that these bad conditions are not true? It has to be more conservative than a programmer can be, as the programmer can know more about the high-level constraints on the system (e.g., that the array is never empty and always has a number of elements that is a power of two).
To start off, the main difference between your two tests is definitely in bounds check elimination; however, the way this influences the machine code is far from what the naïve expectation would suggest.
My conjecture:
The bounds check figures more strongly as a loop exit point than as additional code which introduces overhead.
The loop exit point prevents the following optimization which I have culled from the emitted machine code:
If the loop can break out at any step, this staging would result in work performed for loop steps which were never actually taken.
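Purely as a generic illustration of such staging (not the machine code in question), overlapping iterations looks roughly like this:

static int pipelined(int[] table) {
    // Conceptual sketch only: the load for step i+1 is issued while step i is
    // still being computed (staging work across iterations).
    int result = 0;
    int next = table[0];
    for (int i = 0; i < table.length; ++i) {
        int current = next;
        // Staged work for step i+1, performed during step i:
        next = table[(i + 1) & (table.length - 1)];
        result ^= current;
        // If any step could break out of the loop here, the staged load
        // above would be wasted work for a step that never runs.
    }
    return result;
}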
Consider this slight modification of your code:
There is just one difference: I have added a check to give the loop a way to exit prematurely on any step. (I also introduced a guard to ensure no array entries are actually 0.)
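One way such an early-exit check can be written (purely illustrative; the actual modification may differ) is:

for (int i = 0; i <= table.length - 1; ++i) {
    x += result;
    final int j = x & (table.length - 1);
    final int current = table[j];
    if (current == 0) break;   // exit path available on any step; never actually taken,
                               // because the setup guarantees no entry is 0
    result ^= i + current;
}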
On my machine, this is the result:
the "normal index" variant is substantially faster, as generally expected.
However, let us remove the additional check:
Now my results are these:
"Masked index" responded predictably (reduced overhead), but "normal index" is suddenly much worse. This is apparently due to a bad fit between the additional optimization step and my specific CPU model.
My point:
The performance model at such a detailed level is very unstable and, as witnessed on my CPU, even erratic.