Recently I came to know that the mod('%') operator is very slow. So I made a function which will work just like a%b. But is it faster than the mod operator?
Here's my function
int mod(int a, int b)
{
    int tmp = a / b;       /* truncated quotient a/b */
    return a - (b * tmp);  /* equals a % b (for b != 0, no overflow) */
}
According to Chandler Carruth's benchmarks at CppCon 2015, the fastest modulo operator (on x86, when compiled with Clang) is:
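Roughly, the trick he benchmarks is to pay for the division only when the input can actually exceed the base. A sketch of that idea (fast_mod is an illustrative name, not the exact code from the talk):

// Apply the remainder operation only when it is actually needed,
// i.e. when the input can wrap around the ceiling.
int fast_mod(const int input, const int ceil)
{
    return input >= ceil ? input % ceil : input;
}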
I suggest that you watch the whole talk; he goes into more detail on why this method is faster than just using % unconditionally.

This will likely be compiler and platform dependent.
But I was interested, and in my benchmarks on my system you appear to be correct. However, the method from @865719's answer is the fastest:
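A rough sketch of such a comparison (the function names, iteration count, and input generation are placeholder choices, not the original benchmark):

#include <chrono>
#include <cstdio>
#include <cstdlib>

static int mod_op(int a, int b)   { return a % b; }                 // plain operator
static int mod_div(int a, int b)  { return a - b * (a / b); }       // function from the question
static int mod_cond(int a, int b) { return a >= b ? a % b : a; }    // conditional version

// Time one variant over varying inputs; the volatile sink keeps the
// compiler from discarding the result being measured.
template <typename F>
static double time_ms(F f)
{
    volatile int sink = 0;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 10000000; ++i)
        sink = f(std::rand(), (i & 1023) + 1);
    auto stop = std::chrono::steady_clock::now();
    (void)sink;
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count() / 1000.0;
}

int main()
{
    std::printf("%% operator   : %.1f ms\n", time_ms(mod_op));
    std::printf("a - b*(a/b)  : %.1f ms\n", time_ms(mod_div));
    std::printf("conditional %%: %.1f ms\n", time_ms(mod_cond));
    return 0;
}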
Build: GCC 5.1.1 (x86_64)
Output:
Most of the time, your micro-optimized code will not beat the compiler. I also don't know where that "wisdom" comes from that claims the built-in % is slow. It is just as fast as the machine can calculate it, with all the micro-optimizations the compiler can do for you.

Also note that measuring the performance of such very small pieces of code is not an easy task. The conditionals of a loop construct or the jitter of your time measurement might dominate your results. You can find talks on such issues by people like Andrei Alexandrescu or Chandler Carruth on YouTube. I once wrote a micro-benchmarking framework for a project I was working on. There is really a lot to take care of, including external factors like the OS preempting your thread or moving it to another core.
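To illustrate the point about the compiler's own micro-optimizations: for a constant divisor, mainstream compilers already turn % into cheaper instruction sequences, so a hand-written divide-and-multiply has nothing left to save. A rough illustration only; the exact code generation depends on compiler, flags and target:

unsigned pow2_mod(unsigned x)  { return x % 16u; }     // typically compiled to x & 15, no divide
unsigned const_mod(unsigned x) { return x % 65521u; }  // typically a multiply/shift sequence, no divide
unsigned hand_mod(unsigned x, unsigned b) { return x - b * (x / b); } // still performs the divide that % would use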
It is often possible for a programmer to beat the performance of the remainder operation in cases where a programmer knows things about the operands that the compiler doesn't. For example, if the base is likely to be a power of 2, but is not particularly likely to be larger than the value to be reduced, one could use something like:
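A sketch of that idea (assuming unsigned operands and base != 0; the name is illustrative):

// Use a cheap bitwise AND when the base happens to be a power of two,
// otherwise fall back to the ordinary remainder operator.
unsigned mod_pow2_aware(unsigned x, unsigned base)
{
    if ((base & (base - 1)) == 0)   // true when base is a power of two
        return x & (base - 1);
    return x % base;
}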
If the compiler expands the function in-line and the base is a constant power of 2, the compiler will replace the remainder operator with a bitwise AND, which is apt to be a major improvement. In cases where the base isn't a constant power of two, the generated code would need to do a little bit of computation before selecting whether to use the remainder operator, but in cases where the base happens to be a power of two the cost savings of the bitwise AND may exceed the cost of the conditional logic.
Another scenario where a custom modulus function may help is when the base is a fixed constant for which the compiler hasn't made provisions to compute the remainder. For example, if one wants to compute x % 65521 on a platform which can perform rapid integer shifts and multiplies, one may observe that computing
x -= (x>>16)*65521;
will cause x to be much smaller but will not affect the value of x % 65521. Doing the operation a second time will reduce x to the range 0..65745, small enough that a single conditional subtraction will yield the correct remainder.

Some compilers may know how to use such techniques to handle the % operator efficiently with a constant base, but for those that don't, the approach can be a useful optimization, especially when dealing with numbers larger than a machine word [observe that 65521 is 65536-15, so on a 16-bit machine one could evaluate x as x = (x & 65535) + 15*(x >> 16). Not as readable as the form which subtracts 65521 * (x >> 16), but it's easy to see how it could be handled efficiently on a 16-bit machine].
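Put together, the reduction described above might look like this (a sketch assuming 32-bit unsigned arithmetic; the function name is illustrative, not part of the original answer):

unsigned mod65521(unsigned x)
{
    x -= (x >> 16) * 65521u;   // x is now at most 1048560; x % 65521 is unchanged
    x -= (x >> 16) * 65521u;   // x is now in the range 0..65745
    if (x >= 65521u)           // one conditional subtraction yields the remainder
        x -= 65521u;
    return x;
}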