I was wondering what the benefits are of truncation towards minus infinity (Haskell, Ruby) instead of truncation towards zero (C, PHP), from the perspective of programming language/compiler implementation.
It seems that truncating towards minus infinity is the right way to go, but I haven't found a reliable source for that claim, nor for how the decision impacts compiler implementation. I'm particularly interested in possible compiler optimizations, but not exclusively.
Related sources:
Division in Haskell
When is the difference between quotRem and divMod useful?
These are actually not the only choices and, in fact, perhaps not even usually the best. I could summarize here, but it is perhaps better to just link to this excellent paper that contrasts truncated, floored, and Euclidean division, covering the theory and some real-world applications: The Euclidean Definition of the Functions div and mod, Raymond T. Boute.
Here is a quote from the (informative) Rationale in Annex C of ISO/IEC 10967-1:2012 Language independent arithmetic (LIA-1), section C.5.1.2.2. Ellipses (...) inserted by me.
... Two rounding rules are in common use: round toward minus infinity (quotI), and round toward zero. The latter is not specified in LIA-1, due to proneness for erroneous use, when the arguments are of different signs. For example,
quotI(-3,2) = -2 round toward minus infinity, specified in LIA-1
divtI(-3,2) = -1 round toward zero, no longer specified by any part of LIA
quotI ... as well as ... all satisfy a broadly useful translation invariant:
quotI(x + i * y, y) = quotI( x, y) + i if y ≠ 0, and no overflow occurs
... quotI is the form of integer division preferred by many mathematicians. divtI (no longer specified by LIA) is the form of division introduced by Fortran.
Integer division is frequently used for grouping. For example, if a series of indexed items are to be partitioned into groups of n items, it is natural to put item i into group i/n. This works fine if quotI is used for integer division. However, if divtI (no longer specified in LIA) is used, and i can be negative, group 0 will get 2·n − 1 items rather than the desired n. This uneven behaviour for negative i can cause subtle program errors, and is a strong reason against the use of divtI ...
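The grouping behaviour described in the quote is easy to check in Python, whose // already rounds toward minus infinity. In this sketch, the helper name trunc_div and the group size n = 3 are my own choices for illustration:

```python
n = 3  # group size (illustrative)

def trunc_div(a, b):
    """Integer division rounding toward zero, like C's / operator."""
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

items = range(-4, 5)
floor_groups = [i // n for i in items]        # rounds toward -inf
trunc_groups = [trunc_div(i, n) for i in items]

# With floored division, group 0 gets exactly n items (0, 1, 2);
# with truncated division it gets 2*n - 1 items (-2 .. 2).
assert floor_groups.count(0) == n
assert trunc_groups.count(0) == 2 * n - 1

# The translation invariance quoted above also holds for //:
for x in range(-10, 11):
    for i in range(-4, 5):
        assert (x + i * n) // n == x // n + i
```

With floored division, every consecutive block of n indices lands in its own group, negative or not; the truncated version pinches the groups together around zero.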
There are various trade-offs among the different kinds of rounding in integer division. And as Jake McArthur mentioned, these are not the only ones; for example, there is also rounding to the nearest integer.
Another consideration is that integer division and remainder go hand in hand: the law quotient * divisor + remainder = dividend holds. So different kinds of division rounding produce different kinds of remainder. For example:
- Division rounding towards zero produces a remainder that always has the same sign as the dividend. For example, in C and Java, (-5) % 3 = -2 (because (-5) / 3 = -1, and (-1) * 3 + (-2) = -5); whereas 5 % (-3) = 2 (because 5 / (-3) = -1, and (-1) * (-3) + 2 = 5).
- Division rounding towards negative infinity produces a remainder that always has the same sign as the divisor. For example, in Python and Ruby, (-5) % 3 = 1 (because (-5) / 3 = -2, and (-2) * 3 + 1 = -5); whereas 5 % (-3) = -1 (because 5 / (-3) = -2, and (-2) * (-3) + (-1) = 5).
- Division rounding towards the nearest integer produces the remainder (out of the two possible ones) that is smallest in absolute value, i.e. closest to zero.
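The first two conventions can be checked in Python, whose built-in divmod is floored; the truncating pair is emulated here with a helper I've named trunc_divmod for this sketch:

```python
def trunc_divmod(a, b):
    """Truncated divmod, matching C/Java semantics (sketch)."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return q, a - q * b

# Floored (Python/Ruby): remainder has the sign of the divisor
assert divmod(-5, 3) == (-2, 1)
assert divmod(5, -3) == (-2, -1)

# Truncated (C/Java): remainder has the sign of the dividend
assert trunc_divmod(-5, 3) == (-1, -2)
assert trunc_divmod(5, -3) == (-1, 2)

# Both satisfy quotient * divisor + remainder == dividend
for a, b in [(-5, 3), (5, -3), (7, 2), (-7, -2)]:
    for q, r in (divmod(a, b), trunc_divmod(a, b)):
        assert q * b + r == a
```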
Having a remainder with the same sign as the divisor is often the most useful in math and computer science. One often needs to compute the remainder with a fixed divisor, for example x % 2. In languages where the remainder has the sign of the dividend, like C and Java, this expression can evaluate to -1, 0, or 1, depending on x. However, in languages where the remainder has the sign of the divisor, like Python and Ruby, it can only evaluate to 0 (if x is even) or 1 (if x is odd). This is probably much more in line with what is intended.
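For instance, a parity test written as x % 2 == 1 (the helper name is_odd is hypothetical) is correct for all integers under floored semantics, but not under truncated semantics:

```python
def is_odd(x):
    # Correct in Python/Ruby for any integer, because x % 2 is 0 or 1
    return x % 2 == 1

assert is_odd(-3) and is_odd(3)
assert not is_odd(-4) and not is_odd(4)

# Under C/Java semantics, (-3) % 2 would be -1, so the same comparison
# against 1 would wrongly report -3 as even.
```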
I believe that many processor architectures, including x86, contain an instruction for integer division that rounds towards zero. So that form may be more efficient to compute on most machines. I am not sure whether any common architecture has an instruction for integer division that rounds towards negative infinity.
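On such hardware, a compiler for a language with floored semantics typically lowers div/mod to the truncating instruction plus a small sign fix-up. The following is a minimal sketch of that fix-up in Python (floor_divmod is my own name; a real compiler would do this at the IR or instruction level, not with library calls):

```python
def floor_divmod(a, b):
    """Floored divmod built from a truncated quotient, mirroring the
    fix-up a compiler might emit around a trunc-dividing instruction."""
    # Truncated quotient/remainder, as e.g. x86 IDIV would produce
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    r = a - q * b
    # One conditional fix-up: if the remainder is nonzero and its sign
    # disagrees with the divisor's, step the quotient down by one and
    # shift the remainder by the divisor.
    if r != 0 and (r < 0) != (b < 0):
        q -= 1
        r += b
    return q, r

# Agrees with Python's native floored divmod on all sign combinations
for a in range(-7, 8):
    for b in (-3, -2, 2, 3):
        assert floor_divmod(a, b) == divmod(a, b)
```

Note the cost asymmetry this implies: when the source language's rounding matches the hardware's, the operation is a single instruction; otherwise each division carries an extra compare-and-adjust, unless the compiler can prove the operands' signs (e.g. an unsigned or provably non-negative index), in which case the fix-up is optimized away.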