As we know, because of the limited precision of double, the following two calculations may not give exactly the same value:
A / B / C and
A / ( B * C )
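To make this concrete, here is a minimal C sketch (the divisor range searched is arbitrary, chosen only for illustration) that looks for a pair B, C where the two expressions disagree:

    #include <stdio.h>

    int main(void)
    {
        /* Search small divisors for a pair where the two expressions
           round differently; the range 2..99 is arbitrary. */
        for (double b = 2; b < 100; b++)
            for (double c = 2; c < 100; c++)
                if (1.0 / b / c != 1.0 / (b * c)) {
                    /* 1/b rounded, then /c rounded, differs from
                       b*c rounded, then 1/(...) rounded. */
                    printf("1/%g/%g differs from 1/(%g*%g)\n", b, c, b, c);
                    return 0;
                }
        printf("no difference found in this range\n");
        return 0;
    }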
My question is: even with the same two variables, A and B, can the compiler guarantee that A / B yields the same value every time?
Or, to ask it in code: can we guarantee that the following condition always evaluates to true?

if ( A / B == A / B )
A guarantee of behavior for a compiler requires some document specifying the behavior. The answer therefore depends on the programming language, and, if the specification of the programming language is inadequate, the answer depends on the specific compiler used.
The question does not identify any specific programming language, let alone any specific compiler. Some programming languages have standards that specify many aspects of their behavior, while some programming languages are informal and do not clearly document behavior.
In the latter category, Python says that floating-point behavior is derived from whatever platform it is running on. So we cannot easily be sure what Python will do.
The C standard is not completely clear about floating-point behavior. I opened this question to seek clarification, and, so far, my interpretation is that the standard ought to be interpreted as requiring implementations to use one format of their choice to evaluate floating-point operations of a particular type, but compilers have not historically conformed to this. For example, older versions of GCC could evaluate expressions with extended precision but convert to nominal precision at unpredictable times. This could result in A / B == A / B evaluating to false (even excluding NaNs, which I assume for purposes of discussion here).
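As a hedged illustration of how that could happen, here is a C sketch; it assumes an x87 target with historical GCC code generation (e.g. gcc -m32 -O2 without -ffloat-store or -fexcess-precision=standard), and on modern SSE-based targets it will normally print "equal":

    #include <stdio.h>

    int main(void)
    {
        /* volatile keeps the compiler from folding the divisions away. */
        volatile double a = 1.0, b = 3.0;

        double q = a / b;   /* may be rounded to nominal 64-bit precision */

        /* Under historical x87 code generation, the right-hand a / b may
           be compared while it still carries 80-bit excess precision, so
           this test can fail even though both sides look identical. */
        if (q == a / b)
            printf("equal\n");
        else
            printf("unequal\n");
        return 0;
    }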
The Java specification is more specific about floating-point operations and specifies conformance to IEEE 754, but it defines both FP-strict and non-FP-strict modes, and a cursory examination suggests that non-FP-strict mode could allow A / B to compare unequal to A / B.