Assuming two arbitrary timestamps:
uint32_t timestamp1;
uint32_t timestamp2;
Is there a standard-conforming way to get a signed difference of the two, besides the obvious variants of converting into a bigger signed type and the rather verbose if-else?
Beforehand it is not known which one is larger, but it is known that the difference is at most 20 bits in magnitude, so it will fit into a 32-bit signed integer.
int32_t difference = (int32_t)( (int64_t)timestamp1 - (int64_t)timestamp2 );
This variant has the disadvantage that 64-bit arithmetic may not be supported by the hardware, and of course it is only possible if a larger type exists at all (what if the timestamp is already 64-bit?).
The other version
int32_t difference;
if (timestamp1 > timestamp2) {
    difference = (int32_t)(timestamp1 - timestamp2);
} else {
    difference = -((int32_t)(timestamp2 - timestamp1));
}
is quite verbose and involves conditional jumps.
That leaves the plain unsigned subtraction with a cast:
int32_t difference = (int32_t)(timestamp1 - timestamp2);
Is this guaranteed to work from a standards perspective?
Rebranding Ian Abbott's macro-packaging of Bathsheba's answer as an answer:
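A sketch of one way such a macro could package the union pun from Bathsheba's answer below (the macro name and member names here are illustrative, not the original):

#define UTOS32(a) ((union { uint32_t _unsigned; int32_t _signed; }){ ._unsigned = (a) }._signed)

int32_t difference = UTOS32(timestamp1 - timestamp2);

The compound literal keeps the pun inside a single expression, so it can be dropped in wherever the cast would otherwise have been written.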
Summarizing the discussions on why this is more portable than a simple typecast: the C standard (back to C99, at least) specifies the representation of int32_t (it must be two's complement), but it does not in all cases specify how a value is converted to it from uint32_t.

Finally, note that Ian's macro, Bathsheba's answer, and M.M's answer all also work in the more general case where the counters are allowed to wrap around 0, as is the case, for example, with TCP sequence numbers.
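For example (illustrative values, using the illustrative UTOS32 macro above), the pun gives the correct small signed difference even when the newer timestamp has wrapped past 0:

uint32_t t_old = 0xFFFFFFF0u;            /* 16 ticks before the wrap-around */
uint32_t t_new = 0x00000014u;            /* 20 ticks after the wrap-around  */
int32_t  diff  = UTOS32(t_new - t_old);  /* modular subtraction yields 36, read back as +36 */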
Bathsheba's answer is correct, but for completeness here are two more ways (which happen to work in C++ as well):
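The first (a sketch, assuming a memcpy of the bit pattern is what was meant):

uint32_t udiff = timestamp1 - timestamp2;        /* well-defined modular subtraction */
int32_t difference;
memcpy(&difference, &udiff, sizeof difference);  /* <string.h>; copies the bit pattern into the signed object */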
and
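the second (a sketch of reading the unsigned result through a pointer to the corresponding signed type):

uint32_t udiff = timestamp1 - timestamp2;
int32_t difference = *(int32_t *)&udiff;         /* pun between unsigned and signed versions of the type */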
The latter is not a strict aliasing violation because that rule explicitly allows punning between signed and unsigned versions of an integer type.
The suggestion:
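int32_t difference = (int32_t)(timestamp1 - timestamp2);   /* the cast proposed in the question */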
will work on any actual machine that exists and offers the int32_t type, but technically it is not guaranteed by the standard (the result is implementation-defined).

The conversion of an unsigned integer value to a signed integer type that cannot represent it is implementation-defined. This is spelled out in section 6.3.1.3 of the C standard regarding integer conversions:
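"Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised." (6.3.1.3 paragraph 3)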
On implementations people are most likely to use, the conversion will occur the way you expect, i.e. the representation of the unsigned value will be reinterpreted as a signed value.
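For example, on such an implementation (a sketch; the values are illustrative):

uint32_t u = 0xFFFFFFF6u;   /* 4294967286 */
int32_t  s = (int32_t)u;    /* -10: the same bit pattern read as two's-complement signed */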
Specifically, GCC documents the following:
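"For conversion to a type of width N, the value is reduced modulo 2^N to be within range of the type; no signal is raised." (GCC's documented implementation-defined behaviour for integer conversions)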
MSVC's documentation makes a similar guarantee: the least-significant bytes of the value are retained, so the bit pattern is preserved.
So for these implementations, what you proposed will work.
You can use a union type pun between the unsigned and signed 32-bit types: perform the calculation in unsigned arithmetic, assign the result to the _unsigned member, then read the _signed member of the union as the result.
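A minimal sketch (the union's exact definition is assumed here; only the member names come from the description above):

typedef union
{
    uint32_t _unsigned;
    int32_t  _signed;
} s32pun;                                             /* illustrative type name */

s32pun r = { ._unsigned = timestamp1 - timestamp2 };  /* unsigned, wrap-safe subtraction */
int32_t difference = r._signed;                       /* same bits, read as signed */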
This is portable to any platform that implements the fixed-width types upon which we are relying (they don't need to). 2's complement is guaranteed for the signed member and, at the "machine" level, 2's complement signed arithmetic is indistinguishable from unsigned arithmetic. There's no conversion or memcpy-type overhead here: a good compiler will compile out what is essentially standardese syntactic sugar.

(Note that this is undefined behaviour in C++.)