As far as I am aware, decimal and hexadecimal are simply different representations of (let's say) an int. This means that if I define an integer x, I should be able to print x as:
- a decimal: printf("%d", x);
- a hexadecimal: printf("%x", x);
What I don't understand is how this behaves when x exceeds INT_MAX. Take the code below, for example:
#include <stdio.h>

int main(int argc, char** argv) {
    // Define two numbers that are both less than INT_MAX
    int a = 808548400;
    int b = 2016424312;

    int theSum = a + b;     // 2824972712 -> larger than INT_MAX
    printf("%d\n", theSum); // -1469994584 -> overflowed
    printf("%x\n", theSum); // a861a9a8 -> correct representation
}
As my comments suggest, the sum of these two numbers is larger than INT_MAX. This number has overflowed when printed as a decimal (as I would expect), but when printed as hexadecimal it appears to be perfectly fine.
Interestingly, if I continue adding to this number, and cause it to overflow again, it returns to representing the decimal number correctly. The hexadecimal number is always correct.
Could anyone explain why this is the case?
TIA
Citing n1570 (the latest C11 draft), §7.21.6.1 p8:

> o, u, x, X — The unsigned int argument is converted to unsigned octal (o), unsigned decimal (u), or unsigned hexadecimal notation (x or X) in the style dddd; the letters abcdef are used for x conversion and the letters ABCDEF for X conversion. [...]
So, in a nutshell, you're using a conversion specifier for the wrong type here: %x expects an unsigned int, but theSum is a (signed) int. On top of that, your signed overflow is undefined behavior, but your implementation obviously uses 2's complement for negative numbers and the overflow results in a wraparound on your system, so the stored value has exactly the same representation as the correct unsigned number.
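To see this concretely, here is a minimal sketch (assuming a 32-bit int on a 2's complement machine, like yours; the names a, b, and uSum just mirror your example) that redoes the addition in unsigned arithmetic, where wraparound is well defined rather than undefined:

#include <stdio.h>

int main(void) {
    unsigned int a = 808548400u;
    unsigned int b = 2016424312u;

    // Unsigned arithmetic is defined to wrap modulo UINT_MAX + 1, so this sum
    // is well defined; with a 32-bit unsigned int it is 2824972712 (no wrap needed).
    unsigned int uSum = a + b;

    printf("%u\n", uSum);       // 2824972712
    printf("%x\n", uSum);       // a861a9a8

    // Converting that out-of-range value back to int is implementation-defined;
    // on a 2's complement system it gives -1469994584, matching your output.
    printf("%d\n", (int)uSum);
}

The bit pattern that %x prints is the same either way; only the interpretation of it as signed or unsigned differs.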
If you used %u instead of %x, you would see the same number in decimal notation. Again, this would be the result of undefined behavior that merely happens to look "correct" on your system. Always avoid signed overflow; the result is allowed to be anything.
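If you need the mathematically correct sum, one way to stay away from signed overflow entirely is to check the operands first or to do the arithmetic in a wider type. This is only a sketch, and the helper name add_would_overflow is mine, not anything from the standard library:

#include <limits.h>
#include <stdio.h>

// Illustrative helper: returns 1 if a + b would overflow a signed int.
static int add_would_overflow(int a, int b) {
    if (b > 0 && a > INT_MAX - b) return 1; // would exceed INT_MAX
    if (b < 0 && a < INT_MIN - b) return 1; // would drop below INT_MIN
    return 0;
}

int main(void) {
    int a = 808548400;
    int b = 2016424312;

    if (add_would_overflow(a, b)) {
        // Do the arithmetic in a wider type; long long is at least 64 bits.
        long long wide = (long long)a + b;
        printf("%lld\n", wide);             // 2824972712
    } else {
        printf("%d\n", a + b);
    }
}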