As far as I am aware, decimal and hexadecimal are simply different representations of the same underlying value (let's say an int). This means that if I define an integer x, I should be able to print it as:
- a decimal: printf("%d", x);
- a hexadecimal: printf("%x", x);
What I don't understand is how this behaves when x exceeds INT_MAX. Take the code below, for example:
#include <stdio.h>

int main(int argc, char** argv) {
    // Define two numbers that are both less than INT_MAX
    int a = 808548400;
    int b = 2016424312;
    int theSum = a + b;     // 2824972712 -> larger than INT_MAX
    printf("%d\n", theSum); // -1469994584 -> overflowed
    printf("%x\n", theSum); // a861a9a8 -> correct representation
}
As my comments suggest, the sum of these two numbers is larger than INT_MAX, so the addition overflows. When printed as a decimal, the result is a negative number (as I would expect), but when printed as hexadecimal it appears to be perfectly fine.
Interestingly, if I keep adding to this number and cause it to overflow again, the decimal representation becomes correct once more. The hexadecimal representation is always correct.
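To make it concrete, here is a small sketch of what I mean (the %u line and the unsigned casts are my own additions, not part of the original program): the same bit pattern gives the "wrong" signed decimal but the "expected" unsigned decimal and hex on my machine:

#include <stdio.h>

int main(void) {
    int a = 808548400;
    int b = 2016424312;
    int theSum = a + b;

    // The same 32 bits, interpreted three different ways
    printf("%d\n", theSum);               // -1469994584 (signed decimal)
    printf("%u\n", (unsigned int)theSum); // 2824972712  (unsigned decimal)
    printf("%x\n", (unsigned int)theSum); // a861a9a8    (hexadecimal)
    return 0;
}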
Could anyone explain why this is the case?
TIA