Am I correct to say the difference between a signed and unsigned integer is:
- Unsigned can hold a larger positive value, and no negative value.
- Unsigned uses the leading bit as part of the value, while the signed version uses the left-most bit to identify whether the number is positive or negative.
- Signed integers can hold both positive and negative numbers.
Any other differences?
Signed integers in C represent numbers. If `a` and `b` are variables of signed integer types, the standard will never require that a compiler make the expression `a += b` store into `a` anything other than the arithmetic sum of their respective values. To be sure, if the arithmetic sum would not fit into `a`, the processor might not be able to put it there, but the standard would not require the compiler to truncate or wrap the value, or do anything else for that matter, for values that exceed the limits of their types. Note that while the standard does not require it, C implementations are allowed to trap arithmetic overflows with signed values.

Unsigned integers in C behave as abstract algebraic rings of integers which are congruent modulo some power of two, except in scenarios involving conversions to, or operations with, larger types. Converting an integer of any size to a 32-bit unsigned type will yield the member corresponding to things which are congruent to that integer mod 4,294,967,296. The reason subtracting 3 from 2 yields 4,294,967,295 is that adding something congruent to 3 to something congruent to 4,294,967,295 yields something congruent to 2.
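As a minimal sketch of that wraparound (assuming a typical platform where `unsigned int` is 32 bits, so the first value printed is 4,294,967,295):

```c
#include <stdio.h>

int main(void)
{
    unsigned int a = 2;
    unsigned int b = 3;

    /* Subtraction wraps modulo UINT_MAX + 1; with a 32-bit unsigned int
     * this prints 4294967295, i.e. the value congruent to -1 mod 2^32. */
    printf("%u\n", a - b);

    /* Adding 3 back lands on something congruent to 2 again. */
    printf("%u\n", (a - b) + b);
    return 0;
}
```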
Abstract algebraic ring types are often handy things to have; unfortunately, C uses signedness as the deciding factor for whether a type should behave as a ring. Worse, unsigned values are treated as numbers rather than ring members when converted to larger types, and unsigned values smaller than `int` get converted to numbers when any arithmetic is performed upon them. If `v` is a `uint32_t` which equals 4,294,967,294, then `v *= v;` should make `v` equal 4. Unfortunately, if `int` is 64 bits, then there's no telling what `v *= v;` could do.

Given the standard as it is, I would suggest using unsigned types in situations where one wants the behavior associated with algebraic rings, and signed types when one wants to represent numbers. It's unfortunate that C drew the distinctions the way it did, but they are what they are.
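A short sketch of that hazard and one common workaround; the problematic case only arises on an implementation where `int` is wider than 32 bits, which is hypothetical today, so on a typical platform both lines print 4:

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t v = 4294967294u;   /* 2^32 - 2 */

    /* If int is 32 bits, uint32_t does not promote to a signed type and
     * the product is reduced mod 2^32, giving 4.  If int were 64 bits,
     * v would promote to int, the product would overflow the signed
     * type, and the behavior would be undefined. */
    uint32_t risky = v * v;

    /* Mixing in 1u forces the arithmetic into an unsigned type of at
     * least int's rank, so the operation stays modular (and defined)
     * on any conforming implementation. */
    uint32_t safe = (uint32_t)(1u * v * v);

    printf("%" PRIu32 " %" PRIu32 "\n", risky, safe);
    return 0;
}
```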
You should use unsigned integers when programming on embedded systems. In loops, when there is no need for signed values, using unsigned integers saves resources that can matter when designing such systems.
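For illustration (a sketch, not taken from the answer above), a buffer-summing loop where `size_t`, an unsigned type, is the natural counter:

```c
#include <stddef.h>
#include <stdint.h>

/* Sum a buffer using an unsigned loop counter; size_t is unsigned and
 * matches the type of sizeof and of array lengths. */
uint32_t sum_buffer(const uint8_t *buf, size_t len)
{
    uint32_t total = 0;
    for (size_t i = 0; i < len; i++) {
        total += buf[i];
    }
    return total;
}
```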
The only guaranteed difference between a signed and an unsigned value in C is that the signed value can be negative, zero or positive, while an unsigned value can only be zero or positive. The problem is that C doesn't define the representation of integer types (so you don't know that your integers are stored in two's complement). Strictly speaking, the first two points you mentioned are incorrect.
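As a sketch of how little the standard pins down here, the relationship between `INT_MIN` and `INT_MAX` reveals which representation is in use (C23 finally mandates two's complement, but earlier standards also allowed ones' complement and sign-magnitude):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* In two's complement INT_MIN is -INT_MAX - 1, so the sum is -1;
     * in ones' complement or sign-magnitude INT_MIN is -INT_MAX and
     * the sum is 0.  The addition itself cannot overflow. */
    if (INT_MIN + INT_MAX == -1) {
        puts("two's complement");
    } else {
        puts("ones' complement or sign-magnitude");
    }
    return 0;
}
```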
Unsigned integers are far more likely to catch you in a particular trap than signed integers are. The trap comes from the fact that while points 1 and 3 above are correct, both types of integers can be assigned a value outside the bounds of what they can "hold", and it will be silently converted.
If you assign -1 to both a signed and an unsigned integer and print them, you'll get different output even though the two declarations differ only in signedness.
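The original snippet isn't shown here; a minimal reconstruction of the idea might look like this:

```c
#include <stdio.h>

int main(void)
{
    int signed_value = -1;
    unsigned int unsigned_value = -1;   /* -1 is silently converted to UINT_MAX */

    /* On a platform with 32-bit int this prints:
     *   signed:   -1
     *   unsigned: 4294967295 */
    printf("signed:   %d\n", signed_value);
    printf("unsigned: %u\n", unsigned_value);
    return 0;
}
```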
Over and above what others have said, in C you cannot overflow an unsigned integer; the behaviour is defined to be modular arithmetic. You can overflow a signed integer and, in theory (though not in practice on current mainstream systems), the overflow could trigger a fault (perhaps similar to a divide-by-zero fault).
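A short sketch of that asymmetry (the commented-out line is left out precisely because its behaviour is undefined):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int u = UINT_MAX;
    u = u + 1;                  /* well defined: wraps around to 0 */
    printf("%u\n", u);

    int s = INT_MAX;
    /* s = s + 1; */            /* undefined behaviour: may wrap, trap,
                                 * or be optimized away entirely */
    (void)s;
    return 0;
}
```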