Apparently there are architectures that don't have 8-bit bytes.
It would seem that such architectures would preclude the existence of an int8_t type (defined in stdint.h), since C, as I understand it, cannot create data types narrower than CHAR_BIT bits.
That said, the IEEE definition of stdint.h seems to require that such a type exist (along with others), allowing only the 64-bit types to be absent on architectures that do not support them.
Am I missing something?
EDIT: As @JasonD points out in the comments below, the linked page states at the end:
As a consequence of adding int8_t, the following are true:
A byte is exactly 8 bits.
{CHAR_BIT} has the value 8, {SCHAR_MAX} has the value 127, {SCHAR_MIN} has the value -128, and {UCHAR_MAX} has the value 255.
In other words, the linked IEEE page simply does not apply to architectures with byte lengths other than 8. This is in line with POSIX, which requires an 8-bit char.
-- Before edit --
The explanation is in a note on the page you linked to:
The "width" of an integer type is the number of bits used to store its value in a pure binary system; the actual type may use more bits than that (for example, a 28-bit type could be stored in 32 bits of actual storage)
Just because an architecture doesn't handle 8-bit bytes natively doesn't preclude an exact 8-bit integral type: the arithmetic can be handled with shifts and masks on wider registers to 'emulate' 8-bit arithmetic.