I have found that the C99 standard has a statement that denies compatibility between the type char and the types signed char/unsigned char.
Note 35 of the C99 standard:
CHAR_MIN, defined in limits.h, will have one of the values 0 or SCHAR_MIN, and this can be used to distinguish the two options. Irrespective of the choice made, char is a separate type from the other two and is not compatible with either.
My question is: why does the committee deny the compatibility? What is the rationale? If char were compatible with signed char or unsigned char, would something terrible happen?
The roots are in compiler history. There were (are) essentially two C dialects in the Eighties: those in which plain char is signed, and those in which plain char is unsigned.
Which of these should C89 have standardized? C89 chose to standardize neither, because it would have invalidated a large number of assumptions made in C code already written (what standards folks call the installed base). So C89 did what K&R did: leave the signedness of plain char implementation-defined. If you require a specific signedness, qualify your char. Modern compilers usually let you choose the dialect with an option (e.g. gcc's -funsigned-char).

The "terrible" thing that can happen if you ignore the distinction between (un)signed char and plain char is that if you do arithmetic and shifts without taking these details into account, you might get sign extensions when you don't expect them, or vice versa (or even undefined behavior when shifting).
There's also some dumb advice out there that recommends always declaring your chars with an explicit signed or unsigned qualifier. This works as long as you only work with pointers to such qualified types, but it requires ugly casts as soon as you deal with strings and string functions, all of which operate on pointer-to-plain-char, which is assignment-incompatible with the qualified variants. Such code ends up plastered with casts.
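A small sketch of that cast churn (the buffer name is just illustrative):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char buf[] = "hello";          /* "always qualify" style */

        /* strlen() takes const char *, so every call needs a cast. */
        size_t n = strlen((const char *)buf);

        /* Same story for strcpy, strcmp, printf("%s", ...), and so on. */
        printf("%zu\n", n);
        return 0;
    }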
The basic rules for chars are:

- Use plain char for strings and if you need to pass pointers to functions taking plain char.
- Use unsigned char if you need to do bit twiddling and shifting on bytes.
- Use signed char if you need small signed values, but think about using int if space is not a concern.

Think of signed char and unsigned char as the smallest arithmetic, integral types, just like signed short/unsigned short, and so forth with int, long int, and long long int. Those types are all well-specified.

On the other hand, char serves a very different purpose: it's the basic type of I/O and communication with the system. It's not meant for computations, but rather as the unit of data. That's why you find char used in the command line arguments, in the definition of "strings", in the FILE* functions and in other read/write I/O functions, as well as in the exception to the strict aliasing rule. This char type is deliberately less strictly defined, so as to allow every implementation to use the most "natural" representation. It's simply a matter of separating responsibilities.
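A minimal sketch of these rules in action, with a hypothetical hex_dump helper: plain char carries the text, unsigned char is used to look at the raw bytes.

    #include <stdio.h>
    #include <string.h>

    /* Inspect an object's bytes through unsigned char: well-defined values,
       and permitted by the aliasing rules. */
    static void hex_dump(const void *p, size_t n)
    {
        const unsigned char *bytes = p;
        for (size_t i = 0; i < n; i++)
            printf("%02X ", bytes[i]);
        putchar('\n');
    }

    int main(void)
    {
        char text[] = "abc";                /* plain char for strings and I/O */
        hex_dump(text, strlen(text));
        return 0;
    }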
(It is true, though, that char is layout-compatible with both signed char and unsigned char, so you may explicitly convert one to the other and back.)
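For example, a minimal sketch of such a round trip through explicit pointer conversions:

    #include <stdio.h>

    int main(void)
    {
        char s[] = "hi";

        /* Explicitly view the same array through unsigned char... */
        unsigned char *u = (unsigned char *)s;
        printf("%02X %02X\n", u[0], u[1]);

        /* ...and convert back; the representation is unchanged. */
        char *back = (char *)u;
        printf("%s\n", back);
        return 0;
    }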