Why is char not compatible with signed char or unsigned char?

Published 2019-04-06 12:48

Question:

I have found that the C99 standard has a statement denying the compatibility between the type char and the types signed char and unsigned char.

Footnote 35 of the C99 standard:

CHAR_MIN, defined in limits.h, will have one of the values 0 or SCHAR_MIN, and this can be used to distinguish the two options. Irrespective of the choice made, char is a separate type from the other two and is not compatible with either.

My question is: why does the committee deny the compatibility? What is the rationale? If char were compatible with signed char or unsigned char, would something terrible happen?

Answer 1:

The roots are in compiler history. There were (are) essentially two C dialects in the Eighties:

  1. Where plain char is signed
  2. Where plain char is unsigned

Which of these should C89 have standardized? C89 chose to standardize neither, because doing so would have invalidated a large number of assumptions made in C code already written (what standards folks call the installed base). So C89 did what K&R did: leave the signedness of plain char implementation-defined. If you require a specific signedness, qualify your char. Modern compilers usually let you choose the dialect with an option (e.g. gcc's -funsigned-char).

The "terrible" thing that can happen if you ignore the distinction between (un)signed char and plain char is that if you do arithmetic and shifts without taking these details into account, you might get sign extensions when you don't expect them or vice versa (or even undefined behavior when shifting).

There's also some dumb advice out there that recommends always declaring your chars with an explicit signed or unsigned qualifier. This works as long as you only work with pointers to such qualified types, but it requires ugly casts as soon as you deal with strings and string functions, all of which operate on pointer-to-plain-char, which is assignment-incompatible without a cast. Such code suddenly gets plastered with tons of ugly-to-the-bone casts.

The basic rules for chars are:

  • Use plain char for strings and if you need to pass pointers to functions taking plain char
  • Use unsigned char if you need to do bit twiddling and shifting on bytes
  • Use signed char if you need small signed values, but think about using int if space is not a concern


Answer 2:

Think of signed char and unsigned char as the smallest arithmetic, integral types, just like signed short/unsigned short, and so forth with int, long int, long long int. Those types are all well-specified.

On the other hand, char serves a very different purpose: it's the basic type of I/O and communication with the system. It's not meant for computations, but rather as the unit of data. That's why you find char in the command-line arguments, in the definition of "strings", in the FILE * functions and other read/write I/O functions, and in the exception to the strict aliasing rule. This char type is deliberately less strictly defined, so as to allow every implementation to use the most "natural" representation.

It's simply a matter of separating responsibilities.

(It is true, though, that char is layout-compatible with both signed char and unsigned char, so you may explicitly convert one to the other and back.)