Question:
What are the possible situations where we would need a signed char? I guess the only use of this is in the conversion of a char quantity to an integer.
Answer 1:
If I remember right, a "char" may be signed or unsigned (it depends on the compiler/implementation). If you need an unsigned char you should explicitly ask for it (with "unsigned char") and if you need a signed char you should explicitly ask for it (with "signed char").
A "char" is just a (typically 8-bit) integer. It has nothing to do with characters.
A character could be anything, depending on what you're doing. I prefer using "uint32_t" and Unicode (UTF-32). For crusty old/broken software that uses ASCII, a char is fine (regardless of whether "char" is signed or unsigned). For UTF-8 you'd probably want to use "unsigned char" or "uint8_t".
You might also be tempted to try to use "wchar_t" (and the "wchar.h" header), but there's lots of ways that can go wrong (do some research if you're tempted).
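To make that last point concrete, here is a minimal sketch (the two bytes are the UTF-8 encoding of 'é'; the plain-char output assumes char is signed on your compiler):
#include <stdio.h>
int main(void) {
    const char *s = "\xC3\xA9";  /* UTF-8 encoding of 'é' (two bytes) */
    /* If plain char is signed, bytes above 0x7F come out negative... */
    printf("as char: %d %d\n", s[0], s[1]);
    /* ...while unsigned char always yields the byte values 0..255. */
    const unsigned char *u = (const unsigned char *)s;
    printf("as unsigned char: %d %d\n", u[0], u[1]);
    return 0;
}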
Answer 2:
char is an integer, usually with a width of 8 bits. But because its signedness is implementation-defined (i.e., it depends on the compiler), it is probably not a good idea to use it for arithmetic. Use unsigned char or signed char instead, or, if you want to enforce the width, use uint8_t and int8_t from stdint.h.
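As a small sketch of the pitfall (the behaviour of the plain char line is implementation-defined, so your output may differ):
#include <stdio.h>
#include <stdint.h>
int main(void) {
    char c = 200;             /* implementation-defined: typically wraps to -56 if char is signed */
    printf("char: %d\n", c);  /* -56 with a signed char, 200 with an unsigned one */
    uint8_t u = 200;          /* always 0..255 */
    int8_t  s = -56;          /* always -128..127 */
    printf("uint8_t: %d, int8_t: %d\n", u, s);
    return 0;
}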
Answer 3:
The reason why compilers are allowed to make plain char signed is that back in the very early days of the C programming language, every integer type was signed. By the time unsigned types were added to the language, there must already have been too much existing code that did things like store -1 in a char variable as a sentinel value, so it was not feasible to change the compilers on existing systems such that char was unsigned. There probably wasn't any great pressure for unsigned chars anyway; the early development of C happened on 7-bit ASCII.
As C was ported to platforms with 8-bit printable characters (such as IBM mainframes speaking EBCDIC, or the PC), compilers there made char unsigned, because having a printable character with a negative value would be an even larger portability nightmare than not being able to store -1 in a char. On the other hand, this led to the current situation where portable code cannot make any assumptions about the signedness of char.
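If portable code does need to know, it can test the signedness at compile time through limits.h; a minimal sketch:
#include <limits.h>
#include <stdio.h>
int main(void) {
    /* CHAR_MIN is negative where plain char is signed, and 0 where it is unsigned. */
#if CHAR_MIN < 0
    printf("plain char is signed here\n");
#else
    printf("plain char is unsigned here\n");
#endif
    return 0;
}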
Answer 4:
In line with what you mentioned, char values are 8-bit integers. You wouldn't strictly need them to be negative for most practical purposes. Since they must be represented as bits and support arithmetic operations, they are promoted to int in expressions. Of course, you also have unsigned char.
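A small sketch of that promotion (integer promotion means char operands become int before arithmetic):
#include <stdio.h>
int main(void) {
    char a = 100, b = 100;
    int sum = a + b;                 /* both operands are promoted to int first */
    printf("%d\n", sum);             /* 200, even though 200 may not fit in a char */
    printf("%zu\n", sizeof(a + b));  /* sizeof(int): the sum is an int, not a char */
    return 0;
}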
Answer 5:
Anywhere you want to represent a value in the range [-128, 127], a signed char fits. If you have a struct with many fields that will be instantiated many times, it is worthwhile to keep the data types as small as possible.
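For instance, on a typical platform with 4-byte ints, the signed char version of this struct is a quarter of the size (a sketch; exact sizes depend on the ABI):
#include <stdio.h>
struct big   { int x, y, z; };           /* typically 12 bytes */
struct small { signed char x, y, z; };   /* typically 3 bytes */
int main(void) {
    printf("big: %zu, small: %zu\n", sizeof(struct big), sizeof(struct small));
    return 0;
}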
Answer 6:
In the code below:
#include <stdio.h>
int main(void) {
    signed char c = -1;
    printf("%c %d\n", c, c);
    c = c >> 1;  /* right shift of a negative signed value */
    printf("%c %d\n", c, c);
    return 0;
}
The result will be different if you use an unsigned char instead of a signed char (at least, it is on my "AMD Athlon(tm) 64 Processor" with gcc under Cygwin). The reason is that when you right-shift an unsigned value, the vacated bits are filled with zeros, whereas when you right-shift a negative signed value, they are (on most implementations) filled with ones.
Whether this is useful, I cannot tell... but it is a situation where the sign of a char matters.
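For comparison, a sketch of the unsigned version (with 8-bit chars, storing -1 in an unsigned char yields 255):
#include <stdio.h>
int main(void) {
    unsigned char c = -1;  /* wraps to 255 with 8-bit chars */
    printf("%d\n", c);     /* 255 */
    c = c >> 1;            /* a zero bit comes in from the left */
    printf("%d\n", c);     /* 127, instead of -1 in the signed case */
    return 0;
}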