What are the possible situations where we would need a signed char? I guess the only use of this is in converting a char quantity to an integer.
The reason why compilers are allowed to make plain `char` signed is that back in the very early days of the C programming language, every integer type was signed. By the time unsigned types were added to the language, there must already have been too much existing code that did things like store -1 in a char variable as a sentinel value, so it was not feasible to change the compilers on existing systems to make `char` unsigned. There probably wasn't any great pressure for unsigned chars anyway; the early development of C happened on 7-bit ASCII.

As C was ported to platforms with 8-bit printable characters (such as IBM mainframes speaking EBCDIC, or the PC), compilers there made `char` unsigned, because having a printable character with a negative value would be an even larger portability nightmare than not being able to store -1 in a `char`. On the other hand, this led to the current situation where portable code cannot make any assumptions about the signedness of char.

In line with what you mentioned, `char` values are 8-bit integers. You wouldn't strictly need them to be negative for most practical purposes. Since they must be represented as bits and allow arithmetic operations to be performed on them, they behave like small integers (and are promoted to `int` in expressions). Of course, you also have `unsigned char`.

Consider right-shifting a char value: the result will be different with an `unsigned char` than with a `signed char` (at least, it is on my "AMD Athlon(tm) 64 Processor" with gcc under Cygwin). The reason is that when you right-shift an unsigned value, it is padded with zeros, and when you do the same with a signed value that is negative, it is padded with ones. Whether this is useful I cannot tell, but this is a situation where the sign of a char matters.
If I remember right, a "char" may be signed or unsigned (it depends on the compiler/implementation). If you need an unsigned char you should explicitly ask for it (with "unsigned char") and if you need a signed char you should explicitly ask for it (with "signed char").
A "char" is just a (typically 8-bit) integer. It has nothing to do with characters.
A character could be anything, depending on what you're doing. I prefer using "uint32_t" and Unicode (UTF-32). For crusty old/broken software that uses ASCII, a char is fine (regardless of whether "char" is signed or unsigned). For UTF-8 you'd probably want to use "unsigned char" or "uint8_t".
You might also be tempted to try "wchar_t" (and the "wchar.h" header), but there are many ways that can go wrong (do some research first if you're tempted).
`char` is an integer, usually with a width of 8 bits. But because its signedness is implementation-defined (i.e., it depends on the compiler), it is probably not a good idea to use it for arithmetic. Use `unsigned char` or `signed char` instead, or, if you want to enforce the width, use `uint8_t` and `int8_t` from `stdint.h`.

In any place where you want to represent a value in the range [-128, 127], a signed char fits. If you have a struct with many fields that will be instantiated many times, it is worth keeping the data types as small as possible.