Should a buffer of bytes be signed char or unsigned char or simply a char buffer? Any differences between C and C++?
Thanks.
If you fetch an element into a wider variable, it will of course be sign-extended (for signed char) or zero-extended (for unsigned char). For plain char, which of the two happens is implementation-defined.
You should use either char or unsigned char, but never signed char. The standard (C++03, 3.9/2) guarantees that the bytes of a POD object can be copied into an array of char or unsigned char and back without loss; signed char is conspicuously absent from that guarantee.
Do you really care? If you don't, just use the default (char) and don't clutter your code with unimportant matter. Otherwise, future maintainers will be left wondering why you used signed (or unsigned). Make their life simpler.
For maximum portability, always use unsigned char. There are a couple of instances where this comes into play. Serialized data shared across systems with different endianness immediately comes to mind. Shifting or bit-masking the values is another.
The choice between int8_t and uint8_t is similar to deciding whether to compare a pointer against NULL or against 0.
From a functionality point of view, comparing to NULL is the same as comparing to 0, because NULL is #defined as 0 (or (void *)0).
But personally, from a coding-style point of view, I choose to compare my pointers to NULL, because NULL connotes to the person maintaining the code that you are checking for a bad pointer, whereas a comparison to 0 connotes that you are checking for a specific value.
For the above reason, I would use uint8_t.
Should and should ... I tend to prefer unsigned, since it feels more "raw", less inviting to say "hey, that's just a bunch of small `int`s", if I want to emphasize the binary-ness of the data. I don't think I've ever used an explicit `signed char` to represent a buffer of bytes.

Of course, a third option is to represent the buffer as `void *` as much as possible. Many common I/O functions work with `void *`, so sometimes the decision of what integer type to use can be fully encapsulated, which is nice.