Question:
I did some googling and couldn't find any good article on this question. What should I watch out for when implementing an app that I want to be endian-agnostic?
Answer 1:
This might be a good article for you to read: The byte order fallacy
The byte order of the computer doesn't matter much at all except to compiler writers and the like, who fuss over allocation of bytes of memory mapped to register pieces. Chances are you're not a compiler writer, so the computer's byte order shouldn't matter to you one bit.
Notice the phrase "computer's byte order". What does matter is the byte order of a peripheral or encoded data stream, but--and this is the key point--the byte order of the computer doing the processing is irrelevant to the processing of the data itself. If the data stream encodes values with byte order B, then the algorithm to decode the value on a computer with byte order C should be about B, not about the relationship between B and C.
Answer 2:
The only time you have to care about endianness is when you're transferring endian-sensitive binary data (that is, not text) between systems that might not have the same endianness. The normal solution is to use "network byte order" (AKA big-endian) to transfer data, and then swizzle the bytes if necessary on the other end.
To convert from host to network byte order, use htons(3) and htonl(3). To convert back, use ntohs(3) and ntohl(3). Check out the man pages for everything you need to know. For 64-bit data, this question and answer will be helpful.
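For illustration, a minimal sketch of sending and receiving a 32-bit value in network byte order (assuming a connected socket descriptor sockfd; error handling is omitted):

#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <unistd.h>      /* read, write */

/* Sender: convert the host-order value to network byte order before writing. */
void send_u32(int sockfd, uint32_t value)
{
    uint32_t wire = htonl(value);       /* host -> network (big endian) */
    write(sockfd, &wire, sizeof wire);  /* error handling omitted */
}

/* Receiver: read the raw bytes, then convert back to host byte order. */
uint32_t recv_u32(int sockfd)
{
    uint32_t wire = 0;
    read(sockfd, &wire, sizeof wire);   /* error handling omitted */
    return ntohl(wire);                 /* network -> host */
}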
Answer 3:
What should I watch out for when implementing an app that I want to be endian-agnostic?
You first have to recognize when endian becomes an issue. And it mostly becomes an issue when you have to read or write data from somewhere external, be it reading data from a file or doing network communication between computers.
In such cases, endianness matters for integers bigger than a byte, because different platforms represent them differently in memory. This means that whenever you read or write external data, you need to do more than just dump the memory of your program or read the data directly into your own variables.
e.g. if you have this snippet of code:
unsigned int var = ...;
write(fd, &var, sizeof var);
You're directly writing out the memory content of var, which means the data gets presented to wherever it goes exactly as it is represented in your own computer's memory.
If you write this data to a file, the file content will differ depending on whether you run the program on a big endian or a little endian machine. So that code is not endian-agnostic, and you'd want to avoid doing things like this.
Instead focus on the data format. When reading/writing data, always decide the data format first, and then write the code to handle it. This might already have been decided for you if you need to read some existing well defined file format or implement an existing network protocol.
Once you know the data format, instead of e.g. dumping out an int variable directly, your code does this:
uint32_t i = ...;
uint8_t buf[4];
buf[0] = (i&0xff000000) >> 24;
buf[1] = (i&0x00ff0000) >> 16;
buf[2] = (i&0x0000ff00) >> 8;
buf[3] = (i&0x000000ff);
write(fd, buf, sizeof buf);
We've now picked the most significant byte and placed it as the first byte in the buffer, with the least significant byte at the end. That integer is represented in big endian format in buf, regardless of the endianness of the host - so this code is endian-agnostic.
The consumer of this data must know that the data is represented in a big endian format. And regardless of the host the program runs on, this code would read that data just fine:
uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = (uint32_t)buf[0] << 24;
i |= (uint32_t)buf[1] << 16;
i |= (uint32_t)buf[2] << 8;
i |= (uint32_t)buf[3];
Conversely, if the data you need to read is known to be in little endian format, the endianness-agnostic code would just do
uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = (uint32_t)buf[3] << 24;
i |= (uint32_t)buf[2] << 16;
i |= (uint32_t)buf[1] << 8;
i |= (uint32_t)buf[0];
You can make some nice inline functions or macros to wrap and unwrap all the 2-, 4-, and 8-byte integer types you need. If you use those, and care about the data format rather than the endianness of the processor you run on, your code will not depend on the endianness it's running on.
This is more code than many other solutions, but I've yet to write a program where this extra work has had any meaningful impact on performance, even when shuffling 1 Gbps+ of data around.
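For example, a minimal sketch of such helpers for 32-bit big endian values (the names pack_be32/unpack_be32 are just illustrative, not from any standard library):

#include <stdint.h>

/* Store a 32-bit value into a buffer in big endian order,
   regardless of the host's byte order. */
static inline void pack_be32(uint8_t *buf, uint32_t i)
{
    buf[0] = (uint8_t)(i >> 24);
    buf[1] = (uint8_t)(i >> 16);
    buf[2] = (uint8_t)(i >> 8);
    buf[3] = (uint8_t)(i);
}

/* Read a 32-bit big endian value out of a buffer,
   regardless of the host's byte order. */
static inline uint32_t unpack_be32(const uint8_t *buf)
{
    return ((uint32_t)buf[0] << 24) |
           ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |
            (uint32_t)buf[3];
}

Little endian and 16/64-bit variants follow the same pattern.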
This byte-by-byte approach also avoids the misaligned memory access you can easily get with an approach like
uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = ntohl(*(uint32_t *)buf);
which at best incurs a performance hit (insignificant on some platforms, many orders of magnitude on others), and at worst a crash on platforms that can't do unaligned access to integers.
Answer 4:
Several answers have covered file I/O, which is certainly the most common endianness concern. I'll touch on one not yet mentioned: unions.
The following union is a common tool in SIMD/SSE programming, and is not endian-friendly:
union uint128_t {
    __m128i  dq;
    uint64_t dd[2];
    uint32_t dw[4];
    uint16_t dh[8];
    uint8_t  db[16];
};
Any code accessing the dd/dw/dh/db forms will be doing so in endian-specific fashion. On 32-bit CPUs it is also somewhat common to see simpler unions that allow more easily breaking 64-bit arithmetic into 32-bit portions:
union u64_parts {
    uint64_t dd;
    uint32_t dw[2];
};
Since in this use case it is rare (if ever) that you want to iterate over each element of the union, I prefer to write such unions like this:
union u64_parts {
    uint64_t dd;
    struct {
#ifdef BIG_ENDIAN
        uint32_t dw2, dw1;
#else
        uint32_t dw1, dw2;
#endif
    };
};
The result is implicit endian-swapping for any code that accesses dw1/dw2 directly. The same design approach can be used for the 128-bit SIMD datatype above as well, though it ends up being considerably more verbose.
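As a rough usage sketch (assuming the union definition above, C11 anonymous structs, and that BIG_ENDIAN is defined appropriately for the target), dw1 always holds the low 32 bits and dw2 the high 32 bits, on either kind of machine:

union u64_parts v;
v.dd = 0x0123456789abcdefULL;

uint32_t lo = v.dw1;   /* 0x89abcdef on both big and little endian hosts */
uint32_t hi = v.dw2;   /* 0x01234567 on both big and little endian hosts */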
Disclaimer: Union use is often frowned upon because of the loose standards definitions regarding structure padding and alignment. I find unions very useful and have used them extensively, and I haven't run into any cross-compatibility issues in a very long time (15+ yrs). Union padding/alignment will behave in an expected and consistent fashion for any current compiler targeting x86, ARM, or PowerPC.
Answer 5:
Inside your code you can pretty much ignore it - everything cancels out.
When you read/write data to disk or the network, use htons and friends (htonl, ntohs, ntohl).
Answer 6:
This is clearly a rather controversial subject.
The general approach is to design your application such that you only care about byte order in one small portion of the code: the input and output sections.
Everywhere else, you should use the native byte order.
Note that although MOST machines store integer and floating point data with the same byte order, that is not guaranteed, so to be completely sure that things work right you need to know not only the size of the data, but also whether it is integer or floating point.
The other alternative is to only consume and produce data in text format. This is probably almost as easy to implement, and unless you have a really high rate of data in/out of the application with very little processing, it probably makes very little difference in performance. It also has the benefit (to some) that you can read the input and output data in a text editor, rather than trying to decode what the value of bytes 51213498-51213501 in the output actually should be when you've got something wrong in the code.
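A minimal sketch of the text approach, assuming a plain decimal encoding with one value per line (the function names are just illustrative):

#include <inttypes.h>   /* PRIu32, SCNu32 */
#include <stdint.h>
#include <stdio.h>

/* Writing: the value is encoded as decimal text, so byte order never enters the picture. */
void write_value(FILE *out, uint32_t value)
{
    fprintf(out, "%" PRIu32 "\n", value);
}

/* Reading: parse the text back; works identically on any host. */
int read_value(FILE *in, uint32_t *value)
{
    return fscanf(in, "%" SCNu32, value) == 1;
}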
Answer 7:
If you need to reinterpret between a 2-, 4-, or 8-byte integer type and a byte-indexed array (or vice versa), then you need to know the endianness.
This comes up frequently in cryptographic algorithm implementations, serialization (network protocols, filesystem or database backends), and of course operating system kernels and drivers.
Endianness is usually detected with a predefined macro that has ENDIAN somewhere in its name.
For example:
uint32_t x = ...;
uint8_t* p = (uint8_t*) &x;
p points to the high byte on BE machines, and the low byte on LE machines.
Using the macros you can write:
uint32_t x = ...;
#ifdef LITTLE_ENDIAN
uint8_t* p = (uint8_t*) &x + 3;
#else // BIG_ENDIAN
uint8_t* p = (uint8_t*) &x;
#endif
#endif
to always get the high byte, for example.
There are ways to define the macro yourself if your toolchain doesn't provide it: C Macro definition to determine big endian or little endian machine?
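If you'd rather not rely on predefined macros at all, a runtime check is a common fallback (a minimal sketch; the function name is just illustrative):

#include <stdint.h>

/* Returns 1 on a little endian host, 0 on a big endian host. */
static int is_little_endian(void)
{
    uint16_t probe = 1;
    return *(uint8_t *)&probe == 1;   /* the first byte holds the LSB on LE hosts */
}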