I'm currently trying to write C source code that properly handles I/O regardless of the endianness of the target system.
I've selected little endian as my I/O convention, which means that, for big-endian CPUs, I need to convert data while writing or reading.
Conversion is not the issue. The problem I face is detecting endianness, preferably at compile time (since CPUs do not change endianness in the middle of execution...).
Up to now, I've been using this:
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
...
#else
...
#endif
It's documented as a GCC pre-defined macro, and Visual Studio seems to understand it too.
However, I've received reports that the check fails for some big-endian systems (PowerPC).
So, I'm looking for a foolproof solution which ensures that endianness is correctly detected, whatever the compiler and the target system. Well, most of them at least...
[Edit] : Most of the solutions proposed rely on "run-time tests". These tests may sometimes be properly evaluated by compilers during compilation, and therefore cost no real runtime performance.
However, branching with some kind of if (0) { ... } else { ... } is not enough. In the current code implementation, variable and function declarations depend on big-endian detection, and these cannot be switched with an if statement.
Well, obviously, there is a fallback plan, which is to rewrite the code...
I would prefer to avoid that, but, well, it looks like a diminishing hope...
[Edit 2] : I have tested "run-time tests" by deeply modifying the code. Although they do their job correctly, these tests also impact performance.
I was expecting that, since the tests have predictable outcomes, the compiler could eliminate the dead branches. But unfortunately, it doesn't work all the time. MSVC is a good compiler and successfully eliminates the dead branches, but GCC has mixed results, depending on version and kind of test, with a greater impact on 64-bit than on 32-bit builds.
It's strange. And it also means that the run-time tests cannot be guaranteed to be optimized away by the compiler.
[Edit 3] : These days, I'm using a compile-time constant union, expecting the compiler to resolve it into a clear yes/no signal. And it works pretty well: https://godbolt.org/g/DAafKo
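For reference, a minimal sketch of that kind of constant-union test (the exact code sits behind the godbolt link; the function name here is illustrative):

#include <stdint.h>

/* The union makes the byte layout of the constant 1 visible: its first
 * byte is 1 on little-endian targets and 0 on big-endian ones. Optimizing
 * compilers generally fold this function into a constant. */
static int is_little_endian(void)
{
    const union { uint32_t u; uint8_t c[4]; } one = { 1 };
    return one.c[0];
}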
Notwithstanding compiler-defined macros, I don't think there's a compile-time way to detect this, since determining the endianness of an architecture involves analyzing the manner in which it stores data in memory.
Here's a function which does just that:
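(illustrative sketch; the union trick below is one common way to write such a check)

#include <stdint.h>

/* Store a known 32-bit value and look at which byte lands first in
 * memory: the most significant byte coming first means big endian. */
static int is_big_endian(void)
{
    union {
        uint32_t i;
        unsigned char c[4];
    } test = { 0x01020304 };

    return test.c[0] == 0x01;
}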
You can't detect it at compile time in a way that's portable across all compilers. Maybe you can change the code to do the check at run time - that is achievable.
As stated earlier, the only "real" way to detect Big Endian is to use runtime tests.
However, sometimes, a macro might be preferred.
Unfortunately, I've not found a single "test" to detect this situation, rather a collection of them.
For example, GCC recommends:
__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
However, this only works with recent versions, and earlier versions (and other compilers) will evaluate this test as "true", since undefined macros expand to 0 in #if expressions, and 0 == 0. So you need the more complete version:
defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
OK, now this works for newest GCC, but what about other compilers?
You may try __BIG_ENDIAN__, __BIG_ENDIAN, or _BIG_ENDIAN, which are often defined on big-endian compilers. This will improve detection. But if you specifically target PowerPC platforms, you can add a few more tests to improve detection even further. Try _ARCH_PPC, __PPC__, __PPC, PPC, __powerpc__, __powerpc, or even powerpc.
Bind all these defines together, and you have a pretty fair chance of detecting big-endian systems, and PowerPC in particular, whatever the compiler and its version; a sketch of such a combination follows below.
So, to summarize, there is no "standard pre-defined macro" which is guaranteed to detect big-endian CPUs on all platforms and compilers, but there are many such pre-defined macros which, collectively, give a high probability of correctly detecting big endian under most circumstances.
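A sketch of one way to bind these macros together (the exact list and the macro name MY_TARGET_IS_BIG_ENDIAN are assumptions; extend it for your own targets):

#if (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)) \
 || defined(__BIG_ENDIAN__) || defined(__BIG_ENDIAN) || defined(_BIG_ENDIAN) \
 || defined(_ARCH_PPC) || defined(__PPC__) || defined(__PPC) || defined(PPC) \
 || defined(__powerpc__) || defined(__powerpc) || defined(powerpc)
#  define MY_TARGET_IS_BIG_ENDIAN 1
#else
#  define MY_TARGET_IS_BIG_ENDIAN 0
#endif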
As others have pointed out, there isn't a portable way to check for endianness at compile time. However, one option would be to use the autoconf tool as part of your build script to detect whether the system is big-endian or little-endian, using the AC_C_BIGENDIAN macro in your configure script, which records this information (it defines WORDS_BIGENDIAN on big-endian targets). In a sense, this builds a program that detects at runtime whether the system is big-endian or little-endian, then has that program output information that can then be used statically by the main source code.
Hope this helps!
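A sketch of the C side, assuming config.h was generated by a configure script that calls AC_C_BIGENDIAN:

#include "config.h"   /* generated by configure; defines WORDS_BIGENDIAN on big-endian targets */

#ifdef WORDS_BIGENDIAN
#  define IO_NEEDS_BYTESWAP 1   /* illustrative name: convert when doing little-endian I/O */
#else
#  define IO_NEEDS_BYTESWAP 0
#endif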
Socket's ntohl function can be used for this purpose, as in the sketch below: since ntohl() converts network byte order (big endian) to host byte order, it leaves values unchanged only on big-endian hosts.
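A minimal sketch, assuming a POSIX system where ntohl() comes from <arpa/inet.h> (on Windows it is declared in <winsock2.h>):

#include <arpa/inet.h>

/* ntohl() is the identity function on big-endian hosts, so the
 * converted value equals the original only there. */
static int host_is_big_endian(void)
{
    return ntohl(1) == 1;
}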
At compile time in C you can't do much more than trusting preprocessor #defines, and there are no standard solutions, because the C standard isn't concerned with endianness.
Still, you could add an assertion that runs at the start of the program to make sure that the assumption made when compiling was true:
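A sketch of such an assertion, where the COMPILED_FOR_* macros are the ones described in the note just below:

#include <assert.h>
#include <stdint.h>

/* Call once at program start: verify that the endianness assumed at
 * compile time matches what the hardware actually does. */
static void check_endianness_assumption(void)
{
    const uint32_t probe = 1;
    const int runtime_is_little_endian = (*(const unsigned char *)&probe == 1);

#if defined(COMPILED_FOR_LITTLE_ENDIAN)
    assert(runtime_is_little_endian);
#elif defined(COMPILED_FOR_BIG_ENDIAN)
    assert(!runtime_is_little_endian);
#else
#   error "no endianness assumption selected at compile time"
#endif
    (void)runtime_is_little_endian;   /* silence unused warning when NDEBUG disables assert */
}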
(where COMPILED_FOR_BIG_ENDIAN and COMPILED_FOR_LITTLE_ENDIAN are macros #defined previously according to your preprocessor endianness checks)