Is there a programmatic way to detect whether or not you are on a big-endian or little-endian architecture? I need to be able to write code that will execute on an Intel or PPC system and use exactly the same code (i.e. no conditional compilation).
How about this?
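The code itself didn't survive here, but the classic trick this answer refers to is to store 1 in an int and inspect its first byte through a char pointer. A minimal sketch of that approach:

```c
#include <stdio.h>

int main(void)
{
    int n = 1;
    // On a little-endian machine the least significant byte is stored
    // first, so the first byte of n is 1; on big-endian it is 0.
    if (*(char *)&n == 1)
        printf("little endian\n");
    else
        printf("big endian\n");
    return 0;
}
```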
The way C compilers work (at least every one I know of), endianness has to be decided at compile time. Even for bi-endian processors (like ARM or MIPS) you have to choose an endianness at compile time. Furthermore, the endianness is defined in all common executable file formats (such as ELF). Although it is possible to craft a binary blob of bi-endian code (for some ARM server exploit, maybe?), it would probably have to be done in assembly.
See Endianness - C-Level Code illustration.
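Since endianness is fixed at compile time anyway, one consequence is that a compiler can often just tell you. A minimal sketch, assuming a GCC- or Clang-compatible compiler that defines the __BYTE_ORDER__ family of predefined macros (not portable to every toolchain, e.g. MSVC does not define them):

```c
// Compile-time endianness check via GCC/Clang predefined macros.
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define IS_LITTLE_ENDIAN 1
#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define IS_LITTLE_ENDIAN 0
#else
#error "Cannot determine byte order at compile time"
#endif
```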
Here's another C version. It defines a macro called wicked_cast() for inline type punning via C99 union literals and the non-standard __typeof__ operator. If integers are single-byte values, endianness makes no sense and a compile-time error will be generated.
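The macro body wasn't preserved in this copy; the following is a reconstruction consistent with that description (the wicked_cast name comes from the answer, the surrounding details are mine):

```c
#include <limits.h>
#include <stdio.h>

// If an int is a single byte, endianness is meaningless:
// force a compile-time error, as described above.
#if UCHAR_MAX == UINT_MAX
#error "sizeof(int) == 1: endianness makes no sense here"
#endif

// Type-pun VALUE to TYPE through a C99 union compound literal.
// __typeof__ is a GCC/Clang extension, hence "non-standard".
#define wicked_cast(TYPE, VALUE) \
    (((union { __typeof__(VALUE) src; TYPE dest; }){ .src = (VALUE) }).dest)

int main(void)
{
    // The first byte of the unsigned int 1u is 1 only on a
    // little-endian host.
    if (wicked_cast(unsigned char, 1u))
        puts("little endian");
    else
        puts("big endian");
    return 0;
}
```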
You can do it by setting an int and masking off bits, but probably the easiest way is to use the built-in network byte-order conversion functions (since network byte order is always big-endian).
Bit fiddling could be faster, but this way is simple, straightforward, and pretty much impossible to mess up.
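For example, a minimal sketch using htonl(): since network byte order is big-endian, the conversion is a no-op only on a big-endian host (the test value 47 is arbitrary):

```c
#include <stdio.h>
#include <arpa/inet.h>  // htonl(); on Windows, include <winsock2.h> instead

int main(void)
{
    // htonl() converts host byte order to network (big-endian) order.
    // If the value comes back unchanged, the host is already big-endian.
    if (htonl(47) == 47)
        puts("big endian");
    else
        puts("little endian");
    return 0;
}
```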