I've searched for macros to determine endianness on a machine and didn't find any standard preprocessor macros for this, but a lot of solutions do the detection at runtime. Why should I detect endianness at runtime?
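(For the record, the closest to "standard" I could find: GCC and Clang predefine __BYTE_ORDER__, and C++20 finally added std::endian; a minimal compile-time sketch, assuming one of those compilers:)

// GCC/Clang extension, not ISO C++: __BYTE_ORDER__ and the __ORDER_*__ values
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    // little-endian target
#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    // big-endian target
#endif

// since C++20 the standard itself provides it:
#include <bit>
constexpr bool is_little = (std::endian::native == std::endian::little);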
If I do something like this:
#include <cstdint>

// note: these macros are assumed to come from the build system;
// beware that glibc's <endian.h> defines BOTH LITTLE_ENDIAN and BIG_ENDIAN
// as values, so a plain #ifdef on those names can misfire
#if defined(LITTLE_ENDIAN)
inline int swap(int x) {
    // reverse the four bytes of a 32-bit value
    uint32_t u = static_cast<uint32_t>(x);
    u = (u >> 24) | ((u >> 8) & 0x0000FF00u)
      | ((u << 8) & 0x00FF0000u) | (u << 24);
    return static_cast<int>(u);
}
#elif defined(BIG_ENDIAN)
inline int swap(int x) { return x; }  // already big-endian, nothing to do
#else
#error "some blabla"
#endif
int main() {
    int x = 0x1234;
    int y = swap(x);
    return 0;
}
the compiler only ever sees one of the two functions and generates exactly that one; the no-op version can even be inlined away entirely.
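(This compile-time dispatch is essentially what htonl()/ntohl() from <arpa/inet.h> already do on POSIX systems: a byte swap on little-endian hosts, a no-op on big-endian ones. A quick sketch:)

#include <arpa/inet.h>  // POSIX, not ISO C++
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t host = 0x1234;
    uint32_t big  = htonl(host);  // host order -> network (big-endian) order
    // swaps on LE hosts, no-op on BE hosts; chosen when libc was built
    std::printf("0x%08x\n", big);
    return 0;
}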
But if I do it at runtime instead (see predef.endian):
#include <stdint.h>
#include <string.h>

enum {
    ENDIAN_UNKNOWN,
    ENDIAN_BIG,
    ENDIAN_LITTLE,
    ENDIAN_BIG_WORD,   /* middle-endian, Honeywell 316 style */
    ENDIAN_LITTLE_WORD /* middle-endian, PDP-11 style */
};
int endianness(void)
{
    uint8_t buffer[4] = { 0x00, 0x01, 0x02, 0x03 };
    uint32_t value;
    /* memcpy avoids the strict-aliasing trap of *(uint32_t *)buffer */
    memcpy(&value, buffer, sizeof value);
    switch (value) {
    case 0x00010203: return ENDIAN_BIG;
    case 0x03020100: return ENDIAN_LITTLE;
    case 0x02030001: return ENDIAN_BIG_WORD;
    case 0x01000302: return ENDIAN_LITTLE_WORD;
    default:         return ENDIAN_UNKNOWN;
    }
}
int swap(int x) {
    switch (endianness()) {
    case ENDIAN_BIG:
        return x;  // already big-endian, nothing to do
    case ENDIAN_LITTLE: {
        // do the swap
        uint32_t u = (uint32_t)x;
        u = (u >> 24) | ((u >> 8) & 0x0000FF00u)
          | ((u << 8) & 0x00FF0000u) | (u << 24);
        return (int)u;
    }
    default:
        // error blabla
        return x;
    }
}
the compiler generates code for the detection, and it runs every single time swap() is called.
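(At best I could cache the result so the detection itself runs only once; a sketch with hypothetical helpers bswap32/swap_cached, but the per-call branch still remains:)

// hypothetical helper: unconditionally reverse the bytes
static uint32_t bswap32(uint32_t u) {
    return (u >> 24) | ((u >> 8) & 0x0000FF00u)
         | ((u << 8) & 0x00FF0000u) | (u << 24);
}

int swap_cached(int x) {
    static int cached = ENDIAN_UNKNOWN;  // detect once, remember the answer
    if (cached == ENDIAN_UNKNOWN)
        cached = endianness();
    // the branch is still evaluated on every call
    return (cached == ENDIAN_BIG) ? x : (int)bswap32((uint32_t)x);
}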
I don't get it: why should I do this?
If my code is compiled for a little-endian machine, the whole binary is generated for little endian. Even if I try to run it on hardware that can also run big-endian (a bi-endian machine like ARM, see wiki: bi-endian), the code was still compiled for little endian, so every other value, e.g. each int, is laid out as LE as well.
// compiled on a little-endian machine
uint32_t x = 0x1234; // constant literal, stored in memory as 34 12 00 00
// read back as a big-endian word, those bytes would mean 0x34120000
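(To see what actually lives in memory, a small dump sketch; on an LE machine it prints 34 12 00 00, on a BE machine 00 00 12 34:)

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    uint32_t x = 0x1234;
    uint8_t bytes[4];
    std::memcpy(bytes, &x, sizeof x);  // inspect the storage layout
    std::printf("%02x %02x %02x %02x\n",
                bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}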