This arose from a question earlier today about bignum libraries and gcc-specific hacks to the C language. Specifically, these two declarations were used:
typedef unsigned int dword_t __attribute__((mode(DI)));
on 32-bit systems, and
typedef unsigned int dword_t __attribute__((mode(TI)));
on 64-bit systems.
I assume that, since this is an extension to the C language, there is no way to achieve whatever it achieves in the current (C99) standard.
So my questions are simple: is that assumption correct? And what do these statements do to the underlying memory? I think the result is that a dword is 2*sizeof(uint32_t) on 32-bit systems and 2*sizeof(uint64_t) on 64-bit systems; am I correct?
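To make the question concrete, here is a small test along the lines of what I have in mind (my own sketch; dword_t is the declaration above, and I am assuming a GCC target where DImode is 8 bytes):

#include <stdio.h>
#include <stdint.h>

/* The 32-bit-system declaration from the question:
   mode(DI) asks GCC for an integer of "double integer" width,
   regardless of what plain 'unsigned int' is on this target. */
typedef unsigned int dword_t __attribute__((mode(DI)));

int main(void)
{
    /* On a typical target I expect this to print 8 and 4,
       i.e. sizeof(dword_t) == 2 * sizeof(uint32_t). */
    printf("sizeof(dword_t)  = %zu\n", sizeof(dword_t));
    printf("sizeof(uint32_t) = %zu\n", sizeof(uint32_t));
    return 0;
}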
@haelix I just read this question and also tried to understand this. By my reading, you can find the definitions in gcc/gcc/machmode.def in the GCC source tree; the entry for 'SD' and the description of 'DECIMAL_FLOAT_MODE' are shown in the excerpt below.
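For reference, the relevant machmode.def lines look roughly like this (paraphrased from memory of a recent GCC tree, so exact wording varies by version):

/* DECIMAL_FLOAT_MODE (MODE, BYTESIZE, FORMAT) declares MODE to be a
   decimal floating-point mode that is BYTESIZE bytes wide.  The 'SD'
   ("single decimal") entry is roughly: */
DECIMAL_FLOAT_MODE (SD, 4, decimal_single_format);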
These allow you to specify an explicit size for a type, without depending on compiler- or machine-dependent semantics such as how wide 'long' or 'int' happens to be.
They are described fairly well on this page.
I quote from that page:
QI: An integer that is as wide as the smallest addressable unit, usually 8 bits.
HI: An integer, twice as wide as a QI mode integer, usually 16 bits.
SI: An integer, four times as wide as a QI mode integer, usually 32 bits.
DI: An integer, eight times as wide as a QI mode integer, usually 64 bits.
SF: A floating point value, as wide as a SI mode integer, usually 32 bits.
DF: A floating point value, as wide as a DI mode integer, usually 64 bits.

So DI is essentially sizeof(char) * 8, i.e. 64 bits on a machine with 8-bit chars. Further explanation, including TI mode, can be found here (possibly better than the first link, but both are provided for reference). So TI is essentially sizeof(char) * 16 (128 bits).
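A quick way to see this on a concrete machine is the sketch below (my own example, not from the library in question; it assumes a 64-bit GCC target, since TImode is typically not available on 32-bit targets):

#include <stdio.h>

/* Each typedef asks GCC for an integer of a specific machine mode,
   independent of what 'unsigned int' normally means on this target. */
typedef unsigned int qi_t __attribute__((mode(QI)));  /* 1  * sizeof(char) */
typedef unsigned int hi_t __attribute__((mode(HI)));  /* 2  * sizeof(char) */
typedef unsigned int si_t __attribute__((mode(SI)));  /* 4  * sizeof(char) */
typedef unsigned int di_t __attribute__((mode(DI)));  /* 8  * sizeof(char) */
typedef unsigned int ti_t __attribute__((mode(TI)));  /* 16 * sizeof(char), 64-bit targets only */

int main(void)
{
    /* Expected output on a typical x86-64 box: 1 2 4 8 16 */
    printf("%zu %zu %zu %zu %zu\n",
           sizeof(qi_t), sizeof(hi_t), sizeof(si_t),
           sizeof(di_t), sizeof(ti_t));
    return 0;
}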