When I use std::bitset<N>::bitset( unsigned long long ), this constructs a bitset, and when I access it via operator[], the bits seem to be ordered in little-endian fashion. Example:
std::bitset<4> b(3ULL);
std::cout << b[0] << b[1] << b[2] << b[3];
prints 1100
instead of 0011
i.e. the least significant bit (LSB) ends up at the low (lower) address, index 0.
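For comparison, a minimal check (my addition, using only the standard to_string() member): the same bitset prints in the conventional MSB-first order there, so the reversal is only in how operator[] indexes the bits.

#include <bitset>
#include <iostream>

int main() {
    std::bitset<4> b(3ULL);
    std::cout << b.to_string() << '\n';                 // prints 0011 (MSB first)
    std::cout << b[0] << b[1] << b[2] << b[3] << '\n';  // prints 1100 (index 0 = LSB)
}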
Looking up the standard, it says
"initializing the first M bit positions to the corresponding bit values in val"
Programmers naturally think of binary digits from LSB to MSB (right to left), so the first M bit positions are understandably LSB → MSB, and bit 0 would be at b[0].
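Indeed, a quick self-contained check (my own sketch, not part of the standard text) confirms that reading: each b[i] equals bit i of val, counting from the LSB.

#include <bitset>
#include <cassert>
#include <cstddef>

int main() {
    unsigned long long val = 3ULL;
    std::bitset<4> b(val);
    for (std::size_t i = 0; i < b.size(); ++i)
        assert(b[i] == ((val >> i) & 1));   // bit i of val lands at index i
}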
However, under shifting, the definition goes
"The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled."
Here one has to interpret the bits in E1 as going from MSB → LSB and then left-shift E2 times. Had it been written from LSB → MSB, then only right-shifting E2 times would give the same result.
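To make the contrast concrete, here is a small check (my own sketch, using plain unsigned values) that std::bitset's operator<< follows the same convention as the built-in shift: bits move towards higher indices, and the numeric value is multiplied by 2^E2 either way.

#include <bitset>
#include <cassert>

int main() {
    unsigned e1 = 3;                  // binary 0011
    std::bitset<8> b(e1);

    // Built-in shift and bitset shift agree on the resulting value:
    assert((b << 2).to_ulong() == (e1 << 2));   // both are 12, binary 1100

    // In terms of bit positions, the bit at index i moves to index i + 2,
    // i.e. towards the more significant end:
    assert((b << 2)[2] == b[0]);
    assert((b << 2)[3] == b[1]);
}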
I'm surprised: everywhere else in C++, the language seems to follow the natural (English, left-to-right) writing order when doing bitwise operations like shifting. Why be different here?
This is consistent with the way bits are usually numbered - bit 0 represents 2^0, bit 1 represents 2^1, etc. It has nothing to do with the endianness of the architecture, which concerns byte ordering, not bit ordering.
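A small illustration of that numbering (a sketch, not tied to any particular machine): setting bit 3 yields the value 2^3 = 8 on big-endian and little-endian hardware alike.

#include <bitset>
#include <cassert>

int main() {
    std::bitset<8> b;
    b.set(3);                   // bit 3 carries the value 2^3
    assert(b.to_ulong() == 8);  // same result regardless of byte order
}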
There is no notion of endian-ness as far as the standard is concerned. When it comes to std::bitset, [template.bitset]/3 defines bit position:
"When converting between an object of class bitset<N> and a value of some integral type, bit position pos corresponds to the bit value 1 << pos. The integral value corresponding to two or more bits is the sum of their bit values."
Using this definition of bit position in your standard quote, a val with binary representation 11 leads to a bitset<N> b with b[0] = 1, b[1] = 1 and the remaining bits set to 0.
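A minimal check of that reading (my own sketch): with val = 3 the bits at positions 0 and 1 are set, and the integral value is recovered as the sum of the bit values 1 << pos, exactly as [template.bitset]/3 describes.

#include <bitset>
#include <cassert>
#include <cstddef>

int main() {
    std::bitset<4> b(3ULL);
    assert(b[0] == 1 && b[1] == 1 && b[2] == 0 && b[3] == 0);

    // The integral value is the sum of the bit values 1 << pos:
    unsigned long sum = 0;
    for (std::size_t pos = 0; pos < b.size(); ++pos)
        if (b[pos])
            sum += 1UL << pos;
    assert(sum == b.to_ulong());    // 3
}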