Why is this program showing the following output?
#include <bitset>
#include <iostream>
...
{
std::bitset<8> b1(01100100); std::cout << b1 << std::endl;
std::bitset<8> b2(11111111); std::cout << b2 << std::endl; // see, this variable has been assigned
                                                           // the value 11111111, whereas during
                                                           // execution it takes the value 11000111
std::cout << "b1 & b2: " << (b1 & b2) << '\n';
std::cout << "b1 | b2: " << (b1 | b2) << '\n';
std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n';
}
This is the OUTPUT:
01000000
11000111
b1 & b2: 01000000
b1 | b2: 11000111
b1 ^ b2: 10000111
At first I thought there was something wrong with the header file (I was using MinGW), so I checked with MSVC, but it showed the same thing. Please help.
As per NPE's answer, you are constructing the bitset with an unsigned long, not with bits as you were expecting. An alternative way to construct it, which lets you specify the bits directly, is the std::string constructor, as follows:
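For reference, here is a small, self-contained version of the question's program using the std::string constructor (the original answer linked to its output rather than inlining the code, so this exact listing is a reconstruction):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    // Construct from strings of '0'/'1' characters, so the digits really are bits.
    std::bitset<8> b1(std::string("01100100"));
    std::bitset<8> b2(std::string("11111111"));

    std::cout << b1 << '\n';
    std::cout << b2 << '\n';
    std::cout << "b1 & b2: " << (b1 & b2) << '\n';
    std::cout << "b1 | b2: " << (b1 | b2) << '\n';
    std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n';
}

With this, b1 prints as 01100100, b2 as 11111111, b1 & b2 as 01100100, b1 | b2 as 11111111, and b1 ^ b2 as 10011011.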
Despite the appearance, the 11111111 is decimal. The binary representation of 11111111 (base 10) is 101010011000101011000111 (base 2). Upon construction, std::bitset<8> takes the eight least significant bits of that: 11000111.

The first case is similar, except that 01100100 is octal (due to the leading zero). The same number expressed in binary is 1001000000001000000, whose low eight bits are 01000000.

One way to represent a bitset with the binary value 11111111 is std::bitset<8> b1(0xff). Alternatively, you can construct a bitset from a binary string:
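The code block that originally followed is not preserved here; a minimal sketch of the two constructions just described (hexadecimal literal and binary string):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    std::bitset<8> a(0xff);                     // from an integer literal: all eight bits set
    std::bitset<8> b(std::string("11111111"));  // from a binary string: each character is one bit
    std::cout << a << '\n' << b << '\n';        // both print 11111111
}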