In C++, a bool is guaranteed to be 0 or 1 when converted to an integer. C++ (§4.5/4):
An rvalue of type bool can be converted to an rvalue of type int, with
false becoming zero and true becoming one.
Consider the following function and what g++ 5.2 generates with -O3:
int foo(bool a, bool b)
{
if(a&b) return 3;
else return 5;
}
0000000000000000 <_Z3foobb>:
0: 40 84 ff test %dil,%dil
3: 74 13 je 18 <_Z3foobb+0x18>
5: 40 84 f6 test %sil,%sil
8: b8 03 00 00 00 mov $0x3,%eax
d: 74 09 je 18 <_Z3foobb+0x18>
f: f3 c3 repz retq
11: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
18: b8 05 00 00 00 mov $0x5,%eax
1d: c3 retq
As seen above, it generates two test instructions, which indicates that it is still treating the if as if(a&&b) instead of as a bitwise AND.
Even if I first explicitly convert the two bools to two chars, it still generates the same output as above.
Since I know that the two operands a and b can only have 0/1 as values, is there some way to get gcc to generate just one test instruction? That is what it does if the function takes two ints instead of two bools.
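For reference, a sketch of the int-taking variant mentioned above (the name foo_int is just illustrative):
int foo_int(int a, int b)
{
    // per the observation above, gcc can evaluate a & b with a single test here
    if (a & b) return 3;
    else return 5;
}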
With &, some compilers already produce different asm without a jump; clang 3.6 (rc2), for example.
A hack in your case is to use *, which has the same truth table for true/false (Demo).
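A minimal sketch of that hack, applied to the foo from the question:
int foo(bool a, bool b)
{
    // a * b is 1 only when both promoted operands are 1, so the branch is taken in the same cases
    if (a * b) return 3;
    else return 5;
}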
It is doing a bitwise AND. That is the definition of what & does, and the compiler correctly implements the observable behaviour for bitwise AND.
The operands of binary & undergo the integer promotions, so a and b are converted to int. The definition of this conversion is:
true is converted to 1
false is converted to 0
Doing a bitwise-AND, here is an exhaustive list of cases:
1 & 1 == 1
1 & 0 == 0
0 & 1 == 0
0 & 0 == 0
As you can see, this is the same list of cases as for a boolean AND. Therefore you cannot distinguish the operation merely by looking at the assembly output.
To confirm this, here is an example showing that g++ generates the exact same assembly for if (a&b) as for if (a&&b).
Remember that C++ is defined in terms of an abstract machine, and the output of the compiler may be anything at all so long as it produces the same observable behaviour as the abstract machine would.
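The linked example is not reproduced here; a sketch of that comparison would be two functions like the following (names are illustrative), which per the above compile to the exact same assembly:
int with_bitwise(bool a, bool b)
{
    if (a & b) return 3;   // operands promoted to int, then bitwise AND
    else return 5;
}
int with_logical(bool a, bool b)
{
    if (a && b) return 3;  // short-circuiting logical AND
    else return 5;
}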
You seem to be objecting to the part of the code associated with the conversion of bool to int. Your compiler apparently has implemented bool such that any non-zero bit pattern is considered true.
So extra steps are necessary. Performing test %dil, %sil would give the wrong result if, for example, one of the bools had the bit pattern 000...0010 and the other had 000...0001.
As user657267 points out in the comments, the compiler has decided to test one argument for non-zero and then test the other argument for non-zero, and it is easy to show that this fulfils the requirements of the defined behaviour for & together with the integer promotions described above.
There's nothing you can do about this. You could submit a g++ patch so that it only ever generates the bit pattern 000...0001 for a true bool, and therefore could use test %dil, %sil here, but I doubt it would be accepted.
It would be possible to write code that reads the representations of a and b and performs & on the lowest byte. However, this would be dangerous; the compiler might have decided to use some other bit pattern than the one you are expecting, because it knows that those are acceptable bit patterns for true.
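A sketch of what such representation-reading code might look like, assuming sizeof(bool) == 1 and using std::memcpy; as the paragraph above says, this is dangerous and depends on the compiler's choice of bit patterns:
#include <cstring>

int foo_unsafe(bool a, bool b)
{
    unsigned char ra, rb;
    std::memcpy(&ra, &a, 1);   // read the object representation of each bool
    std::memcpy(&rb, &b, 1);
    if (ra & rb) return 3;     // wrong if the two true values use disjoint non-zero bit patterns
    else return 5;
}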
An alternative solution might be to change your function to accept unsigned int arguments instead of bool. Of course this would introduce the possibility of new bugs: e.g. if you called the function with 1ull << 40, that value would be truncated to 0 and therefore behave the same as passing in false. You can't eat your cake and have it; in this scenario you'd have to be very careful every time you call your function. (Maybe writing a wrapper class for int would help!)
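A sketch of that alternative, with the truncation caveat from above as a comment:
int foo(unsigned int a, unsigned int b)
{
    // caller must pass only 0 or 1; e.g. 1ull << 40 truncates to 0 here and acts like false
    if (a & b) return 3;
    else return 5;
}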
I feel like this may not quite be what you are looking for, but normally arithmetic ANDs can only be executed when the objects being tested are of type int, so a simple cast may fix that. I would assume, though, that a cast could also generate extra instructions that you don't want.
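For what it's worth, a sketch of the cast idea; note that the operands of & undergo integer promotion to int anyway, so this may not change the generated code:
int foo(bool a, bool b)
{
    if (static_cast<int>(a) & static_cast<int>(b)) return 3;
    else return 5;
}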