Question:
I'm designing a language, and trying to decide whether true should be 0x01 or 0xFF. Obviously, all non-zero values will be converted to true, but I'm trying to decide on the exact internal representation.
What are the pros and cons for each choice?
Answer 1:
0 is false because the processor has a flag that is set when a register is set to zero.
No other flag is set for any particular value (0x01, 0xFF, etc.); the zero flag is simply cleared when there's a non-zero value in the register.
So the answers here advocating defining 0 as false and anything else as true are correct.
If you want to "define" a default value for true, then 0x01 is better than most:
- It represents the same number in every bit length and signedness
- It only requires testing one bit if you want to know whether it's true, should the zero flag be unavailable, or costly to use
- No need to worry about sign extension during conversion to other types
- Logical and arithmetic expressions act the same on it
-Adam
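A quick C sketch of the last two bullets, assuming a C-like language; the specific values and printf checks are mine, purely for illustration:

    #include <stdio.h>

    int main(void) {
        unsigned char t = 0x01, f = 0x00;   /* canonical true / false */

        /* Only one bit needs testing when true is 0x01. */
        printf("%d\n", t & 1);              /* 1 */

        /* Logical and bitwise operators agree on canonical 0/1 values... */
        printf("%d %d\n", t && f, t & f);   /* 0 0 */
        printf("%d %d\n", t && t, t & t);   /* 1 1 */

        /* ...and arithmetic on them simply counts the trues. */
        printf("%d\n", t + t + f);          /* 2 */
        return 0;
    }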
Answer 2:
It doesn't matter, as long as it satisfies the rules for the external representation.
I would take a hint from C here, where false is defined absolutely as 0 and true is defined as "not false". This is an important distinction compared to picking an absolute value for true: unless you have a type that only has two states, you have to account for every value within that value type and decide which are true and which are false.
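A minimal C sketch of that convention; the !! idiom below just normalizes "not false" to a single canonical value:

    #include <stdio.h>

    int main(void) {
        int values[] = { 0, 1, -1, 0xFF, 42 };
        for (int i = 0; i < 5; i++)
            /* 0 stays 0; every non-zero value normalizes to the same "true". */
            printf("%4d -> %d\n", values[i], !!values[i]);
        return 0;
    }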
Answer 3:
Why are you choosing to make non-zero values true at all? In Ada, true is TRUE and false is FALSE. There is no implicit type conversion to and from BOOLEAN.
Answer 4:
Using -1 has one advantage in a weakly typed language -- if you mess up and use the bitwise and operator instead of the logical and operator, your condition will still evaluate correctly as long as one of the operands has been converted to the canonical boolean representation. This isn't true if the canonical representation is 1.
0xffffffff & 0x00000010 == 0x00000010 (true)
0xffffffff && 0x00000010 == 0xffffffff (true)
but
0x00000001 & 0x00000010 == 0x00000000 (false)
0x00000001 && 0x00000010 == 0xffffffff (true)
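The same pitfall can be sketched in C (note that C's own && always yields 0 or 1, so the "canonical" variables below only stand in for the hypothetical language's representation):

    #include <stdio.h>

    int main(void) {
        int flag      = 0x10;   /* some non-zero "truthy" operand          */
        int canon_one = 0x01;   /* canonical true as 1                     */
        int canon_all = -1;     /* canonical true as all ones (0xFFFFFFFF) */

        /* Accidentally writing & where && was meant: */
        printf("%d\n", (canon_one & flag) != 0);   /* 0: wrongly false */
        printf("%d\n", (canon_all & flag) != 0);   /* 1: still true    */
        return 0;
    }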
Answer 5:
IMO, if you want to stick with false=0x00, you should use 0x01. 0xFF is usually:
- a sign that some operation overflowed, or
- an error marker
And in both cases, it probably means false. Hence the *nix return-value convention for executables: true=0x00, and any non-zero value is false.
Answer 6:
-1 is longer to type than 1...
In the end it doesn't matter since 0 is false and anything else is true, and you will never compare to the exact representation of true.
Edit: for those downvoting, please explain why. This answer is essentially the same as the one currently rated at +19, so that is a 21-vote difference for what is the same basic answer.
If it is because of the -1 comment, it is true that the person who actually defines "true" (e.g. the compiler writer) is going to have to type -1 instead of 1, assuming they choose an exact representation. -1 takes longer to type than 1, and the end result is the same. The statement is silly, and it was meant to be silly, because there is no real difference between the two (1 or -1).
If you are going to mark something down, at least provide a rationale for it.
Answer 7:
0xff is an odd choice since it has an implicit assumption that 8 bits is your minimum storage unit. But it's not that uncommon to want to store boolean values more compactly than that.
Perhaps you want to rephrase by thinking about whether boolean operators produce something that is just one 0 or 1 bit (which works regardless of sign extension), or is all-zeroes or all-ones (and depends on sign extension of signed two's-complement quantities to maintain all-ones at any length).
I think your life is simpler with 0 and 1.
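A short C sketch of the sign-extension point; the variable names are mine, purely for illustration:

    #include <stdio.h>

    int main(void) {
        signed char   s = -1;     /* all ones in 8 bits                     */
        unsigned char u = 0xFF;   /* the same bit pattern, but unsigned     */

        int ws = s;               /* sign-extends: still all ones, i.e. -1  */
        int wu = u;               /* zero-extends: 0x000000FF, i.e. 255     */
        printf("%d %d\n", ws, wu);    /* -1 255 */

        /* A plain 0/1 boolean widens to the same 0/1 either way. */
        unsigned char b = 0x01;
        printf("%d\n", (int)b);       /* 1 */
        return 0;
    }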
Answer 8:
The pros are none, and the cons are none, too. As long as you provide an automatic conversion from integer to boolean, it will be arbitrary, so it really doesn't matter which numbers you choose.
On the other hand, if you didn't allow this automatic conversion you'd have a pro: you wouldn't have some entirely arbitrary rule in your language. You wouldn't have (7 - 4 - 3) == false, or 3 * 4 + 17 == "Hello", or "Hi mom!" == Complex(7, -2).
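For what it's worth, C's implicit conversions really do make the first of those comparisons hold; a tiny sketch (false here is just <stdbool.h>'s zero):

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        /* 7 - 4 - 3 is 0, and false is 0, so the comparison is true. */
        printf("%d\n", (7 - 4 - 3) == false);   /* 1 */
        return 0;
    }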
Answer 9:
I think the C method is the way to go: 0 means false, anything else means true. If you go with another mapping for true, then you are left with the problem of having indeterminate values that are neither true nor false.
If this is a language that you'll be compiling for a specific instruction set that has special support for a particular representation, then I'd let that guide you. But absent any additional information, for a 'standard' internal representation, I'd go with -1 (all 1's in binary). This value extends well to whatever size of boolean you want (a single bit, 8 bits, 16, etc.), and if you break up a "TRUE" or a "FALSE" into a smaller "TRUE" or "FALSE", it's still the same (whereas if you broke up a 16-bit TRUE=0x0001, you'd get a FALSE=0x00 and a TRUE=0x01).
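A short C sketch of the "break a wide TRUE into narrower booleans" point; the widths and masks are just illustrative:

    #include <stdio.h>

    int main(void) {
        unsigned short t_all = 0xFFFF;   /* TRUE as all ones */
        unsigned short t_one = 0x0001;   /* TRUE as 1        */

        /* Split each 16-bit value into its two bytes. */
        printf("%02X %02X\n", (t_all >> 8) & 0xFF, t_all & 0xFF);   /* FF FF: both halves still "true" */
        printf("%02X %02X\n", (t_one >> 8) & 0xFF, t_one & 0xFF);   /* 00 01: the high half is "false" */
        return 0;
    }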
Answer 10:
Design the language so that 0 is false and non-zero is true. There is no need to "convert" anything, and thinking "non-zero" instead of some specific value will help you write the code properly.
If you have built-in symbols like "True" then go ahead and pick a value, but always think "non-zero is true" instead of "0x01 is true".
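A small C sketch of the difference between "non-zero is true" and "0x01 is true"; the TRUE macro and flag value are hypothetical, not from the answer:

    #include <stdio.h>

    #define TRUE 0x01

    int main(void) {
        int flag = 0xFF;                  /* non-zero, so logically true      */
        printf("%d\n", flag == TRUE);     /* 0: comparing to 0x01 goes wrong  */
        printf("%d\n", flag != 0);        /* 1: "non-zero is true" stays safe */
        return 0;
    }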
Answer 11:
Whatever you do, once you select your values, don't change them. In FORTH-77, true and false were defined as 1 and 0. Then FORTH-83 redefined them as -1 and 0. This caused not a few (well, OK, only a few; this is FORTH we are talking about) problems.