I get very confused: sometimes my code seems to treat a value as signed and sometimes as unsigned when comparing values. How does the code know whether a value is signed or unsigned?
Answer 1:
Why do you think that assembly code has to "know" if a value is signed or unsigned?
For most operations, the results of a signed and an unsigned operation are the same:
signed int a = 5;
signed int b = -6; // 0xFFFFFFFA
signed int c;
c = a + b; // results in -1 which is 0xFFFFFFFF
And:
unsigned int a = 5;
unsigned int b = 0xFFFFFFFA;
unsigned int c;
c = a + b; // results in 0xFFFFFFFF
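To see this from C, here is a small runnable sketch (my own illustration, assuming 32-bit int as in the snippets above): both additions produce the same bit pattern, and only the way we print it differs.

#include <stdio.h>

int main(void) {
    signed int   sc = 5 + (-6);          /* signed addition   */
    unsigned int uc = 5u + 0xFFFFFFFAu;  /* unsigned addition */

    /* Both lines print the same hex pattern, 0xffffffff: the adder
       produced one result, only our interpretation of it differs. */
    printf("signed:   %d (0x%x)\n", sc, (unsigned int)sc);
    printf("unsigned: %u (0x%x)\n", uc, uc);
    return 0;
}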
Some exceptions are division and comparison. For these, most CPUs have distinct assembler instructions for signed and unsigned operands. The examples here are x86 assembler, but MSP430 should be similar:
signed int a, b;
if(a > b) { ... }
Results in:
mov eax, [a]
cmp eax, [b]
jle elsePart ; Note the "L" in "jle"
And:
unsigned int a, b;
if(a > b) { ... }
Results in:
mov eax, [a]
cmp eax, [b]
jbe elsePart ; Note the "B" in "jbe"
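Here is the same divergence seen from C (my own sketch, again assuming 32-bit int): the operands have identical bit patterns, but the declared types decide whether the compiler emits the signed (jle) or the unsigned (jbe) comparison.

#include <stdio.h>

int main(void) {
    signed int   sa = 5, sb = -6;           /* sb's bit pattern is 0xFFFFFFFA */
    unsigned int ua = 5, ub = 0xFFFFFFFAu;  /* same bit pattern as sb */

    /* Same bits, different compare instruction: the signed compare
       says 5 > -6, the unsigned compare says 5 < 0xFFFFFFFA. */
    printf("signed:   a > b is %d\n", sa > sb);  /* prints 1 */
    printf("unsigned: a > b is %d\n", ua > ub);  /* prints 0 */
    return 0;
}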
Answer 2:
The machine doesn't care or know what is signed or unsigned unless you tell it so. At the level where assembler developers dwell, the machine is a brick and you are the conductor: you have to know the contracts of the machine's instruction set, and things like flags, well enough to ensure a deterministic outcome.
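To illustrate that point (a sketch of my own): the same stored bytes read as two different numbers depending on the type you choose to read them through; the memory itself records no signedness.

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int bits = 0xFFFFFFFAu;         /* one bit pattern in memory */
    signed int   as_signed;
    memcpy(&as_signed, &bits, sizeof bits);  /* reinterpret the bytes, no conversion */

    /* The stored bytes never change; only our reading of them does. */
    printf("as unsigned: %u\n", bits);       /* 4294967290 */
    printf("as signed:   %d\n", as_signed);  /* -6 */
    return 0;
}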
Answer 3:
Some processor instructions are signed and others are unsigned. If you declare a variable as unsigned in C, the compiler will emit the unsigned instructions for it (usually faster to execute). If you are writing assembly by hand, you have to choose the instructions you actually need.
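Division is a concrete case of this (my own sketch, assuming 32-bit int): signed and unsigned division of the same bit pattern give different answers, which is why x86, for example, has both idiv and div.

#include <stdio.h>

int main(void) {
    signed int   s = -6;           /* bit pattern 0xFFFFFFFA */
    unsigned int u = 0xFFFFFFFAu;  /* same bit pattern */

    /* Same bits divided by 2; the chosen instruction decides the answer. */
    printf("signed:   %d\n", s / 2);  /* -3 */
    printf("unsigned: %u\n", u / 2);  /* 2147483645 */
    return 0;
}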