Why does left shift operation invoke Undefined Behaviour when the left operand has negative value?

Published 2019-01-02 16:13

Question:

In C, the bitwise left shift operation invokes Undefined Behaviour when the left operand has a negative value.

Relevant quote from ISO C99 (6.5.7/4):

The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.

But in C++ the behaviour is well defined.

ISO C++03 (5.8/2):

The value of E1 << E2 is E1 (interpreted as a bit pattern) left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 multiplied by the quantity 2 raised to the power E2, reduced modulo ULONG_MAX+1 if E1 has type unsigned long, UINT_MAX+1 otherwise. [Note: the constants ULONG_MAX and UINT_MAX are defined in the header <climits>. ]

That means

int a = -1, b = 2, c;
c = a << b;

invokes Undefined Behaviour in C but the behaviour is well defined in C++.

What forced the ISO C++ committee to consider that behaviour well defined as opposed to the behaviour in C?

On the other hand the behaviour is implementation defined for bitwise right shift operation when the left operand is negative, right?

My question is why does left shift operation invoke Undefined Behaviour in C and why does right shift operator invoke just Implementation defined behaviour?

P.S : Please don't give answers like "It is undefined behaviour because the Standard says so". :P

Answer 1:

The paragraph you copied is talking about unsigned types. The behavior is undefined in C++ as well. From the latest C++0x draft:

The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.

EDIT: I got a look at the C++98 paper. It just doesn't mention signed types at all, so it's still undefined behavior.

Right-shifting a negative value is implementation-defined, right. Why? In my opinion: it's easy to make implementation-defined because there are no truncation-from-the-left issues. When you shift left you must say not only what is shifted in from the right but also what happens to the rest of the bits, e.g. under two's-complement representation, which is another story.



Answer 2:

In C bitwise left shift operation invokes Undefined Behaviour when the left side operand has negative value. [...] But in C++ the behaviour is well defined. [...] why [...]

The easy answer is: because the standards say so.

A longer answer: it probably has something to do with the fact that C and C++ both allow representations for negative numbers other than two's complement. Giving fewer guarantees about what happens makes it possible to use the languages on other hardware, including obscure and/or old machines.

For some reason, the C++ standardization committee felt like adding a little guarantee about how the bit representation changes. But since negative numbers may still be represented via one's complement or sign+magnitude, the possible resulting values still vary.

Assuming 16-bit ints, we'll have

 -1 = 1111111111111111  // 2's complement
 -1 = 1111111111111110  // 1's complement
 -1 = 1000000000000001  // sign+magnitude

Shifted to the left by 3, we'll get

 -8 = 1111111111111000  // 2's complement
-15 = 1111111111110000  // 1's complement
  8 = 0000000000001000  // sign+magnitude

What forced the ISO C++ committee to consider that behaviour well defined as opposed to the behaviour in C?

I guess they made this guarantee so that you can use << appropriately when you know what you're doing (i.e. when you're sure your machine uses two's complement).

On the other hand the behaviour is implementation defined for bitwise right shift operation when the left operand is negative, right?

I'd have to check the standard, but you may be right. A right shift without sign extension on a two's-complement machine isn't particularly useful. So the current state is definitely better than requiring vacated bits to be zero-filled: it leaves room for machines that do sign extension, even though it is not guaranteed.



Answer 3:

To answer your real question as stated in the title: as for any operation on a signed type, this has undefined behavior if the result of the mathematical operation doesn't fit in the target type (under- or overflow). Signed integer types are designed like that.

For the left shift operation, if the value is positive or 0, the definition of the operator as multiplication by a power of 2 makes sense, so everything is fine unless the result overflows; nothing surprising.

If the value is negative, you could have the same interpretation of multiplication with a power of 2, but if you just think in terms of bit shift, this would be perhaps surprising. Obviously the standards committee wanted to avoid such ambiguity.

My conclusion:

  • if you want to do real bit pattern operations use unsigned types
  • if you want to multiply a value (signed or not) by a power of two, do just that, something like

    i * (1u << k)

your compiler will transform this into decent assembler in any case.



Answer 4:

A lot of these kinds of things are a balance between what common CPUs can actually support in a single instruction and what's useful enough to expect compiler writers to guarantee, even if it takes extra instructions. Generally, a programmer using the bit-shifting operators expects them to map to single instructions on CPUs that have them, so the Standard leaves undefined or implementation-defined behaviour where CPUs handled the "edge" conditions differently, rather than mandating a behaviour and making the operation unexpectedly slow. Keep in mind that the additional pre/post handling instructions would be incurred even in the simple use cases.

Undefined behaviour may have been necessary where some CPUs generated traps/exceptions/interrupts (as distinct from C++ try/catch-type exceptions) or generally useless/inexplicable results, while if all the CPUs considered by the Standards Committee at the time provided at least some defined behaviour, the Committee could make the behaviour implementation-defined.



Answer 5:

My question is why does left shift operation invoke Undefined Behaviour in C and why does right shift operator invoke just Implementation defined behaviour?

The folks at LLVM speculate that the shift operator has constraints because of the way the instruction is implemented on various platforms. From What Every C Programmer Should Know About Undefined Behavior #1/3:

... My guess is that this originated because the underlying shift operations on various CPUs do different things with this: for example, X86 truncates 32-bit shift amount to 5 bits (so a shift by 32-bits is the same as a shift by 0-bits), but PowerPC truncates 32-bit shift amounts to 6 bits (so a shift by 32 produces zero). Because of these hardware differences, the behavior is completely undefined by C...

Note that the discussion was about shifting by an amount greater than the register size, but it's the closest I've found to an explanation of the shift constraints from an authority.

I think a second reason is the potential sign change on a two's-complement machine. But I've never read it anywhere (no offense to @sellibitze; I happen to agree with him).



Answer 6:

The behavior in C++03 is the same as in C++11 and C99, you just need to look beyond the rule for left-shift.

Section 5p5 of the Standard says that:

If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.

The left-shift expressions which are specifically called out in C99 and C++11 as being undefined behavior, are the same ones that evaluate to a result outside the range of representable values.

In fact, the sentence about unsigned types using modular arithmetic is there specifically to avoid generating values outside the representable range, which would automatically be undefined behavior.



Answer 7:

In C89, the behavior of left-shifting negative values was unambiguously defined on two's-complement platforms which did not use padding bits in their signed and unsigned integer types. The value bits that signed and unsigned types had in common had to be in the same places, and the only place the sign bit of a signed type could go was in the same place as the upper value bit of the corresponding unsigned type, which in turn had to be to the left of everything else.

The C89 mandated behaviors were useful and sensible for two's-complement platforms without padding, at least in cases where treating them as multiplication would not cause overflow. The behavior may not have been optimal on other platforms, or on implementations that seek to reliably trap signed integer overflow. The authors of C99 probably wanted to allow implementations flexibility in cases where the C89 mandated behavior would have been less than ideal, but nothing in the rationale suggests an intention that quality implementations shouldn't continue to behave in the old fashion in cases where there was no compelling reason to do otherwise.

Unfortunately, even though there have never been any implementations of C99 that don't use two's-complement math, the authors of C11 declined to define the common-case (non-overflow) behavior; IIRC, the claim was that doing so would impede "optimization". Having the left-shift operator invoke Undefined Behavior when the left-hand operand is negative allows compilers to assume that the shift will only be reachable when the left-hand operand is non-negative. This allows compilers which receive code like:

int do_something(int x)
{
  if (x >= 0)
  {
    launch_missiles();
    exit(1);
  }
  return x << 4;  /* reachable only when x < 0, which would be UB */
}

to recognize that such a method can never be called with a negative value of x, so the if test can be deleted and the launch_missiles() call made unconditional. Since exit is known not to return, the compiler can also omit the computation of x << 4. Were it not for such a rule, the programmer would have to insert some kind of clunky __assume(x >= 0); directive to request that behavior; making left shifts of negative values Undefined Behavior spares a programmer who obviously wants such semantics (by virtue of performing the left shift) from cluttering up the code with them.

Note, by the way, that in the hypothetical event that code did call do_something(-1), it would be engaging in Undefined Behavior, so calling launch_missiles would be a perfectly legitimate thing to do.



Answer 8:

The result of shifting depends upon the numeric representation. Shifting behaves like multiplication only when numbers are represented in two's complement. But the problem is not exclusive to negative numbers. Consider a 4-bit signed number represented in excess-8 (a.k.a. offset binary). The number 1 is represented as 1+8, or 1001. If we left-shift this as bits, we get 0010, which is the representation for -6. Similarly, -1 is represented as -1+8, i.e. 0111, which becomes 1110 when left-shifted: the representation for +6. The bitwise behavior is well defined, but the numeric behavior is highly dependent on the system of representation.


