Why is the range of any data type greater on the negative side than on the positive side?
For example, in the case of an integer:
in Turbo C its range is -32768 to 32767, and in Visual Studio it is -2147483648 to 2147483647.
The same happens with other data types...
[UPD: Set proper limit values for Visual Studio]
Ordinarily, because a two's complement system is used for storing negative values, the range of a signed integer is biased toward the negative side.
The range should be: -(2^(n-1)) to (2^(n-1) - 1)
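A quick way to sanity-check that formula is to compare it with the limits a compiler actually reports; a minimal sketch, assuming a C99-or-later compiler with the fixed-width types from <stdint.h> (so not Turbo C):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* For an n-bit two's complement type, the range is -(2^(n-1)) to 2^(n-1) - 1. */
        long long min16 = -(1LL << 15);          /* -(2^15)   = -32768      */
        long long max16 =  (1LL << 15) - 1;      /*  2^15 - 1 =  32767      */
        long long min32 = -(1LL << 31);          /* -(2^31)   = -2147483648 */
        long long max32 =  (1LL << 31) - 1;      /*  2^31 - 1 =  2147483647 */

        printf("16-bit: formula [%lld, %lld], <stdint.h> says [%d, %d]\n",
               min16, max16, INT16_MIN, INT16_MAX);
        printf("32-bit: formula [%lld, %lld], <stdint.h> says [%lld, %lld]\n",
               min32, max32, (long long)INT32_MIN, (long long)INT32_MAX);
        return 0;
    }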
Because the range includes zero. The number of different values an n-bit integer can represent is 2^n. That means a 16-bit integer can represent 65536 different values. If it's an unsigned 16-bit integer, it can represent 0-65535 (inclusive). The convention for signed integers is to represent -32768 to 32767, -2147483648 to 2147483647, etc.
With two's complement, a negative number is defined as the bitwise NOT of the corresponding positive value plus 1. Zero takes up one of the non-negative bit patterns, so for a given number of bits there is one more representable value on the negative side than on the positive side.
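To see the "bitwise NOT plus 1" rule in action, here is a small sketch; the final cast of an out-of-range value back to int16_t is implementation-defined, but on two's complement implementations it reads back as the expected negative number:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t x = 5;

        /* Two's complement negation: invert every bit, then add 1. */
        uint16_t bits    = (uint16_t)x;            /* 0000 0000 0000 0101 */
        uint16_t negated = (uint16_t)(~bits + 1u); /* 1111 1111 1111 1011 */

        printf("x = %d, ~x + 1 read back as signed = %d\n", x, (int16_t)negated);

        /* Negating the most negative value (-32768) has no representable
           result, because +32768 does not fit -- the asymmetry in question. */
        return 0;
    }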
The minimum range required by C is actually -32767 through 32767, because it has to cater for two's complement, ones' complement and sign/magnitude encoding of negative numbers, all of which the C standard allows. See Annex E, Implementation limits, of C11 (and C99) for details on the minimum ranges for data types.

Your question pertains only to the two's complement variant, and the reason for that is simple. With 16 bits, you can represent 2^16 (or 65,536) different values, and zero has to be one of those. Hence there is an odd number of values left, of which the majority (by one) are negative values.
Both ones' complement and sign-magnitude encoding allow for a negative-zero value (as well as positive-zero), meaning that one less bit pattern is available for the non-zero numbers, hence the reduced minimum range you find in the standard.
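To illustrate the wasted pattern, here is a small sketch that decodes 16-bit patterns by the sign-magnitude rule; decode_sign_magnitude is a helper written purely for illustration (the arithmetic itself does not depend on how the host machine stores its own integers):

    #include <stdio.h>
    #include <stdint.h>

    /* Decode a 16-bit pattern as sign-magnitude: the top bit is the sign,
       the remaining 15 bits are the magnitude. */
    static int decode_sign_magnitude(uint16_t bits)
    {
        int magnitude = bits & 0x7FFF;
        return (bits & 0x8000) ? -magnitude : magnitude;
    }

    int main(void)
    {
        printf("0x0000 -> %6d\n", decode_sign_magnitude(0x0000)); /* +0               */
        printf("0x8000 -> %6d\n", decode_sign_magnitude(0x8000)); /* -0: same value,
                                                                      a wasted pattern */
        printf("0x7FFF -> %6d\n", decode_sign_magnitude(0x7FFF)); /*  32767           */
        printf("0xFFFF -> %6d\n", decode_sign_magnitude(0xFFFF)); /* -32767           */
        return 0;
    }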
Two's complement is actually a nifty encoding scheme because positive and negative numbers can be added together with the same simple hardware. Other encoding schemes tend to require more elaborate hardware to do the same task.
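One way to see the "same hardware" point is to do the addition purely on unsigned bit patterns and only reinterpret the result as signed afterwards; a sketch (the unsigned-to-signed casts are implementation-defined but behave as shown on two's complement machines):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* The bit patterns of -5 and +3, treated as plain unsigned 16-bit words. */
        uint16_t a = (uint16_t)-5;        /* 0xFFFB */
        uint16_t b = 3;                   /* 0x0003 */

        /* One ordinary binary addition, with no notion of sign at all. */
        uint16_t sum = (uint16_t)(a + b); /* 0xFFFE */

        printf("as unsigned: %u + %u = %u\n", (unsigned)a, (unsigned)b, (unsigned)sum);
        printf("as signed:   %d + %d = %d\n", (int16_t)a, (int16_t)b, (int16_t)sum);
        return 0;
    }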
For a fuller explanation of how two's complement works, see the Wikipedia page.
Because of how numbers are stored. Signed numbers are stored using something called "two's complement notation".
Remember all variables have a certain number of bits. If the most significant one of them, the one on the left, is a 0, then the number is non-negative (i.e., positive or zero), and the rest of the bits simply represent the value.
However, if the leftmost bit is a 1, then the number is negative. The real value of the number can be obtained by subtracting 2^n from the whole number represented (as an unsigned quantity, including the leftmost 1), where n is the number of bits the variable has.
Since only n - 1 bits are left for the actual value (the magnitude) of the number, the possible combinations are 2^(n - 1). For positive/zero numbers, this is easy: they go from 0 to 2^(n - 1) - 1. That -1 is to account for zero itself -- for instance, if you only had four possible combinations, those combinations would represent 0, 1, 2, and 3 (notice how there are four numbers): it goes from 0 to 4 - 1.
For negative numbers, remember the leftmost bit is 1, so the whole number represented goes between 2^(n - 1) and (2^n) - 1 (parentheses are very important there!). However, as I said, you have to take 2^n away to get the real value of the number. 2^(n - 1) - 2^n is -(2^(n - 1)), and ((2^n) - 1) - 2^n is -1. Therefore, the negative numbers' range is -(2^(n - 1)) to -1.
Put all that together and you get -2^(n - 1) to 2^(n - 1) - 1. As you can see, the upper bound gets a -1 that the lower bound doesn't.
And that's why there's one more negative number than positive.
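The subtraction described above can be written out directly. Here is a small sketch that decodes 8-bit patterns by hand (n = 8, so 2^n = 256); decode_twos_complement8 is a hypothetical helper, written only to mirror the derivation:

    #include <stdio.h>
    #include <stdint.h>

    /* Decode an 8-bit two's complement pattern by the rule above:
       if the leftmost bit is 0, the value is the pattern itself;
       if it is 1, the value is the pattern minus 2^n (here 2^8 = 256). */
    static int decode_twos_complement8(uint8_t bits)
    {
        if (bits & 0x80)
            return (int)bits - 256;
        return (int)bits;
    }

    int main(void)
    {
        printf("0x00 -> %4d\n", decode_twos_complement8(0x00)); /*    0               */
        printf("0x7F -> %4d\n", decode_twos_complement8(0x7F)); /*  127 = 2^(n-1) - 1 */
        printf("0x80 -> %4d\n", decode_twos_complement8(0x80)); /* -128 = -(2^(n-1))  */
        printf("0xFF -> %4d\n", decode_twos_complement8(0xFF)); /*   -1               */
        return 0;
    }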