Why does the compiler decide that 2.3 is a double, so that this code won't compile:
decimal x;
x = 2.3; // Compilation error - cannot convert double to decimal.
x = (decimal)2.3; // OK
Why doesn't the compiler think like this:
He wants a decimal, he gave me a value that can be a decimal, so it's a decimal!
And why doesn't this get a compilation error:
short x;
x = 23; // O.K.
Who said that 23 isn't an int?
Because floating-point numbers are a little tricky in terms of calculation and value range, a floating-point literal without a suffix is always given the widest-range type available, which in your case is double.

Integer literals get a similar treatment, but they can be converted implicitly without any problem as long as the value fits. If the value exceeds the range of the target variable, it causes an error (for example, 257 for a byte).
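A minimal sketch of both rules (the identifiers are just illustrative):

double d = 2.3;     // an unsuffixed floating-point literal is a double
// float f = 2.3;   // error: no implicit conversion from double to float
byte ok = 255;      // fine: the constant 255 fits into a byte
// byte bad = 257;  // compile-time error: 257 is out of range for a byte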
There are a few things going on here:
The first assignment tries to convert the double literal to a float implicitly. That won't work. The second uses an explicit cast from double to decimal (which is allowed, but generally not a good idea), and then an implicit conversion of decimal to float (which isn't allowed). If x is meant to be declared as decimal, then the only conversion required is from double to decimal - which still isn't a good idea, usually.

The working conversion of the integer literal is due to an "implicit constant expression conversion", as specified in section 6.1.9 of the C# 4 spec: a constant expression of type int can be converted to sbyte, byte, short, ushort, uint or ulong, provided its value is within the range of the destination type. There's something similar for long, but not for double.

Basically, when you're writing a floating point constant, it's a good idea to explicitly specify the type with a suffix.
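For example (a small sketch; the values and names are arbitrary):

decimal price = 2.3m;   // m suffix: the literal is a decimal
float ratio = 2.3f;     // f suffix: the literal is a float
double factor = 2.3d;   // d suffix is optional; 2.3 on its own is already a double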
There are a lot of questions here. Let's break them down into small questions.
Historical reasons. C# is designed to be a member of the "C-like syntax" family of languages, so that its superficial appearance and basic idioms are familiar to programmers who use C-like languages. In almost all of those languages, floating point literals are treated as binary floats, not decimal floats, because that's how C did it originally.
Were I designing a new language from scratch I would likely make ambiguous literals illegal; every floating point literal would have to be unambiguously double, single or decimal, and so on.
Why doesn't the compiler reason "he wants a decimal, he gave me a value that could be a decimal, so it's a decimal"? Because doing so is probably a mistake, in two ways.
First, doubles and decimals have different ranges and different amounts of "representation error" -- that is, how far the quantity actually represented is from the precise mathematical quantity you wish to represent. Converting a double to a decimal or vice versa is a dangerous thing to do and you should be sure that you are doing it correctly; making you spell out the cast calls attention to the fact that you are potentially losing precision or magnitude.
Second, doubles and decimals have very different usages. Doubles are usually used for scientific calculations where a difference between 1.000000000001 and 0.99999999999 is far smaller than experimental error. Accruing small representation errors is irrelevant. Decimals are usually used for exact financial calculations that need to be perfectly accurate to the penny. Mixing the two accidentally seems dangerous.
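A small sketch of the difference, using a classic example that is not from the original post:

double d = 0.1 + 0.2;     // binary floating point: the sum carries a tiny representation error
decimal m = 0.1m + 0.2m;  // decimal floating point: every operand and the sum are exact
bool dExact = d == 0.3;   // false
bool mExact = m == 0.3m;  // true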
There are times when you do have to mix them; for example, it is easier to work out "exponential" problems like mortgage amortization or compounded interest accrual in doubles. In those cases again we make you spell out that you are converting from double to decimal in order to make it very clear that this is a point in the program where precision or magnitude losses might occur if you haven't gotten it right.
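A sketch of that pattern (the numbers are hypothetical; the point is the explicit cast at the boundary):

double rate = 0.05;                         // 5% annual interest
double growth = Math.Pow(1 + rate, 30);     // 30 years of compounding - easy in double
decimal balance = 1000m * (decimal)growth;  // the explicit cast marks where precision may be lost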
C# is not a "hide your mistakes for you" kind of language. It is a "tell you about your mistakes so you can fix them" kind of language. If you meant to say "2.3m" and you forgot the "m" then the compiler should tell you about it.
Because an integer constant can be checked to see if it is in the correct range at compile time. And a conversion from an in-range integer to a smaller integral type is always exact; it never loses precision or magnitude, unlike double/decimal conversions. Also, integer constant arithmetic is always done in a "checked" context unless you override that with an unchecked block, so there is not even the danger of overflow.
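A sketch of that compile-time range check (the identifiers are illustrative):

short a = 23;         // fine: the constant 23 is known to fit in a short
// short b = 40000;   // compile-time error: 40000 is out of range for a short
int n = 23;
short c = (short)n;   // n is a variable, not a constant, so an explicit cast is required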
And it is less likely that integer/short arithmetic crosses a "domain" boundary like double/decimal arithmetic. Double arithmetic is likely to be scientific, decimal arithmetic is likely to be financial. But integer and short arithmetic are not each clearly tied to different business domains.
And making it legal means that you don't have to write ugly unnecessary code that casts constants to the right types.
There is therefore no good reason to make it illegal, and good reasons to make it legal.
2.3 is a double. Those are the language rules: any numeric literal with a decimal point in it is a double, unless it has an F suffix (float) or an M suffix (decimal).

The compiler helpfully tells me this, too.
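With a recent C# compiler the diagnostic is along these lines (paraphrased, not copied from the original post):

// decimal x = 2.3;
// error CS0664: Literal of type double cannot be implicitly converted to type 'decimal';
//               use an 'M' suffix to create a literal of this type
decimal x = 2.3M;   // compiles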