Why does the compiler decide 2.3 is double and not decimal?

Posted 2020-02-26 05:38

Why does the compiler decide that 2.3 is double, so this code won't compile:

decimal x;
x = 2.3;          // Compilation error - cannot implicitly convert double to decimal.
x = (decimal)2.3; // OK - explicit conversion.

Why doesn't the compiler think like this:
He wants to get a decimal, he gave me a value that can be a decimal, so it's a decimal!

And why doesn't this get a compilation error:

short x;
x = 23; // O.K.

Who said that 23 isn't an int?

4 Answers
家丑人穷心不美
#2 · 2020-02-26 06:00

Because floating-point numbers are always a little difficult, both in calculation and in value range, a floating-point literal is always given the biggest possible type (in your case: double).

Non-floating-point constants get similar handling, so they can be converted without any problem as long as the value fits. If your value exceeds the range of the variable, it causes an error (for example, 257 for a byte).
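
For example, a minimal sketch of both cases (the exact compiler error text may vary by version):

byte ok = 250;   // fine - the constant 250 is within byte's range (0..255)
byte bad = 257;  // compilation error - constant 257 cannot be converted to a byte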

劫难
#3 · 2020-02-26 06:11

There are a few things going on here:

  • In your first example, you're trying to convert a double literal to a float implicitly. That won't work.
  • The supposedly working line is actually trying to perform an explicit conversion of double to decimal (which is allowed, but generally not a good idea), and then an implicit conversion of decimal to float (which isn't allowed). If x is meant to be declared as decimal, then the only conversion required is from double to decimal - which still isn't a good idea, usually.
  • The working conversion of the integer literal is due to an "implicit constant expression conversion", as specified in section 6.1.9 of the C# 4 spec:

    A constant-expression of type int can be converted to type sbyte, byte, short, ushort, uint or ulong, provided the value of the constant-expression is within the range of the destination type.

    There's something similar for long, but not for double.
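
For example, a quick sketch of how that rule plays out:

short s = 23;     // fine - the int constant 23 is within short's range
short t = 40000;  // compilation error - 40000 doesn't fit in a short (max 32767)
ulong u = 23L;    // fine - the similar rule for long allows non-negative long constants
decimal m = 2.3;  // compilation error - no such rule exists for double constants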

Basically, when you're writing a floating point constant, it's a good idea to explicitly specify the type with a suffix:

double d = 2.3d;
float f = 2.3f;
decimal m = 2.3m;
走好不送
#4 · 2020-02-26 06:12

There are a lot of questions here. Let's break them down into small questions.

Why is the literal 2.3 of type double rather than decimal?

Historical reasons. C# is designed to be a member of the "C-like syntax" family of languages, so that its superficial appearance and basic idioms are familiar to programmers who use C-like languages. In almost all of those languages, floating point literals are treated as binary not decimal floats because that's how C did it originally.

Were I designing a new language from scratch I would likely make ambiguous literals illegal; every floating point literal would have to be unambiguously double, single or decimal, and so on.

Why is it illegal in general to convert implicitly between double and decimal?

Because doing so is probably a mistake, in two ways.

First, doubles and decimals have different ranges and different amounts of "representation error" -- that is, how different is the quantity actually represented from the precise mathematical quantity you wish to represent. Converting a double to a decimal or vice versa is a dangerous thing to do and you should be sure that you are doing it correctly; making you spell out the cast calls attention to the fact that you are potentially losing precision or magnitude.
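
For example, a minimal sketch of that representation error (binary doubles cannot represent 0.1 exactly, while decimal can):

Console.WriteLine(0.1 + 0.2 == 0.3);     // False - the doubles carry binary representation error
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True - decimal represents these fractions exactly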

Second, doubles and decimals have very different usages. Doubles are usually used for scientific calculations where a difference between 1.000000000001 and 0.99999999999 is far smaller than experimental error. Accruing small representation errors is irrelevant. Decimals are usually used for exact financial calculations that need to be perfectly accurate to the penny. Mixing the two accidentally seems dangerous.

There are times when you have to do so; for example, it is easier to work out "exponential" problems like mortgage amortization or compounded interest accrual in doubles. In those cases again we make you spell out that you are converting from double to decimal in order to make it very clear that this is a point in the program where precision or magnitude losses might occur if you haven't gotten it right.
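
As a sketch of that last point (the rate and term here are invented purely for illustration):

double factor = Math.Pow(1.0 + 0.05 / 12.0, 360);  // compounding factor: 5% annual rate, 360 months
decimal growth = (decimal)factor;                   // the explicit cast marks where precision loss can occur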

Why is it illegal to convert a double literal to a decimal literal? Why not just pretend that it was a decimal literal?

C# is not a "hide your mistakes for you" kind of language. It is a "tell you about your mistakes so you can fix them" kind of language. If you meant to say "2.3m" and you forgot the "m" then the compiler should tell you about it.

Then why is it legal to convert an integer literal (or any integer constant) to short, byte, and so on?

Because an integer constant can be checked to see if it is in the correct range at compile time. And a conversion from an in-range integer to a smaller integral type is always exact; it never loses precision or magnitude, unlike double/decimal conversions. Also, integer constant arithmetic is always done in a "checked" context unless you override that with an unchecked block, so there is not even the danger of overflow.
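
For example, a small sketch of that compile-time checking:

const int max = int.MaxValue;
int overflow = max + 1;            // compilation error - constant arithmetic overflows in checked mode
int wrapped = unchecked(max + 1);  // fine - unchecked explicitly opts out, wrapping to int.MinValue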

And it is less likely that integer/short arithmetic crosses a "domain" boundary like double/decimal arithmetic. Double arithmetic is likely to be scientific, decimal arithmetic is likely to be financial. But integer and short arithmetic are not each clearly tied to different business domains.

And making it legal means that you don't have to write ugly unnecessary code that casts constants to the right types.

There is therefore no good reason to make it illegal, and good reasons to make it legal.

女痞
#5 · 2020-02-26 06:12

2.3 is double. Those are the language rules: any numeric literal with a decimal point in it is a double, unless it has an F suffix (float) or an M suffix (decimal):

x = 2.3F; // fine

The compiler helpfully tells me this, too:

Literal of type double cannot be implicitly converted to type 'float'; use an 'F' suffix to create a literal of this type
