Why C# decimals can't be initialized without the M suffix

Posted 2019-01-12 05:36

Question:

public class MyClass
{
    public const Decimal CONSTANT = 0.50; // ERROR CS0664   
}

produces this error:

error CS0664: Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type

as documented. But this works:

public class MyClass
{
    public const Decimal CONSTANT = 50; // OK   
}

And I wonder why they forbid the first one. It seems weird to me.
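For completeness, here is a sketch of the fix the compiler message suggests (the class name `MyClassFixed` is mine, for illustration): the `M` suffix makes the literal a decimal, so no conversion is involved at all.

```csharp
public class MyClassFixed
{
    // 0.50M is a decimal literal, so no double -> decimal conversion is needed
    public const decimal CONSTANT = 0.50M;

    // An int literal converts implicitly to decimal, which is why 50 also works
    public const decimal FROM_INT = 50;
}
```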

Answer 1:

The type of a literal without the m suffix is double - it's as simple as that. You can't initialize a float that way either:

float x = 10.0; // Fail

The type of the literal should be clear from the literal itself, and the variable it's assigned to should have a type that is assignable from the literal's type. Your second example works because there's an implicit conversion from int (the type of the literal) to decimal. There's no implicit conversion from double to decimal, as it can lose information.

Personally I'd have preferred it if there'd been no default or if the default had been decimal, but that's a different matter...
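A small sketch of those rules in action (the casts and suffixes are standard C#; the variable names are just for illustration):

```csharp
using System;

class LiteralDemo
{
    static void Main()
    {
        // float y = 10.0;              // error CS0664: 10.0 is a double literal
        float y = 10.0f;                // the 'f' suffix makes it a float literal

        decimal fromInt = 50;           // OK: implicit int -> decimal conversion
        decimal fromDouble = (decimal)0.50; // double -> decimal needs an explicit cast

        Console.WriteLine(y);
        Console.WriteLine(fromInt);
        Console.WriteLine(fromDouble);
    }
}
```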



Answer 2:

The first example is a double literal. The second example is an integer literal.

A double can't be converted to decimal without possible loss of precision, but an int can, so the language allows the implicit conversion only from int.



Answer 3:

Every literal is treated as having a type. If you do not use the 'M' suffix, it is treated as a double. That you cannot implicitly convert a double to a decimal is quite understandable, as the conversion can lose precision.



Answer 4:

Your answer is a bit lower down in the same link you provided, also here, under Conversions:

"The integral types are implicitly converted to decimal and the result evaluates to decimal. Therefore you can initialize a decimal variable using an integer literal, without the suffix".

So, the reason is the implicit conversion between int and decimal. And since 0.50 is treated as a double, and there is no implicit conversion between double and decimal, you get the error.

For more details:

http://msdn.microsoft.com/en-us/library/y5b434w4(v=vs.80).aspx

http://msdn.microsoft.com/en-us/library/yht2cx7b.aspx



Answer 5:

It's a design choice that the creators of C# made.

It likely stems from the fact that a double can lose precision, and they didn't want that loss to be stored silently. int doesn't have that problem.



Answer 6:

From http://msdn.microsoft.com/en-us/library/364x0z75.aspx : There is no implicit conversion between floating-point types and the decimal type; therefore, a cast must be used to convert between these two types.

They do this because double has such a huge range, ±5.0 × 10^−324 to ±1.7 × 10^308, whereas int is only −2,147,483,648 to 2,147,483,647. A decimal's range is (−7.9 × 10^28 to 7.9 × 10^28) / (10^0 to 10^28), so it can hold any int but not every double.
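A minimal sketch of that range difference (1e300 is an arbitrary example value far outside decimal's range; even the explicit cast fails for it, but at runtime rather than silently):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        double huge = 1e300;   // well within double's range, far outside decimal's

        try
        {
            decimal d = (decimal)huge; // decimal conversions always check for overflow
            Console.WriteLine(d);
        }
        catch (OverflowException)
        {
            Console.WriteLine("1e300 does not fit in a decimal");
        }

        // Within range, the cast succeeds, but it still has to be explicit
        decimal ok = (decimal)0.5;
        Console.WriteLine(ok);
    }
}
```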