Why C# decimals can't be initialized without the M suffix

Posted 2019-01-12 04:58

public class MyClass
{
    public const Decimal CONSTANT = 0.50; // ERROR CS0664   
}

produces this error:

error CS0664: Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type

as documented. But this works:

public class MyClass
{
    public const Decimal CONSTANT = 50; // OK   
}

And I wonder why they forbid the first one. It seems weird to me.
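
Following the compiler's own hint, this is the fixed version of the first example (the M suffix is exactly what the error message asks for):

public class MyClass
{
    public const Decimal CONSTANT = 0.50M; // OK: the M suffix makes 0.50 a decimal literal
}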

6 Answers
Bombasti
#2 · 2019-01-12 05:47

The first example is a double literal. The second example is an integer literal.

I guess a double can't be converted to decimal without the possibility of losing precision, while an int always converts exactly. So they allow the implicit conversion only for the integer.
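
A minimal sketch of the difference (variable names are mine): the int conversion is implicit because it is always exact, while double requires an explicit cast that acknowledges the possible precision loss.

decimal fromInt = 50;           // OK: implicit int -> decimal, always exact
// decimal fromDouble = 0.50;   // CS0664: no implicit double -> decimal
decimal cast = (decimal)0.50;   // OK: explicit cast accepts the possible loss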

再贱就再见
#3 · 2019-01-12 05:49

Your answer is a bit lower in the same link you provided (also here), under "Conversions":

"The integral types are implicitly converted to decimal and the result evaluates to decimal. Therefore you can initialize a decimal variable using an integer literal, without the suffix".

So, the reason is the implicit conversion between int and decimal. And since 0.50 is treated as a double, and there is no implicit conversion between double and decimal, you get your error.

For more details:

http://msdn.microsoft.com/en-us/library/y5b434w4(v=vs.80).aspx

http://msdn.microsoft.com/en-us/library/yht2cx7b.aspx
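
A small sketch of the quoted rule (the types shown are a representative choice; the same holds for the other integral types):

decimal fromInt  = 50;        // int literal converts implicitly
decimal fromLong = 50L;       // long literal converts implicitly
decimal fromByte = (byte)50;  // byte value converts implicitly
// decimal fromDouble = 50.0; // CS0664: double is not an integral type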

欢心
#4 · 2019-01-12 05:52

It's a design choice that the creators of C# made.

Likely it stems from the fact that double can lose precision, and they didn't want you to store that loss silently. int doesn't have that problem.
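
A hedged illustration of that loss (the exact printed double value depends on the runtime's formatting, but it drifts from 1, while the decimal sum stays exact):

double dSum = 0;
decimal mSum = 0m;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;   // 0.1 has no exact binary representation, so error accumulates
    mSum += 0.1m;  // 0.1m is stored exactly in decimal
}
Console.WriteLine(dSum); // e.g. 0.9999999999999999
Console.WriteLine(mSum); // 1.0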

The star
#5 · 2019-01-12 05:55

Every literal is treated as having a type. If you do not choose the 'M' suffix, it is treated as a double. That you cannot implicitly convert a double to a decimal is quite understandable, as the conversion can lose precision.
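
A quick sketch showing that the suffix, not the target variable, fixes the literal's type:

double  d = 10.0;   // no suffix: double literal
float   f = 10.0F;  // F suffix: float literal
decimal m = 10.0M;  // M suffix: decimal literal
// float bad = 10.0; // CS0664: a double literal won't implicitly convert to float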

混吃等死
#6 · 2019-01-12 05:58

The type of a literal without the m suffix is double - it's as simple as that. You can't initialize a float that way either:

float x = 10.0; // Fail

The type of the literal should be made clear from the literal itself, and the type of variable it's assigned to should be assignable to from the type of that literal. So your second example works because there's an implicit conversion from int (the type of the literal) to decimal. There's no implicit conversion from double to decimal (as it can lose information).
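
A sketch of that two-step rule, using var so the literal's own type is visible (names are mine):

var a = 10;    // int literal, so a is int
var b = 10.0;  // double literal, so b is double
var c = 10.0m; // decimal literal, so c is decimal
decimal ok = 10;      // compiles: implicit int -> decimal exists
// decimal no = 10.0; // CS0664: no implicit double -> decimal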

Personally I'd have preferred it if there'd been no default or if the default had been decimal, but that's a different matter...

再贱就再见
#7 · 2019-01-12 05:59

From http://msdn.microsoft.com/en-us/library/364x0z75.aspx: There is no implicit conversion between floating-point types and the decimal type; therefore, a cast must be used to convert between these two types.

They do this because double has such a huge range, ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸, whereas int is only -2,147,483,648 to 2,147,483,647. A decimal's range is (-7.9 × 10²⁸ to 7.9 × 10²⁸) / (10⁰ to 10²⁸), so it can hold any int but not any double.
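
To see the range point in action, a small sketch (1e300 is an arbitrary value chosen to be far beyond decimal's ceiling):

decimal fromInt = int.MaxValue;  // every int fits in decimal
double huge = 1e300;             // legal double, far above decimal's ~7.9 × 10²⁸ maximum
try
{
    decimal d = (decimal)huge;   // the explicit cast compiles...
}
catch (OverflowException)
{
    Console.WriteLine("value was outside decimal's range"); // ...but it overflows at runtime
}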
