Question:
Given the following:
// not a problem
int i = 2, j = 3;
so it surprises me that this:
// compiler error: Implicitly-typed local variables cannot have multiple declarators
var i = 2, j = 3;
doesn't compile. Maybe there's something I don't understand here (which is why I'm asking)?
But why wouldn't the compiler realize that I meant:
var i = 2;
var j = 3;
which WOULD compile.
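(A side note, sketched under the assumption of C# 7.0 or later: tuple deconstruction gives a one-statement way to declare several implicitly typed locals, with each type inferred independently:)
var (i, j) = (2, 3);    // i and j are both inferred as int
var (a, b) = (2, 3.4);  // a is inferred as int, b as double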
Answer 1:
It's just another point of possible confusion for the programmer and the compiler.
For example this is fine:
double i = 2, j = 3.4;
but what does this mean?
var i = 2, j = 3.4;
Syntactic sugar like this is a headache no one needs, so I doubt your case will ever be supported. It would require the compiler to be a little too clever.
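To spell out that ambiguity, here are the two readings a compiler could plausibly take, written as separate explicit alternatives (a sketch; the var form itself is rejected):
// Reading 1: each declarator inferred independently
int i = 2;
double j = 3.4;
// Reading 2: one common best type for the whole statement
double i = 2, j = 3.4;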
Answer 2:
When we designed the feature I asked the community what
var x = 1, y = 1.2;
should mean. The question and answers are here:
http://blogs.msdn.com/b/ericlippert/archive/2006/06/26/what-are-the-semantics-of-multiple-implicitly-typed-declarations-part-one.aspx
http://blogs.msdn.com/b/ericlippert/archive/2006/06/27/what-are-the-semantics-of-multiple-implicitly-typed-declarations-part-two.aspx
Briefly, about half the respondents said that the obviously correct thing to do was to make x and y both double, and about half the respondents said that the obviously correct thing to do was to make x int and y double.
(The language committee specified that it should be "double", and I actually implemented the code that way long before we shipped. We used the same type inference algorithm as we do for implicitly typed arrays, where all the expressions must be convertible to a best element type.)
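For comparison, a sketch of that implicitly typed array inference (this part did ship):
// Every element must convert to a single best type; here 2 converts to double.
var a = new[] { 2, 3.4 };        // a is double[]
// With no best element type, the array form is rejected too:
// var b = new[] { 2, "three" }; // error: no best type found for implicitly-typed array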
When half your customer base thinks that one thing is "obviously correct" and the other half believes that the opposite is "obviously correct" then you have a big design problem on your hands. The solution was to make the whole thing illegal and avoid the problem.
Answer 3:
Because if this worked:
var i = 2, j = 3;
because this works:
var i = 2;
var j = 3;
then you might expect this to work:
var i = 2, j = "3";
because this works:
var i = 2;
var j = "3";
Even in the case posited by James Gaunt, where they are both numeric types and could be stored in a value of the same type, what type would i be?
var i = 2, j = 3.4;
j is obviously a double, but i could logically be either an int or a double, depending on how you expected var to infer the types. Whichever way it was implemented, you'd confuse the people who expected it to work the other way.
To avoid all this confusion, it's simply disallowed. I don't see it as a big loss, personally; if you want to declare a list of variables (which is itself pretty rare in my working experience), just strongly type them.
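A minimal sketch of that explicit form; with explicit types, multiple declarators in one statement remain legal:
double i = 2, j = 3.4;  // both doubles; the literal 2 is converted
int m = 2, n = 3;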
Answer 4:
I think it's just too iffy. When the two variables are the same type it's an easy specific case, but in the more general case you'd have to consider what is "correct" in code like:
var x = new object(), y = "Hello!", z = 5;
Should those all be typed as object, since that's the only type they all have in common? Or should x be object, y be string, and z be int?
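Written out explicitly, the two candidate interpretations would be (names suffixed to keep the alternatives distinct):
// Interpretation 1: one common type, object, for all three
object x = new object(), y = "Hello!", z = 5;  // z is boxed
// Interpretation 2: the most specific type for each variable
object x2 = new object();
string y2 = "Hello!";
int z2 = 5;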
On the one hand you might think the former, since variables declared in this way (all on one line) are usually presumed to all be the same type. On the other hand perhaps you'd think the latter, since the var keyword is typically supposed to get the compiler to infer the most specific type for you.
Better to just prohibit this altogether than bother working out exactly how it should behave, given that it would not exactly be a "killer" feature anyway.
That's my opinion/guess, at least.
Answer 5:
I think that's because, to the compiler, it could just as well be:
var i = 2, j = "three";
And those two surely aren't the same type.