It is possible in C# to add a variance annotation to a type parameter that is constrained to be a value type:
interface IFoo<in T> where T : struct
{
    void Boo(T x);
}
Why is this allowed by the compiler when the variance annotation makes no sense at all in such a situation?
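For comparison, here is a minimal sketch (IBar is a hypothetical unconstrained counterpart of IFoo) of the contravariant conversion that the struct constraint rules out:

interface IBar<in T> { void Boo(T x); }

class Demo
{
    static void Main()
    {
        // With reference types, contravariance makes this conversion legal:
        IBar<object> general = null;
        IBar<string> specific = general; // OK: T is contravariant

        // Under 'where T : struct' every T is a value type, and variant
        // conversions require reference conversions, so no such conversion
        // between IFoo<T1> and IFoo<T2> can ever exist.
    }
}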
It's allowed by the compiler because I never even considered that someone might try to do that when I added the variance rules to the C# 4.0 compiler.
Compiler warnings and errors are features, and in order for a feature to be implemented, it has to, at a bare minimum, be thought of at some point before you ship your compiler. I failed to do so, and therefore never had the opportunity to even debate whether there ought to be a warning for such a situation.
Now that you've brought it to my attention, the question is: should it be a feature? Should the compiler produce a warning (or error) for this case?
That's a judgment call. A number of things we'd consider are:
Is the code the sort of thing someone might type in thinking it does something sensible? One hopes not; one hopes that the developer who knows enough about the type system to make an interface variant also knows that variance only works on reference types. But maybe there are developers out there who might type this in thinking that it will work. It doesn't seem beyond plausibility, at least; it's not clearly contrived. (A sketch of the conversion such a developer might expect appears below.)
Is the code clearly wrong? Yes, it probably is. It seems very unlikely that someone deliberately wants to write an interface that looks variant but in fact is not.
And so on.
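To make that concrete, here is a sketch (the class and variable names are invented) of what such a developer might try; the commented-out line is the conversion they might expect to work:

// The interface from the question:
interface IFoo<in T> where T : struct
{
    void Boo(T x);
}

class Sketch
{
    static void Main()
    {
        IFoo<long> fooLong = null;
        // One might hope that, because int implicitly converts to long,
        // contravariance would make the next line legal. It does not:
        // variant conversions require reference conversions, and there is
        // no reference conversion between the value types int and long.
        // IFoo<int> fooInt = fooLong; // compile-time error: no conversion
    }
}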
I'll have to give it more thought, but at first glance it looks like this actually might be a decent warning to add to the compiler. I'll talk it over with the team and we'll consider adding it to the Roslyn version.
Thanks for the idea!
It is allowed simply because it is legal code. There is absolutely no harm in it. Yes, you cannot use the contravariant conversion, but I fail to see the problem. Nothing in the code is misleading, and there is no twisted gotcha hiding in it.
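For instance, here is a minimal sketch (IntFoo is an invented implementation) showing that the interface works perfectly well when used non-variantly:

using System;

// The interface from the question:
interface IFoo<in T> where T : struct
{
    void Boo(T x);
}

class IntFoo : IFoo<int>
{
    public void Boo(int x) => Console.WriteLine(x);
}

class Program
{
    static void Main()
    {
        IFoo<int> foo = new IntFoo();
        foo.Boo(42); // works fine; the 'in' annotation is simply inert
    }
}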
I simply think the compiler doesn't check whether T is a value type or a reference type when checking for variance validity. It stands to reason that the C# team assumed that anyone using generic interface variance would know that using it with value types is pointless, and that it is in any case harmless.
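For what it's worth, a quick sketch (IOk and IBad are invented names) suggests the validity check is purely positional, independent of any constraint:

// Accepted: T occurs only in input positions, so the variance check passes;
// the struct constraint is not consulted at all.
interface IOk<in T> where T : struct
{
    void Boo(T x);
}

// Rejected: T in an output position triggers the usual invalid-variance
// error, again regardless of the constraint.
// interface IBad<in T> where T : struct
// {
//     T Boo();
// }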