Given the following companion object with overloaded versions of apply:
object List {
  def apply[T](): List[T] = new Nil
  def apply[T](x1: T): List[T] = new Cons(x1, new Nil)
  def apply[T](x1: T, x2: T): List[T] = new Cons(x1, new Cons(x2, new Nil))
  def apply[T](elems: T*): List[T] =
    elems.foldRight(List[T]())((elem, l) => new Cons(elem, l))
}
And the two instantiations
List(1) // Error - Ambiguity
List('a', 'b') // Works fine
scalac complains about the first instantiation (ambiguous reference to overloaded definition) because both the single argument and the varargs method are equally specific.
Searching Stack Overflow, I've found that it is possible to force the single-argument method: List[Int](1) will make the compiler use def apply[T](x1: T).
My question is: why does the second instantiation match def apply[T](x1: T, x2: T) without extra "hints"? In other words, why is the two-argument method more specific than the varargs method when the single-argument method isn't?
The fixed-arity method is always more specific than the var-arity one. f(P1, P2) doesn't apply to (a, b, c, ...), which is how you can think of f(P*). Conversely, though, f(P*) takes the shape f(p1, .., pn) for purposes of applicability to N args. So it always applies, and is not as specific as the fixed-arity method.

So that's the normal reason your f(a, b) is more specific than f(P*).

For the one-arg case, it depends on what you pick for the type param. f[A](a: A) does apply to (a, b, ...) by tupling, taking A as a Tuple. By saying A = Int, then obviously A can't be taken as a Tuple.
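The fixed-arity-wins rule is easy to check with a small sketch (object and method names are mine, not from the question):

```scala
object Demo {
  // fixed arity: applies only to exactly two arguments
  def f(a: Int, b: Int): String = "fixed"
  // var-arity: applies to any number of arguments
  def f(xs: Int*): String = "varargs"
}

// Demo.f(1, 2)    resolves to the fixed-arity overload: "fixed"
// Demo.f(1, 2, 3) only the var-arity overload applies:  "varargs"
```

This is exactly why List('a', 'b') in the question resolves without complaint.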
A sample of confusion about var-arity and how it affects specificity: https://issues.scala-lang.org/browse/SI-4728
You might want to post this question on stackoverload.com, the site where the overloading specialists gather. Your browser may be redirected to overloading-is-evil.com.
To answer your question we need to have a look at what happens when the Scala compiler has to perform overloading resolution. This is described in SLS 6.26.3 (for Scala 2.9).
Let's take a slightly simpler version of your example:

object Test {
  def apply[T](x1: T) = "one arg"             // A
  def apply[T](x1: T, x2: T) = "two args"     // B
  def apply[T](elems: T*) = "var-args"        // C
}

And look at these three calls:

Test(1)
Test(1, 2)
Test[Int](1)
Let's start with the first one. First, the compiler looks at the shape of each argument, which is a type that describes, basically, whether the argument is a value or a function. Here, no difficulty: 1 is a very normal, boring value, and its shape is the type Nothing.

Now it has a single argument 1, of type Nothing, and finds all alternatives that are applicable to it. It finds two of them:

- apply[T](x1: T): it takes a single argument of unbounded type, so it can receive an argument of type Nothing;
- apply[T](elems: T*): it can be applied to any number (0 included) of arguments of the same unbounded type, so it can receive a single element of type Nothing.

If there were only one, it would stop there and choose that one.
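That short-circuit can be seen with a hypothetical pair of overloads (names are mine): with three arguments, only the var-args alternative is applicable, so resolution stops there without ever comparing specificity:

```scala
object OneOrMany {
  def apply[T](x1: T): String = "one arg"
  def apply[T](elems: T*): String = "var-args"
}

// OneOrMany(1, 2, 3) only the var-args alternative applies: "var-args"
// OneOrMany(1)       would not compile: both alternatives survive every step (ambiguous)
```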
The second step is identical to the above, except this time it types each argument with an undefined expected type. Basically, here it looks at the two alternatives left and finds out whether they are applicable to the argument 1, of type Int. No luck: they both are. If you were to change apply[T](x1: T) to apply(x1: String) and leave the other one alone, there would only be one applicable alternative left, and resolution would succeed and stop.

Then the compiler computes the relative weight of each remaining alternative over each other. The SLS states that the relative weight of an alternative A over an alternative B is a number from 0 to 2, defined as the sum of:

- 1 if A is as specific as B, 0 otherwise, and
- 1 if A is defined in a class or object which is derived from the class or object defining B, 0 otherwise.

At this point, there must be one alternative that has a higher score than all others, or there is an ambiguity error. We can ignore the "defined" part: they are all defined in the same place.
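For instance, with the apply(x1: String) change mentioned above, the second step really does leave a single alternative (sketch, object name is mine):

```scala
object Strict {
  def apply(x1: String): String = "String arg"
  def apply[T](elems: T*): String = "var-args"
}

// Strict(1)   an Int is not a String, so only the var-args alternative applies: "var-args"
// Strict("a") both apply, but the fixed-arity alternative is more specific:     "String arg"
```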
- A is as specific as C, because you can always call C with the single argument of A.
- C is as specific as A, because of type inference: you can always call A with the argument of C, since A can take anything (its parameter's type can be inferred to be whatever we want). C's parameter is seen as a Seq[A], so T is inferred as Seq[A] in A, and it can be called. So C is as specific as A.

This can be seen if you change A to apply[T <: Int](x: T): it goes all the way to looking for the most specific one, but this time type inference cannot find a way to make A applicable to C's argument (a Seq), because it isn't a subtype of Int, so A is the most specific. Obviously, the same thing happens if you change A to apply(x: Int); type inference can't even do anything. This also explains why
Test[Int](1) succeeds in calling the one-argument version. The two final alternatives are identical, but A's type parameter has been bound to Int, and type inference cannot change it to fit C's argument anymore.

Finally, applying the same logic gives you why Test(1, 2) works fine:

- B is as specific as C: you can always call C with B's arguments;
- C is not as specific as B: no amount of type inference will manage to fit a single Seq into a method that takes two parameters.

So apply[T](x1: T, x2: T) is the most specific, and there are no errors.

Basically, for a var-arg and a normal method to produce an ambiguity, they need to have the same number of arguments and a way to trick type inference on (at least) the last argument:
Or
And so on...
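Following that recipe, here is a hypothetical pair that trips the same ambiguity with two arguments (names are mine, not from the original answer):

```scala
object Amb {
  def apply[T](x1: Int, x2: T): String = "fixed"
  def apply[T](x1: Int, elems: T*): String = "var-args"
}

// Amb(1, 2)      would not compile: T can be inferred as Int for the first
//                alternative, or as a Seq for the second, so each is as
//                specific as the other (ambiguous)
// Amb[Int](1, 2) binding T to Int removes the trick: "fixed"
```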
Edit: I wasn't sure at first whether the repeated parameter was seen as a Seq[A] or a TupleX[...] when looking for specificity. It definitely isn't a tuple, and auto-tupling has nothing to do with any of this.