Given all three functions, this call is ambiguous.

```cpp
int f( int );
int f( int && );
int f( int const & );

int q = f( 3 );
```
Removing `f( int )` causes both Clang and GCC to prefer the rvalue reference over the lvalue reference. But removing either reference overload instead results in ambiguity with `f( int )`.
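For example, keeping only the two reference overloads, a minimal test like the following compiles on both compilers, with the rvalue reference winning for `f( 3 )` (the `main` and printouts here are my own sketch):

```cpp
#include <iostream>

int f( int && )      { return 1; }  // chosen for rvalue arguments
int f( int const & ) { return 2; }  // chosen for lvalue arguments

int main()
{
    int x = 0;
    std::cout << f( 3 ) << '\n';  // prints 1: the rvalue 3 binds to int&&
    std::cout << f( x ) << '\n';  // prints 2: an lvalue can only bind to int const&
}
```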
Overload resolution is usually done in terms of a strict partial ordering, but `int` seems to be equivalent to two things which are not equivalent to each other. What are the rules here? I seem to recall a defect report about this.
Is there any chance `int &&` may be preferred over `int` in a future standard? The reference must bind to an initializer, whereas the object type is not so constrained. So overloading between `T` and `T &&` could effectively mean "use the existing object if I've been given ownership, otherwise make a copy." (This is similar to pure pass-by-value, but saves the overhead of moving.) As these compilers currently work, this must be done by overloading `T const &` and `T &&` and copying explicitly. But I'm not sure even that is strictly standard.
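To make that concrete, here is a sketch of the `T const &` / `T &&` idiom with a hypothetical sink type (the `Holder` name and its `set` members are illustrative, not from any library):

```cpp
#include <string>
#include <utility>

// Hypothetical sink type demonstrating the overload pair.
struct Holder {
    std::string value;

    // Caller keeps ownership: copy explicitly.
    void set( std::string const &s ) { value = s; }

    // Caller hands over ownership: reuse the existing object.
    void set( std::string &&s ) { value = std::move( s ); }
};

int main()
{
    Holder h;
    std::string name = "example";
    h.set( name );               // copies; name is still valid afterwards
    h.set( std::move( name ) );  // moves; no copy is made
}
```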
As there is only one parameter, the rule is that one of the three viable initializations of that parameter must be a better match than both of the other two. When two initializations are compared, either one is better than the other, or neither is better (they are indistinguishable).
Without special rules about direct reference binding, all three initializations mentioned would be indistinguishable (in all three comparisons).
The special rules about direct reference binding make `int&&` better than `const int&`, but neither is better or worse than `int`. Therefore there is no best match:

`int&&` is better than `const int&` because of 13.3.3.2:

> Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if ... S1 and S2 are reference bindings (8.5.3) and neither refers to an implicit object parameter of a non-static member function declared without a ref-qualifier, and S1 binds an rvalue reference to an rvalue and S2 binds an lvalue reference.

But this rule does not apply when one of the initializations is not a reference binding.
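Putting the three pairwise comparisons together for `f( 3 )` (the annotations are mine):

```cpp
int f( int );          // (1) by value
int f( int && );       // (2) rvalue reference
int f( int const & );  // (3) lvalue reference to const

// For the call f( 3 ):
//   (2) vs (3): (2) is better, since both are reference bindings and the
//               rvalue reference binds to an rvalue (13.3.3.2)
//   (1) vs (2): indistinguishable, because (1) is not a reference binding
//   (1) vs (3): indistinguishable, for the same reason
// No overload is better than both of the others, so the call is ambiguous.
```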
You propose to make a reference binding a better match than a non-reference binding. Why not post your idea to the isocpp future proposals list? SO is not the best place for subjective discussion / opinion.