I'm learning/experimenting with Rust, and for all the elegance I find in this language, there is one peculiarity that baffles me and seems totally out of place.

Rust automatically dereferences pointers when making method calls. I made some tests to determine the exact behaviour:
```rust
struct X { val: i32 }
impl std::ops::Deref for X {
    type Target = i32;
    fn deref(&self) -> &i32 { &self.val }
}

trait M { fn m(self); }
impl M for i32 { fn m(self) { println!("i32::m()"); } }
impl M for X { fn m(self) { println!("X::m()"); } }
impl<'a> M for &'a X { fn m(self) { println!("&X::m()"); } }
impl<'a, 'b> M for &'a &'b X { fn m(self) { println!("&&X::m()"); } }
impl<'a, 'b, 'c> M for &'a &'b &'c X { fn m(self) { println!("&&&X::m()"); } }

trait RefM { fn refm(&self); }
impl RefM for i32 { fn refm(&self) { println!("i32::refm()"); } }
impl RefM for X { fn refm(&self) { println!("X::refm()"); } }
impl<'a> RefM for &'a X { fn refm(&self) { println!("&X::refm()"); } }
impl<'a, 'b> RefM for &'a &'b X { fn refm(&self) { println!("&&X::refm()"); } }
impl<'a, 'b, 'c> RefM for &'a &'b &'c X { fn refm(&self) { println!("&&&X::refm()"); } }

struct Y { val: i32 }
impl std::ops::Deref for Y {
    type Target = i32;
    fn deref(&self) -> &i32 { &self.val }
}

struct Z { val: Y }
impl std::ops::Deref for Z {
    type Target = Y;
    fn deref(&self) -> &Y { &self.val }
}

struct A;
// Copy has Clone as a supertrait, so a manual Copy impl needs Clone too.
impl Clone for A { fn clone(&self) -> A { A } }
impl std::marker::Copy for A {}
impl M for A { fn m(self) { println!("A::m()"); } }
impl<'a, 'b, 'c> M for &'a &'b &'c A { fn m(self) { println!("&&&A::m()"); } }
impl RefM for A { fn refm(&self) { println!("A::refm()"); } }
impl<'a, 'b, 'c> RefM for &'a &'b &'c A { fn refm(&self) { println!("&&&A::refm()"); } }

fn main() {
    // I'll use @ to denote the left side of the dot operator
    (*X{val:42}).m();        // i32::m()    , self == @
    X{val:42}.m();           // X::m()      , self == @
    (&X{val:42}).m();        // &X::m()     , self == @
    (&&X{val:42}).m();       // &&X::m()    , self == @
    (&&&X{val:42}).m();      // &&&X::m()   , self == @
    (&&&&X{val:42}).m();     // &&&X::m()   , self == *@
    (&&&&&X{val:42}).m();    // &&&X::m()   , self == **@

    (*X{val:42}).refm();     // i32::refm() , self == @
    X{val:42}.refm();        // X::refm()   , self == @
    (&X{val:42}).refm();     // X::refm()   , self == *@
    (&&X{val:42}).refm();    // &X::refm()  , self == *@
    (&&&X{val:42}).refm();   // &&X::refm() , self == *@
    (&&&&X{val:42}).refm();  // &&&X::refm(), self == *@
    (&&&&&X{val:42}).refm(); // &&&X::refm(), self == **@

    Y{val:42}.refm();        // i32::refm() , self == *@
    Z{val:Y{val:42}}.refm(); // i32::refm() , self == **@

    A.m();                   // A::m()      , self == @
    // without the Copy trait, (&A).m() would be a compilation error:
    // "cannot move out of borrowed content"
    (&A).m();                // A::m()      , self == *@
    (&&A).m();               // &&&A::m()   , self == &@
    (&&&A).m();              // &&&A::m()   , self == @
    A.refm();                // A::refm()   , self == @
    (&A).refm();             // A::refm()   , self == *@
    (&&A).refm();            // A::refm()   , self == **@
    (&&&A).refm();           // &&&A::refm(), self == @
}
```
So, it seems that, more or less:

- The compiler will insert as many dereference operators as necessary to invoke a method.
- The compiler, when resolving methods declared using `&self` (call-by-reference):
  - First tries calling for a single dereference of `self`
  - Then tries calling for the exact type of `self`
  - Then, tries inserting as many dereference operators as necessary for a match
- Methods declared using `self` (call-by-value) for type `T` behave as if they were declared using `&self` (call-by-reference) for type `&T` and called on the reference to whatever is on the left side of the dot operator.
- The above rules are first tried with raw built-in dereferencing, and if there's no match, the overload with the `Deref` trait is used.

What are the exact auto-dereferencing rules? Can anyone give any formal rationale for such a design decision?
Your pseudo-code is pretty much correct. For this example, suppose we had a method call `foo.bar()` where `foo: T`. I'm going to use the fully qualified syntax (FQS) to be unambiguous about what type the method is being called with, e.g. `A::bar(foo)` or `A::bar(&***foo)`. I'm just going to write a pile of random capital letters; each one is just some arbitrary type/trait, except `T` is always the type of the original variable `foo` that the method is called on.

The core of the algorithm is:

For each "dereference step" `U` (that is, set `U = T` and then `U = *T`, ...):

1. if there's a method `bar` where the receiver type (the type of `self` in the method) matches `U` exactly, use it (a "by value method")
2. otherwise, add one auto-ref (take `&` or `&mut` of the receiver), and, if some method's receiver matches `&U`, use it (an "autorefd method")

Notably, everything considers the "receiver type" of the method, not the `Self` type of the trait, i.e. `impl ... for Foo { fn method(&self) {} }` thinks about `&Foo` when matching the method, and `fn method2(&mut self)` would think about `&mut Foo` when matching.

It is an error if there are ever multiple trait methods valid in the inner steps (that is, there can only be zero or one trait methods valid in each of 1. or 2., but there can be one valid for each: the one from 1. will be taken first), and inherent methods take precedence over trait ones. It's also an error if we get to the end of the loop without finding anything that matches. It is also an error to have recursive `Deref` implementations, which make the loop infinite (they'll hit the "recursion limit").

These rules seem to do-what-I-mean in most circumstances, although having the ability to write the unambiguous FQS form is very useful in some edge cases, and for sensible error messages for macro-generated code.
Only one auto-reference is added because `&foo` retains a strong connection to `foo` (it is the address of `foo` itself), but taking more starts to lose it: `&&foo` is the address of some temporary variable on the stack that stores `&foo`.

## Examples
Suppose we have a call `foo.refm()`; if `foo` has type:

- `X`, then we start with `U = X`; `refm` has receiver type `&...`, so step 1 doesn't match; taking an auto-ref gives us `&X`, and this does match (with `Self = X`), so the call is `RefM::refm(&foo)`
- `&X`, starts with `U = &X`, which matches `&self` in the first step (with `Self = X`), and so the call is `RefM::refm(foo)`
- `&&&&&X`, this doesn't match either step (the trait isn't implemented for `&&&&X` or `&&&&&X`), so we dereference once to get `U = &&&&X`, which matches step 1 (with `Self = &&&X`) and the call is `RefM::refm(*foo)`
- `Z`, doesn't match either step, so it is dereferenced once, to get `Y`, which also doesn't match, so it's dereferenced again, to get `X`, which doesn't match step 1, but does match after autorefing, so the call is `RefM::refm(&**foo)`.
- `&&A`, step 1 doesn't match and neither does step 2, since the trait is not implemented for `&A` (for 1) or `&&A` (for 2), so it is dereferenced to `&A`, which matches step 1, with `Self = A`
Suppose we have `foo.m()`, and that `A` isn't `Copy`; if `foo` has type:

- `A`, then `U = A` matches `self` directly, so the call is `M::m(foo)` with `Self = A`
- `&A`, then step 1 doesn't match, and neither does step 2 (neither `&A` nor `&&A` implements the trait), so it is dereferenced to `A`, which does match; but `M::m(*foo)` requires taking `A` by value and hence moving out of `foo`, hence the error.
- `&&A`, step 1 doesn't match, but autorefing gives `&&&A`, which does match, so the call is `M::m(&foo)` with `Self = &&&A`.
(This answer is based on the code, and is reasonably close to the (slightly outdated) README. Niko Matsakis, the main author of this part of the compiler/language, also glanced over this answer.)