There is a new attribute in Swift 1.2 for closure parameters in functions, and as the documentation says:
This indicates that the parameter is only ever called (or passed as an @noescape parameter in a call), which means that it cannot outlive the lifetime of the call.
In my understanding, before this, we could use [weak self] to keep the closure from holding a strong reference to, for example, its class, so that self could be either nil or the instance by the time the closure was executed. But now, @noescape means that the closure will never be executed if the class is deinitialized. Do I understand it correctly?
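The pattern I mean is something like this (a rough sketch; the class and the asynchronous helper are just made up for illustration):

import Foundation

class Loader {
    var data: String?

    func load() {
        fetchAsync { [weak self] result in
            // self may already be nil here if the Loader instance
            // was deinitialized before the closure ran
            self?.data = result
        }
    }

    // hypothetical asynchronous helper, only here for the example
    func fetchAsync(completion: (String) -> ()) {
        dispatch_async(dispatch_get_main_queue()) {
            completion("done")
        }
    }
}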
And if I'm correct, why would I use a @noescape closure instead of a regular function, when they behave very similarly?
@noescape can be used like this:
func doIt(@noescape code: () -> ()) {
    /* what we CAN do */

    // just call it
    code()

    // pass it to another function as another `@noescape` parameter
    doItMore(code)

    // capture it in another `@noescape` closure
    doItMore {
        code()
    }

    /* what we CANNOT do

    // pass it as a non-`@noescape` parameter
    dispatch_async(dispatch_get_main_queue(), code)

    // store it
    let _code: () -> () = code

    // capture it in another non-`@noescape` closure
    let __code = { code() }

    */
}

func doItMore(@noescape code: () -> ()) {}
Adding @noescape guarantees that the closure will not be stored anywhere, used at a later time, or used asynchronously. From the caller's point of view, there is no need to care about the lifetime of captured variables, as they are used within the called function or not at all. And as a bonus, we can use an implicit self, saving us from typing "self." in the closure.
func doIt(@noescape code: () -> ()) {
    code()
}

class Bar {
    var i = 0
    func some() {
        doIt {
            println(i)
            // ^ we don't need `self.` anymore!
        }
    }
}
let bar = Bar()
bar.some() // -> outputs 0
Also, from the compiler's point of view (as documented in the release notes):
This enables some minor performance optimizations.
One way to think about it is that EVERY variable captured inside the @noescape block doesn't need to be strong (not just self).
There are also optimizations possible. Once a variable is allocated and then captured in a block, it can't simply be deallocated at the end of the function, so it must be allocated on the heap and released through ARC. In Objective-C, you have to use the "__block" keyword to ensure that the variable is created in a block-friendly way. Swift detects this automatically, so the keyword isn't needed, but the cost is the same.
If the variables are being passed to a @noescape block, then they can be stack variables and don't need ARC to deallocate them. They don't even need to be zeroing weak references (which are more expensive than unsafe pointers), since they are guaranteed to stay alive for the life of the block.
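As a rough illustration (a sketch only; the function names are made up, and whether the optimizer actually keeps the variable on the stack is up to the compiler), compare a capture that escapes with one that doesn't:

var storedClosure: (() -> ())?

func runEscaping(code: () -> ()) {
    // in Swift 1.2 a plain closure parameter may escape,
    // so it is legal to store it for later
    storedClosure = code
}

func runNoescape(@noescape code: () -> ()) {
    // `code` cannot outlive this call, so anything it captures
    // only has to stay alive for the duration of the call
    code()
}

func escapingExample() {
    var counter = 0
    // `counter` is captured by a closure that may outlive this function,
    // so it has to live in a heap-allocated box managed by ARC
    // (the equivalent of Objective-C's __block storage)
    runEscaping { counter += 1 }
}

func noescapeExample() {
    var counter = 0
    // the compiler knows the closure dies before this function returns,
    // so `counter` could in principle stay on the stack
    runNoescape { counter += 1 }
    println(counter) // -> 1
}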
All of this results in faster and more optimal code, and it reduces the overhead of using @autoclosure blocks (which are very useful).
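For reference, a typical @autoclosure use looks something like this (a quick sketch; the function name is just illustrative). In Swift 1.2 the @autoclosure attribute moved onto the parameter and implies @noescape, so the wrapped expression can only be evaluated during the call:

func require(@autoclosure condition: () -> Bool, _ message: String) {
    // the condition is wrapped in a closure and evaluated lazily,
    // and because it is implicitly @noescape it cannot be stored
    if !condition() {
        println("Requirement failed: \(message)")
    }
}

// the expression 1 + 1 == 2 is wrapped in a closure automatically
// and only evaluated inside require()
require(1 + 1 == 2, "arithmetic is broken")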
(In reference to Michael Gray's answer above.)
I'm not sure whether this is specifically documented for Swift, or even whether the Swift compiler takes full advantage of it. But it is standard compiler design to allocate storage for an instance on the stack when the compiler knows the called function will not store a pointer to that instance on the heap, and to issue a compile-time error if the function attempts to do so.
This is particularly beneficial when passing non-scalar value types (like enums, structs, and closures), because copying them is potentially much more expensive than simply passing a pointer to stack storage. Allocating the instance on the stack is also significantly less expensive (one instruction versus a call to malloc()). So it's a double win if the compiler can make this optimization.
Again, whether or not a given version of the Swift compiler actually does this would have to be stated by the Swift team, or you'd have to read the source code when they open-source it. From the quote above about a "minor optimization", it sounds like either it doesn't, or the Swift team considers it minor. I would consider it a significant optimization.
Presumably the attribute is there so that (at least in the future) the compiler will be able to perform this optimization.