Full laziness has been repeatedly demonstrated to cause space leaks.
Why is full laziness on from -O onwards? I find myself unconvinced by the reasoning in SPJ's The Implementation of Functional Programming Languages. The claim is that in
f = \y -> y + sqrt 4
the expression sqrt 4 is unnecessarily repeated each time f is entered, so we should float it outside the lambda. I agree in the small, but since we've seen what problems this transformation causes in the large, I don't believe it is worth it. It seems to me that the benefits of this transformation are obtainable unilaterally** with only local code changes, and programmers who want it should implement it by hand.
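For what it's worth, here is a sketch of the by-hand version I mean, which obtains the sharing with a purely local change (the name s is just for illustration):

-- sqrt 4 is computed once, outside the lambda, and shared by every call of f
f :: Double -> Double
f = let s = sqrt 4 in \y -> y + s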
Can you convince me otherwise? Is full laziness actually useful? I will be especially convinced if you can provide examples which, to implement by hand, require multilateral cooperation or non-local transformations.
** unlike optimizations such as inlining and stream fusion, which to implement by hand would require multilateral cooperation between modules and non-local code changes
There's at least one common case where full laziness is "safe" and an optimization.
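Take a top-level function g whose local definition f does not mention g's argument. A minimal sketch (the body of f is only a placeholder; all that matters is that it does not use z):

g :: Int -> Int
g z = f (z + 1)
  where
    -- f does not refer to z, so nothing forces it to live inside g
    f = \y -> y * 2   -- placeholder body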
This really means
g = \z -> let {f = ...} in f (z+1)
and, compiled that way, will allocate a closure for f each time g is entered, before calling it. Obviously that's silly, and the compiler should transform the program so that f becomes a top-level binding, call it g_f, and no allocation is needed to call it.
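A sketch of that floated form, keeping the same placeholder body for f:

g_f :: Int -> Int
g_f = \y -> y * 2   -- the former local f, lifted to the top level

g :: Int -> Int
g z = g_f (z + 1)   -- entering g no longer allocates a closure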
Happily, the full laziness transformation does exactly that. Obviously programmers could refrain from making these local definitions that do not depend on the arguments of the top-level function, but such definitions are generally considered good style...
Another example:
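A sketch of the kind of definition I mean (the name h and the element type are arbitrary; the point is that the lambda does not mention xs):

h :: [Int] -> [Int]
h = \xs -> map (\x -> x + 1) xs
-- the lambda does not mention xs, so full laziness can float it out of the \xs
-- and it is not allocated afresh on every call of h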
In this case you can just eta reduce, but normally you can't eta reduce. And giving the function (+1) a top-level name by hand is quite ugly.
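For instance, a hypothetical variant where eta reduction does not apply but the same floating still pays off:

h' :: [Int] -> [Int]
h' = \xs -> map (\x -> x + 1) (filter even xs)
-- the body is not a bare application to xs, so h' cannot be eta reduced,
-- yet the lambda still does not mention xs and can be floated to the top level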