Why is full-laziness a default optimization?

Posted 2019-03-25 05:38

Question:

Full laziness has been repeatedly demonstrated to cause space leaks.
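For concreteness, here is a minimal sketch of the kind of leak I mean (the function, names, and list size are all illustrative):

g :: Int -> Int
g x = length xs + x
  where xs = [1 .. 1000000 :: Int]

Since xs does not mention x, full laziness floats it to the top level, where it becomes a constant applicative form. After the first call to g forces it, the entire million-element list stays reachable for the rest of the program, instead of being built, traversed, and collected on each call.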

Why is full laziness enabled from -O onwards? I find myself unconvinced by the reasoning in SPJ's The Implementation of Functional Programming Languages. The claim is that in

f = \y -> y + sqrt 4

sqrt 4 is unnecessarily recomputed each time f is entered, so we should float it outside the lambda. I agree in the small, but given the problems this transformation causes in the large, I don't believe it is worth it. It seems to me that the benefits of this transformation are obtainable unilaterally** with only local code changes, and that programmers who want it should implement it by hand.
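For reference, the floated form looks something like this (lvl is a hypothetical compiler-generated name, and Double is chosen just for concreteness):

lvl :: Double
lvl = sqrt 4

f :: Double -> Double
f = \y -> y + lvl

Now sqrt 4 is computed at most once and shared across all calls to f, which is exactly the local, by-hand rewrite I have in mind.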

Can you convince me otherwise? Is full laziness actually useful in practice? I will be especially convinced by examples which, to implement by hand, would require multilateral cooperation or non-local transformations.

** unlike optimizations such as inlining and stream fusion, which to implement by hand would require multilateral cooperation between modules and non-local code changes

Answer 1:

There's at least one common case where full laziness is "safe" and an optimization.

g :: Int -> Int
g z = f (z+1)
  where f 0 = 0
        f y = 1 + f (y-1)

This really means g = \z -> let { f = ... } in f (z+1), and, compiled that way, g allocates a closure for f on every call before using it. Obviously that's silly, and the compiler should transform the program into

g_f 0 = 0
g_f y = 1 + g_f (y-1)
g z = g_f (z+1)

where no allocation is needed to call g_f. Happily the full laziness transformation does exactly that.

Obviously programmers could refrain from making these local definitions that do not depend on the arguments of the top-level function, but such definitions are generally considered good style...

Another example:

h :: [Int] -> [Int]
h xs = map (+1) xs

In this case you could simply eta-reduce to h = map (+1), but in general eta reduction isn't possible, and giving (+1) a top-level name by hand just to share it is quite ugly.
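To illustrate, here is a sketch of a variant where eta reduction is impossible (the filter even step is arbitrary, just to make xs occur under other structure):

h' :: [Int] -> [Int]
h' xs = map (+1) (filter even xs)

Since map (+1) does not mention xs, full laziness can float that partial application to the top level, so its closure is built once rather than on every call to h', with no hand-written top-level name for (+1) required.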



Tags: haskell ghc