I can't figure out why the following code makes fib
run in linear rather than exponential time.

    import functools

    def memoize(obj):
        """Memoization decorator from PythonDecoratorLibrary. Ignores **kwargs"""
        cache = obj.cache = {}

        @functools.wraps(obj)
        def memoizer(*args, **kwargs):
            if args not in cache:
                cache[args] = obj(*args, **kwargs)
            return cache[args]
        return memoizer

    @memoize
    def fib(n):
        return n if n in (0, 1) else fib(n-1) + fib(n-2)

For example, fib(100) doesn't completely blow up like I expected it to.
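For instance, a quick timing check along these lines (a rough sketch, using the decorated fib above) finishes essentially instantly:

    import time

    start = time.perf_counter()
    print(fib(100))                      # 354224848179261915075
    print(time.perf_counter() - start)   # a tiny fraction of a second, nowhere near an exponential blow-up
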
My understanding is that @memoize sets fib = memoize(fib). So when you call fib(100), seeing that 100 is not in the cache, it will call obj on 100. But obj is the original fib function, so shouldn't it take just as long (on the first evaluation) as if we didn't have memoization at all?
Names are resolved lexically. Just because you call a function named fib from within a function named fib, that doesn't mean it would necessarily be the same fib. A (highly inaccurate) demonstration of what's going on is this:
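Something along these lines, where plain_fib is just an illustrative name for the undecorated function (not a name from the original code):

    # Uses the memoize decorator defined in the question.
    def plain_fib(n):
        # The body refers to the *global* name "fib", which is looked up each
        # time this line runs -- not to this particular function object.
        return n if n in (0, 1) else fib(n-1) + fib(n-2)

    # This is essentially all that @memoize does: rebind the global name "fib"
    # to the memoized wrapper, so the recursive calls above go through the cache.
    fib = memoize(plain_fib)
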
Since the decorator affects globals, you get the decorated fib at the time the recursive call occurs.

"But obj is the original fib function, so shouldn't it take just as long (on the first evaluation) as if we didn't have memoization at all?"
obj (in memoizer) is indeed the original fib function. The trick is that when fib calls itself recursively, it's calling memoize(fib).
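In effect, the recursion behaves as if it had been written like this (just a sketch; wrapped is an illustrative name, not something defined in the code above):

    # Illustrative only: "wrapped" stands for the memoized wrapper that the
    # global name "fib" is bound to after decoration.
    def fib(n):
        return n if n in (0, 1) else wrapped(n-1) + wrapped(n-2)
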
Here wrapped is the memoizer closure returned by memoize, with functools.wraps making it look like the original fib. Kinda.

The recursive calls mostly end up being simple lookups in obj.cache (theoretically O(1)), which is what improves the performance so significantly.

obj in the decorator is indeed the wrapped, unmodified, non-memoized function. However, when that function tries to recurse, it looks up the global name fib, gets the memoized wrapper function, and hence also causes the 99th, 98th, ... Fibonacci numbers to be memoized along the way.
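To see this concretely, you can inspect the cache after a single call (a quick sketch using the question's definitions; the cache is reachable as fib.cache because functools.wraps also copies the wrapped function's __dict__ onto the wrapper):

    print(fib(100))          # 354224848179261915075, computed in linear rather than exponential time
    print(len(fib.cache))    # 101 -- every value from fib(0) through fib(100) was cached
    print(fib.cache[(99,)] + fib.cache[(98,)] == fib.cache[(100,)])   # True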