Python - anyone have a memoizing decorator that can handle unhashable arguments?

Published 2019-01-10 19:53

Question:

I've been using the following memoizing decorator (from the great book Python Algorithms: Mastering Basic Algorithms in the Python Language ... love it, btw).

from functools import wraps

def memo(func):
    cache = {}  # maps the args tuple to the computed result
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

The problem with this decorator is that the dictionary-based cache means that all of my arguments must be hashable.

Does anyone have an implementation (or a tweak to this one) that allows for unhashable arguments (e.g. dictionaries)?

I know that the lack of a hash value means that the question of "is this in the cache?" becomes non-trivial, but I just thought I'd ask.
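For concreteness, here is the kind of failure I run into; the total function below is just a made-up example:

>>> @memo
... def total(d):
...     return sum(d.values())
...
>>> total({'a': 1, 'b': 2})
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'dict'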

===EDITED TO GIVE CONTEXT===

I am working on a function that returns a Parnas-style "uses hierarchy" given a dictionary of module: dependencies. Here's the setup:

def uses_hierarchy(requirements):
    """
    uses_hierarchy(requirements)

    Arguments:
    requirements - a dictionary of the form {mod: list of dependencies, }

    Return value:
    A dictionary of the form {level: list of mods, ...}

    Assumptions:
    - No cyclical requirements (e.g. if a requires b, b cannot require a).
    - Any dependency not listed as a mod assumed to be level 0.

    """

    levels = dict([(mod, _level(mod, requirements))
                   for mod in requirements.iterkeys()])
    reversed = dict([(value, []) for value in levels.itervalues()])
    for k, v in levels.iteritems():
        reversed[v].append(k)
    return reversed


def _level(mod, requirements):
    if not requirements.has_key(mod):
        return 0
    dependencies = requirements[mod]
    if not dependencies:
        return 0
    else:
        return max([_level(dependency, requirements)
                    for dependency in dependencies]) + 1

So that:

>>> requirements = {'a': [],
...                 'b': [],
...                 'c': ['a'],
...                 'd': ['a','b'],
...                 'e': ['c','d'],
...                 'f': ['e']
...                 }

>>> uses_hierarchy(requirements)
{0: ['a', 'b'], 1: ['c', 'd'], 2: ['e'], 3: ['f']}

_level is the function I want to memoize to make this setup more scalable. Without memoization it recalculates the level of shared dependencies over and over (e.g. the level of 'a' is computed 8 times, I think, in the example above).
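For reference, here is a sketch of one workaround that avoids hashing requirements altogether - keep a per-call cache keyed on mod inside uses_hierarchy. It is not the general-purpose decorator I'm after, just an illustration of what memoizing _level buys:

def uses_hierarchy_memoized(requirements):
    # Same contract as uses_hierarchy above, with _level cached per call (sketch).
    cache = {}  # mod -> level; requirements does not change within one call

    def _level(mod):
        if mod not in cache:
            dependencies = requirements.get(mod, [])
            if not dependencies:
                cache[mod] = 0
            else:
                cache[mod] = max(_level(dep) for dep in dependencies) + 1
        return cache[mod]

    levels = dict((mod, _level(mod)) for mod in requirements)
    hierarchy = {}
    for mod, level in levels.items():
        hierarchy.setdefault(level, []).append(mod)
    return hierarchy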

Thanks,

Mike

Answer 1:

Here is an example from Alex Martelli's Python Cookbook that shows how to create a memoizing decorator, using cPickle, for functions that take mutable arguments (original version):

import cPickle

class MemoizeMutable:
    def __init__(self, fn):
        self.fn = fn
        self.memo = {}
    def __call__(self, *args, **kwds):
        # Pickle the arguments to get a hashable key, even for mutable arguments.
        key = cPickle.dumps(args, 1) + cPickle.dumps(kwds, 1)
        if not self.memo.has_key(key):
            print "miss"  # DEBUG INFO
            self.memo[key] = self.fn(*args, **kwds)
        else:
            print "hit"  # DEBUG INFO

        return self.memo[key]

Here is a link.

EDIT: Using the code you have given and this memoize decorator:

_level = MemoizeMutable(_level)

requirements = {'a': [],
                'b': [],
                'c': ['a'],
                'd': ['a','b'],
                'e': ['c','d'],
                'f': ['e']
                }

print uses_hierarchy(requirements)

I was able to reproduce this:

miss
miss
hit
miss
miss
hit
miss
hit
hit
hit
miss
hit
{0: ['a', 'b'], 1: ['c', 'd'], 2: ['e'], 3: ['f']}


Answer 2:

Technically you can solve this problem by turning the dict (or list or set) into a tuple. For example:

 key = tuple(the_dict.iteritems())
 key = tuple(the_list)
 key = tuple(sorted(the_set))

 cache[key] = func( .. )

But I wouldn't do this inside memo; I'd rather change the functions you want to apply memo to - for example, instead of accepting a dict they should accept (key, value) pairs, and instead of taking lists or sets they should just take *args.
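For illustration, here is a minimal sketch of that second approach. The names total and _total_from_items are made up, and memo is the hashable-only decorator from the question:

from functools import wraps

def memo(func):
    # The question's decorator: works only for hashable arguments.
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrap

@memo
def _total_from_items(items):
    # items is a hashable tuple of (key, value) pairs.
    return sum(value for _, value in items)

def total(the_dict):
    # Thin, un-memoized adapter that freezes the dict before the cached call.
    return _total_from_items(tuple(sorted(the_dict.items())))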



Answer 3:

Since no one else has mentioned it, the Python Wiki has a Decorator Library which includes a number of memoizing decorator patterns.

My personal preference is the last one, which lets calling code simply treat the result as a lazily-evaluated property rather than a method call. But I like the implementation shown below better.

class lazy_property(object):
    '''Decorator: Enables the value of a property to be lazy-loaded.
    From Mercurial's util.propertycache

    Apply this decorator to a no-argument method of a class and you
    will be able to access the result as a lazy-loaded class property.
    The method becomes inaccessible, and the property isn't loaded
    until the first time it's called.  Repeated calls to the property
    don't re-run the function.

    This takes advantage of the override behavior of Descriptors - 
    __get__ is only called if an attribute with the same name does
    not exist.  By not setting __set__ this is a non-data descriptor,
    and "If an instance's dictionary has an entry with the same name
    as a non-data descriptor, the dictionary entry takes precedence."
     - http://users.rcn.com/python/download/Descriptor.htm

    To trigger a re-computation, 'del' the property - the value, not
    this class, will be deleted, and the value will be restored upon
    the next attempt to access the property.
    '''
    def __init__(self,func):
        self.func = func
        self.name = func.__name__
    def __get__(self, obj, type=None):
        result = self.func(obj)
        setattr(obj, self.name, result)
        return result

In the same file linked above there's also a lazy_dict decorator, which lets you treat a function as a dictionary with lazily-evaluated values.
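A quick usage sketch (the Circle class here is just a made-up example):

class Circle(object):
    def __init__(self, radius):
        self.radius = radius

    @lazy_property
    def area(self):
        print('computing area')  # runs only on the first access
        return 3.14159 * self.radius ** 2

c = Circle(2)
c.area      # prints 'computing area' and caches the result on the instance
c.area      # no recomputation: the instance attribute now shadows the descriptor
del c.area  # dropping the cached value forces recomputation on the next access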



Answer 4:

Not heavily tested, but seems to work:

from functools import wraps

def memo(func):
    cache = []     # cached results, parallel to argslist
    argslist = []  # previously seen argument tuples (may contain unhashables)
    @wraps(func)
    def wrap(*args):
        try:
            # Linear search instead of hashing, so unhashable arguments are fine.
            result = cache[argslist.index(args)]
            print 'cache hit'
            return result
        except ValueError:
            argslist.append(args)
            cache.append(func(*args))
            print 'cache miss'
            return cache[-1]
    return wrap

d1 = { 'a':3, 'b':42 }
d2 = { 'c':7, 'd':19 }
d3 = { 'e':34, 'f':67 }

@memo
def func(d):
    return sum(d.values())

print func(d1)
# cache miss
# 45
print func(d2)
# cache miss
# 26
print func(d3)
# cache miss
# 101
print func(d2)
# cache hit
# 26