Recently I've been writing a bunch of code like this:
class A:
    def __init__(self, x):
        self.x = x
        self._y = None

    def y(self):
        if self._y is None:
            self._y = big_scary_function(self.x)
        return self._y

    def z(self, i):
        return nice_easy_function(self.y(), i)
In a given class I may have a number of things working like this y, and I may have other things that use the stored pre-calculated values. Is this the best way to do things or would you recommend something different?
Note that I don't pre-calculate here because you might use an instance of A without making use of y.
I've written the sample code in Python, but I'd be interested in answers specific to other languages if relevant. Conversely I'd like to hear from Pythonistas about whether they feel this code is Pythonic or not.
First thing: this is a very common pattern in Python (there's even a cached_property descriptor class for it - in Django's django.utils.functional, and since Python 3.8 in functools).
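For reference, here is a minimal sketch of the standard-library version (Python 3.8+), with a cheap stand-in for the question's big_scary_function and a call counter to show the caching:

```python
from functools import cached_property

class A:
    def __init__(self, x):
        self.x = x
        self.calls = 0

    @cached_property
    def y(self):
        # stand-in for the question's big_scary_function
        self.calls += 1
        return self.x * self.x

a = A(3)
assert a.y == 9 and a.y == 9
assert a.calls == 1   # computed once, then served from the instance dict
del a.y               # deleting the attribute drops the cached value
assert a.y == 9 and a.calls == 2
```

Deleting the attribute is the documented way to clear the cache, which is the closest functools gets to the invalidation discussed below.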
This being said there are at least two potential issues here.
The first one is common to all 'cached property' implementations: one usually doesn't expect an attribute access to trigger heavy computation. Whether it's really an issue depends on the context (and on the near-religious opinions of the reader...)
The second issue - more specific to your example - is the traditional cache invalidation / state consistency problem: here you have y as a function of x - or at least that's what one would expect - but rebinding x will not update y accordingly. This can easily be solved in this case by making x a property too and invalidating _y in the setter, but then you have even more unexpected heavy computation happening.
In this case (and depending on the context and computation cost) I'd probably keep memoization (with invalidation) but provide a more explicit getter to make clear we might have some computation going on.
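A sketch of what that could look like, reusing the names from the question (big_scary_function gets a cheap stand-in so the snippet runs; compute_y and set_x are hypothetical names):

```python
def big_scary_function(x):
    # stand-in for the question's expensive computation
    return x * x

class A:
    def __init__(self, x):
        self._x = x
        self._y = None

    def set_x(self, x):
        self._x = x
        self._y = None  # invalidate the memoized value

    def compute_y(self):
        # the verb in the name advertises that work may happen here
        if self._y is None:
            self._y = big_scary_function(self._x)
        return self._y
```

The point is that a method call, unlike an attribute access, signals the potential cost to the caller, while the setter keeps the cached value consistent.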
Edit: I misread your code and imagined a property decorator on y - which shows how common this pattern is ;). But my remarks still make sense, especially when a "self-proclaimed Pythonista" posts an answer in favour of a computed attribute.
Edit: if you want a more or less generic "cached property with cache invalidation", here's a possible implementation (might need more testing etc):
class cached_property(object):
    """
    Descriptor that converts a method with a single self argument
    into a property cached on the instance.

    It also has a hook to allow another property's setter to
    invalidate the cache, cf. the `Square` class below for
    an example.
    """
    def __init__(self, func):
        self.func = func
        self.__doc__ = getattr(func, '__doc__')
        self.name = self.encode_name(func.__name__)

    def __get__(self, instance, type=None):
        if instance is None:
            return self
        if self.name not in instance.__dict__:
            instance.__dict__[self.name] = self.func(instance)
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        raise AttributeError("attribute is read-only")

    @classmethod
    def encode_name(cls, name):
        return "_p_cached_{}".format(name)

    @classmethod
    def clear_cached(cls, instance, *names):
        for name in names:
            cached = cls.encode_name(name)
            if cached in instance.__dict__:
                del instance.__dict__[cached]

    @classmethod
    def invalidate(cls, *names):
        def _invalidate(setter):
            def _setter(instance, value):
                cls.clear_cached(instance, *names)
                return setter(instance, value)
            _setter.__name__ = setter.__name__
            _setter.__doc__ = getattr(setter, '__doc__')
            return _setter
        return _invalidate
class Square(object):
    def __init__(self, size):
        self._size = size

    @cached_property
    def area(self):
        return self.size * self.size

    @property
    def size(self):
        return self._size

    @size.setter
    @cached_property.invalidate("area")
    def size(self, size):
        self._size = size
Not that I actually think the added cognitive overhead is worth the price - more often than not a plain inline implementation makes the code easier to understand and maintain (and doesn't require many more LOCs) - but it still might be useful if a package requires a lot of cached properties and cache invalidation.
As a self-proclaimed Pythonista, I would prefer using the property decorator in this situation:
class A:
    def __init__(self, x):
        self.x = x

    @property
    def y(self):
        if not hasattr(self, '_y'):
            self._y = big_scary_function(self.x)
        return self._y

    def z(self, i):
        return nice_easy_function(self.y, i)
Here self._y is also lazily evaluated. The property decorator allows you to refer to self.x and self.y on the same footing. That is, when working with an instance of the class, you treat both x and y as attributes, even though y is written as a method.
I've also used not hasattr(self, '_y') instead of self._y is None, which allows me to skip the self._y = None assignment in __init__. You can of course use your check here and still go with the property decorator.
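One caveat carried over from the first answer, shown here with a cheap stand-in for big_scary_function: nothing invalidates _y, so rebinding x leaves y stale:

```python
def big_scary_function(x):
    # stand-in for the expensive computation in the question
    return x * x

class A:
    def __init__(self, x):
        self.x = x

    @property
    def y(self):
        if not hasattr(self, '_y'):
            self._y = big_scary_function(self.x)
        return self._y

a = A(3)
assert a.y == 9    # attribute access, no call parentheses
a.x = 4
assert a.y == 9    # stale: _y was cached and never invalidated
```

Whether that matters depends on whether x is ever rebound after the first access to y.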
My EAFP Pythonista approach is described by the following snippet. My classes inherit _reset_attributes from WithAttributes and use it to invalidate the scary values.
class WithAttributes:
    def _reset_attributes(self, attributes):
        assert isinstance(attributes, list)
        for attribute in attributes:
            try:
                delattr(self, '_' + attribute)
            except AttributeError:
                pass

class Square(WithAttributes):
    def __init__(self, size):
        self._size = size

    @property
    def area(self):
        try:
            return self._area
        except AttributeError:
            self._area = self.size * self.size
            return self._area

    @property
    def size(self):
        return self._size

    @size.setter
    def size(self, size):
        self._size = size
        self._reset_attributes(['area'])
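As an aside, the try/except/pass in _reset_attributes can be spelled with contextlib.suppress; a sketch of the same helper, not a behaviour change:

```python
from contextlib import suppress

class WithAttributes:
    def _reset_attributes(self, attributes):
        assert isinstance(attributes, list)
        for attribute in attributes:
            # delete the cached value if present, ignore if absent
            with suppress(AttributeError):
                delattr(self, '_' + attribute)

class C(WithAttributes):
    pass

c = C()
c._area = 9
c._reset_attributes(['area'])
assert not hasattr(c, '_area')
c._reset_attributes(['area'])  # a missing attribute is silently ignored
```

suppress makes the intent (ignore exactly this exception) explicit, which is one reason to prefer it over a bare except clause.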