I would like to implement a decorator that provides per-request caching to any method, not just views. Here is an example use case.
I have a custom tag that determines if a record in a long list of records is a "favorite". In order to check if an item is a favorite, you have to query the database. Ideally, you would perform one query to get all the favorites, and then just check that cached list against each record.
One solution is to get all the favorites in the view, and then pass that set into the template, and then into each tag call.
Alternatively, the tag could perform the query itself, but only the first time it's called; the results could then be cached for subsequent calls. The upside is that you can use this tag from any template, on any view, without altering the view.
With the existing caching mechanism, you could just cache the result for 50ms and assume that would correlate to the current request. I want to make that correlation reliable.
Here is an example of the tag I currently have.
```python
@register.filter()
def is_favorite(record, request):
    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        favorites = get_favorites(request.user)
        post = request.POST.copy()
        post["get_favorites"] = favorites
        request.POST = post
    return record in favorites
```
Is there a way to get the current request object from Django, without passing it around? From a tag, I could just pass in the request, which will always exist. But I would like to use this decorator from other functions.
Is there an existing implementation of a per-request cache?
You can always do the caching manually.
Years later, here is a super hack to cache SELECT statements inside a single Django request. You need to execute the patch() method early in your request scope, such as from a piece of middleware. The patch() method replaces the Django-internal execute_sql method with a stand-in called execute_sql_cache. That method looks at the SQL to be run; if it's a SELECT statement, it checks a thread-local cache first, and only if the statement is not found in the cache does it proceed to execute the SQL. On any other type of SQL statement, it blows away the cache. There is some logic to not cache large result sets, meaning anything over 100 records. This is to preserve Django's lazy query set evaluation.
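The same mechanics can be sketched in plain Python, independent of Django's internals. Everything here is an illustrative assumption rather than Django API: `execute_sql_cache` stands in for the patched method, `run_query` stands in for the real database call, and `MAX_CACHEABLE_ROWS` models the 100-record cutoff. A thread-local dict caches SELECT results, any other statement clears it, and oversized result sets are never stored.

```python
import threading

_local = threading.local()   # per-thread storage, so concurrent requests don't share a cache
MAX_CACHEABLE_ROWS = 100     # assumption: skip caching large result sets

def execute_sql_cache(sql, run_query):
    """Stand-in for a patched execute_sql; run_query(sql) performs the real query."""
    cache = getattr(_local, "sql_cache", None)
    if cache is None:
        cache = _local.sql_cache = {}
    if sql.lstrip().lower().startswith("select"):
        if sql in cache:
            return cache[sql]          # cache hit: skip the database entirely
        rows = run_query(sql)
        if len(rows) <= MAX_CACHEABLE_ROWS:
            cache[sql] = rows          # only cache small result sets
        return rows
    # any non-SELECT may change data, so blow away the whole cache
    cache.clear()
    return run_query(sql)
```

A real implementation would hook this into the query compiler from middleware, as the answer describes; the sketch only shows the caching decision itself.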
EDIT: The eventual solution I came up with has been compiled into a PyPI package: https://pypi.org/project/django-request-cache/
A major problem that no other solution here solves is the fact that LocMemCache leaks memory when you create and destroy several of them over the life of a single process. django.core.cache.backends.locmem defines several global dictionaries that hold references to every LocMemCache instance's cache data, and those dictionaries are never emptied. The following code solves this problem. It started as a combination of @href_'s answer and the cleaner logic used by the code linked in @squarelogic.hayden's comment, which I then refined further.
EDIT 2016-06-15: I discovered a significantly simpler solution to this problem, and kind of facepalmed for not realizing how easy this should have been from the start.
With this, you can use request.cache as a cache instance that lives only as long as the request does, and will be fully cleaned up by the garbage collector when the request is done. If you need access to the request object from a context where it's not normally available, you can use one of the various implementations of a so-called "global request middleware" that can be found online.

The answer given by @href_ is great.
Just in case you want something shorter that could also potentially do the trick:
Then get the favourites by calling it like this:

This method makes use of lru_cache; by combining it with a timestamp, we make sure the cache doesn't hold anything for longer than a few seconds. If you need to call a costly function several times in a short period of time, this solves the problem.
It is not a perfect way to invalidate the cache, because it will occasionally miss on very recent data: int(..2.99.. / 3) followed by int(..3.00.. / 3). Despite this drawback, it can still be very effective for the majority of hits. As a bonus, you can use it outside request/response cycles, for example in Celery tasks or management command jobs.
I came up with a hack for caching things straight into the request object (instead of using the standard cache, which will be tied to memcached, file, database, etc.)
So, basically, I am just storing the cached value (the category object in this case) under a new key, 'c_category', in the dictionary of the request. Or, to be more precise, because we can't just create a key on the request object, I am adding the key to one of the methods of the request object, get_host().
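A hedged sketch of this pattern (the names get_category and load_category_from_db are made up for illustration, and note that in practice you can usually attach a new attribute directly to the request rather than piggybacking on get_host): compute the value on the first call per request, stash it on the request, and reuse it afterwards.

```python
def load_category_from_db(request):
    # stand-in for the real database query; the counter just makes
    # the once-per-request behaviour observable
    load_category_from_db.calls += 1
    return "books"
load_category_from_db.calls = 0

def get_category(request):
    """Return the category for this request, querying at most once per request."""
    if not hasattr(request, "_c_category"):
        request._c_category = load_category_from_db(request)
    return request._c_category
```

Because the value rides along on the request object itself, it is thrown away with the request and no standard cache backend (memcached, file, database) is involved.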
Georgy.
Using a custom middleware you can get a Django cache instance guaranteed to be cleared for each request.
This is what I used in a project:
To use the middleware, register it in settings.py, e.g.:
You may then use the cache as follows:
Refer to the Django low-level cache API documentation for more information:
Django Low-Level Cache API
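The project code referred to above is not shown in this extract. A framework-free sketch of the idea (the request_cache name and its get/set/clear API are assumptions modeled on Django's low-level cache calls): a shared cache object that the middleware wipes at the start of every request, so entries never survive from one request to the next.

```python
class SimpleCache:
    """Mimics the get/set/clear subset of Django's low-level cache API."""
    def __init__(self):
        self._data = {}
    def get(self, key, default=None):
        return self._data.get(key, default)
    def set(self, key, value):
        self._data[key] = value
    def clear(self):
        self._data.clear()

request_cache = SimpleCache()  # shared instance, like a named Django cache

class ClearCacheMiddleware:
    """Wipe the shared cache as each request comes in, so cached entries
    are guaranteed to belong to the current request only."""
    def __init__(self, get_response):
        self.get_response = get_response
    def __call__(self, request):
        request_cache.clear()
        return self.get_response(request)
```

Unlike a per-request cache instance, this reuses one cache object and relies on the middleware to clear it, which is the guarantee the answer describes.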
It should be easy to modify a memoize decorator to use the request cache. Have a look at the Python Decorator Library for a good example of a memoize decorator:
Python Decorator Library
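As a sketch of that modification (the decorator name and key scheme are made up for illustration, and the request is assumed to be the function's first argument): instead of a module-level memo dict, the decorator hangs its store off the request, so memoized results live exactly one request.

```python
import functools

def memoize_per_request(func):
    """Memoize a function whose first argument is the request, storing
    results in a plain dict hung off the request itself."""
    @functools.wraps(func)
    def wrapper(request, *args):
        store = getattr(request, "_memo", None)
        if store is None:
            store = request._memo = {}
        key = (func.__name__, args)
        if key not in store:
            store[key] = func(request, *args)   # computed once per request
        return store[key]
    return wrapper
```

A classic memoize decorator differs only in where the dict lives; moving it onto the request is what scopes the cache to a single request.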