Per-request cache in Django?

Asked 2019-01-17 07:27

I would like to implement a decorator that provides per-request caching to any method, not just views. Here is an example use case.

I have a custom tag that determines if a record in a long list of records is a "favorite". To check whether an item is a favorite, you have to query the database. Ideally, you would perform one query to get all the favorites, and then just check each record against that cached list.

One solution is to get all the favorites in the view, and then pass that set into the template, and then into each tag call.

Alternatively, the tag itself could perform the query, but only the first time it's called, and cache the results for subsequent calls. The upside is that you can use this tag from any template, on any view, without altering the view.

With the existing caching mechanism, you could just cache the result for 50 ms and assume that it would stay correlated with the current request. I want to make that correlation reliable.
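For illustration, the fragile version of that idea looks roughly like this with Django's low-level cache API (the key name is made up, and most cache backends only honour whole-second timeouts, which is exactly why the correlation is unreliable):

from django.core.cache import cache

def get_favorites_short_ttl(user):
    # Cache the favorites for a very short time and hope that every
    # lookup for the same request happens inside that window.
    key = "favorites:%d" % user.pk
    favorites = cache.get(key)
    if favorites is None:
        favorites = get_favorites(user)
        cache.set(key, favorites, 1)  # timeout in seconds
    return favorites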

Here is an example of the tag I currently have.

@register.filter()
def is_favorite(record, request):
    # Abuse request.POST as a per-request cache: run the favorites query
    # the first time the tag is called, stash the result on the request,
    # and reuse it on every subsequent call during the same request.
    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        favorites = get_favorites(request.user)
        post = request.POST.copy()
        post["get_favorites"] = favorites
        request.POST = post

    return record in favorites

Is there a way to get the current request object in Django, without passing it around? From a tag, I could just pass in the request, which will always exist. But I would like to use this decorator from other functions as well.
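The kind of thing I imagine is a small piece of middleware that stashes the current request in thread-local storage, roughly like this (just a sketch, not something Django ships with; the names are made up):

import threading

_locals = threading.local()

class CurrentRequestMiddleware:
    # Stores the request in a thread-local so code further down the stack
    # can look it up without it being passed around explicitly.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        _locals.request = request
        try:
            return self.get_response(request)
        finally:
            # Don't leak this request into the next one served by the thread.
            del _locals.request

def get_current_request():
    return getattr(_locals, "request", None)

But that still feels like a hack, which is why I am asking whether something like this already exists.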

Is there an existing implementation of a per-request cache?

7 Answers
Lonely孤独者°
#2 · answered 2019-01-17 08:30

This one uses a plain Python dict as the cache (not Django's cache framework), and is dead simple and lightweight.

  • Whenever the thread is destroyed, its cache is destroyed automatically along with it.
  • It does not require any middleware, and the content is not pickled and unpickled on every access, which makes it faster.
  • It is tested and works with gevent's monkey patching.

The same could probably be implemented with thread-local storage. I am not aware of any downsides to this approach; feel free to add them in the comments.

from threading import currentThread
import weakref

# One cache dict per thread; an entry disappears automatically once the
# corresponding thread object is garbage collected.
_request_cache = weakref.WeakKeyDictionary()

def get_request_cache():
    return _request_cache.setdefault(currentThread(), {})
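
As a usage example (not part of the original answer; the decorator name is my own), a memoizing decorator built on top of get_request_cache could look like this:

from functools import wraps

def cached_per_request(func):
    # Store func's result in the current thread's cache dict so the
    # underlying work runs at most once per request-handling thread.
    @wraps(func)
    def wrapper(*args, **kwargs):
        cache = get_request_cache()
        key = (func.__module__, func.__qualname__, args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

A decorated get_favorites(request.user) would then hit the database only once per request, and the is_favorite tag no longer needs to stash anything on request.POST. Note that all arguments have to be hashable for this key scheme to work.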