Avoiding multiple repopulations of the same cache

Posted 2019-03-24 21:49

I have a high traffic website and I use hibernate. I also use ehcache to cache some entities and queries which are required to generate the pages.

The problem is "parallel cache misses": when the application boots and the cache regions are cold, each cache region is populated many times (instead of only once) by different threads, because the site is being hit by many users at the same time. Likewise, when a cache region is invalidated it is repopulated many times for the same reason. How can I avoid this?

I managed to convert one entity cache and one query cache to a BlockingCache by providing my own implementation via hibernate.cache.provider_class, but the semantics of BlockingCache do not seem to work. Even worse, sometimes the BlockingCache deadlocks (blocks) and the application hangs completely. A thread dump shows that processing is blocked on the mutex of BlockingCache during a get operation.

So, the question is, does Hibernate support this kind of use?

And if not, how do you solve this problem on production?

Edit: hibernate.cache.provider_class points to my custom cache provider, which is a copy-paste of SingletonEhCacheProvider; at the end of its start() method (after line 136) I do:

Ehcache cache = manager.getEhcache("foo");
if (!(cache instanceof BlockingCache)) {
    // decorate so that concurrent misses on "foo" serialize on a per-key lock
    manager.replaceCacheWithDecoratedCache(cache, new BlockingCache(cache));
}

That way upon initialization, and before anyone else touches the cache named "foo", I decorate it with BlockingCache. "foo" is a query cache and "bar" (same code but omitted) is an entity cache for a pojo.

Edit 2: "Doesn't seem to work" means that the initial problem still exists: cache "foo" is still being repopulated many times with the same data, because of the concurrency. I validated this by stressing the site with JMeter with 10 threads. I'd expect the other 9 threads to block until the first one that requested data from "foo" finishes its job (executes the queries, stores the data in the cache), and then get the data directly from the cache.

Edit 3: Another explanation for this problem can be seen at https://forum.hibernate.org/viewtopic.php?f=1&t=964391&start=0 but with no definite answer.
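For reference, the effect is easy to reproduce without Hibernate at all. This is a plain-Java sketch of the check-then-load pattern that causes the multiple repopulations (the class name and the 50 ms fake query are made up for illustration, not from any library):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a "parallel cache miss": every thread that sees a miss loads from
// the database, so a cold cache is populated once per concurrent reader
// instead of once overall.
public class StampedeDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbLoads = new AtomicInteger();

    // Stand-in for the real database query.
    static String loadFromDb(String key) {
        dbLoads.incrementAndGet();
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        return "value-of-" + key;
    }

    static String naiveGet(String key) {
        String v = cache.get(key);      // check
        if (v == null) {
            v = loadFromDb(key);        // every concurrent miss loads...
            cache.put(key, v);          // ...and repopulates the same entry
        }
        return v;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> naiveGet("foo"));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Typically close to 10 loads for one key, not 1.
        System.out.println("db loads: " + dbLoads.get());
    }
}
```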

2 Answers
劳资没心,怎么记你
Answer 2 · 2019-03-24 22:30

I'm not quite sure, but:

It allows concurrent read access to elements already in the cache. If the element is null, other reads will block until an element with the same key is put into the cache.

Doesn't that mean Hibernate would wait until some other thread places the object into the cache? That's what you observe, right?

Hibernate and the cache work like this:

  1. Hibernate gets a request for an object
  2. Hibernate checks whether the object is in the cache -- cache.get()
  3. No? Hibernate loads the object from the DB and puts it into the cache -- cache.put()

So if the object is not in the cache (not placed there by some previous operation), Hibernate would wait forever at step 2.

I think you need a cache variant where the thread waits for an object only for a short time, e.g. 100 ms. If the object has not arrived, the thread should get null (and thus Hibernate will load the object from the DB and place it into the cache).

Actually, a better logic would be:

  1. Check whether another thread is requesting the same object
  2. If so, wait a while (say, 500 ms) for the object to arrive
  3. If not, return null immediately

(We cannot wait forever at step 2, as the other thread may fail to put the object into the cache -- due to an exception.)
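The three steps above can be sketched in plain java.util.concurrent, independent of Ehcache (the class name BoundedWaitCache and the 500 ms timeout are illustrative, not from any library):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of the bounded-wait logic: the first thread to miss a key is told to
// load it (get returns null); other threads wait a bounded time for the value.
public class BoundedWaitCache {
    private static final long WAIT_MS = 500;
    private final Map<String, String> store = new ConcurrentHashMap<>();
    private final Map<String, CountDownLatch> inFlight = new ConcurrentHashMap<>();

    // Returns the cached value, or null if THIS thread should load and put it.
    public String get(String key) {
        String v = store.get(key);
        if (v != null) {
            return v;
        }
        CountDownLatch mine = new CountDownLatch(1);
        CountDownLatch theirs = inFlight.putIfAbsent(key, mine);
        if (theirs == null) {
            return null;   // no one else is loading: caller must load + put
        }
        // Another thread is loading: wait a bounded time, then re-check.
        try {
            theirs.await(WAIT_MS, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return store.get(key);   // may still be null if the loader failed
    }

    public void put(String key, String value) {
        store.put(key, value);
        CountDownLatch latch = inFlight.remove(key);
        if (latch != null) {
            latch.countDown();   // release any waiters
        }
    }
}
```

A null from get() is the caller's signal to run the database query and call put(); a waiter that times out (the exception case in step 3) also gets null and falls back to loading itself.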

If BlockingCache doesn't support this behaviour, you need to implement a cache yourself. I did it in the past; it's not hard -- the main methods are get() and put() (though the API has apparently grown since then).

UPDATE

Actually, I just read the source of BlockingCache. It does exactly what I said -- lock and wait for a timeout. Thus you don't need to do anything special; just use it:

public Element get(final Object key) throws RuntimeException, LockTimeoutException {
    Sync lock = getLockForKey(key);
    Element element;
    acquiredLockForKey(key, lock, LockType.WRITE);
    element = cache.get(key);
    if (element != null) {
        lock.unlock(LockType.WRITE);
    }
    return element;
}

public void put(Element element) {
    if (element == null) {
        return;
    }
    Object key = element.getObjectKey();
    Object value = element.getObjectValue();

    getLockForKey(key).lock(LockType.WRITE);
    try {
        if (value != null) {
            cache.put(element);
        } else {
            cache.remove(key);
        }
    } finally {
        getLockForKey(key).unlock(LockType.WRITE);
    }
}

So it's kind of strange that it doesn't work for you. Tell me something: this spot in your code:

Ehcache cache = manager.getEhcache("foo");

Is it synchronized? If multiple requests come in at the same time, will there be only one instance of the cache?
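For what it's worth, the decoration itself can be made race-free. This plain-Java sketch (Cache, PlainCache, and BlockingDecorator are stand-in types, not Ehcache classes) keeps the decorated caches in a ConcurrentHashMap and decorates inside computeIfAbsent, which guarantees exactly one decorated instance per name even under concurrent calls:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: race-free, decorate-exactly-once cache registry.
public class DecorationRegistry {
    interface Cache { String name(); }

    static final class PlainCache implements Cache {
        private final String name;
        PlainCache(String name) { this.name = name; }
        public String name() { return name; }
    }

    // Stand-in for a blocking decorator such as Ehcache's BlockingCache.
    static final class BlockingDecorator implements Cache {
        final Cache delegate;
        BlockingDecorator(Cache delegate) { this.delegate = delegate; }
        public String name() { return delegate.name(); }
    }

    private final Map<String, Cache> decorated = new ConcurrentHashMap<>();

    public Cache getBlocking(String name) {
        // computeIfAbsent runs the decoration at most once per key,
        // so every caller sees the same decorated instance.
        return decorated.computeIfAbsent(name,
                n -> new BlockingDecorator(new PlainCache(n)));
    }
}
```

If two threads instead each ran the getEhcache-then-replace snippet unguarded, both could observe the undecorated cache and wrap it twice, which is exactly the kind of race the question above is probing for.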

Luminary・发光体
Answer 3 · 2019-03-24 22:34

The biggest improvement on this issue is that Ehcache now (since 2.1) supports the transactional Hibernate cache policy, which vastly mitigates the problems described here.
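For reference, wiring Ehcache in as the second-level cache region factory looks roughly like this (property and class names as in the Ehcache 2.x integration with Hibernate 3.3+; adjust for your versions):

```properties
# Sketch: enable the second-level and query caches backed by Ehcache 2.x
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.EhCacheRegionFactory
```

Entities and collections then declare their concurrency strategy in the mapping; note that the transactional strategy mentioned above requires a JTA transaction manager to be configured.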

To go a step further (blocking threads while they access the same query cache region), one would need to implement a QueryTranslatorFactory that returns custom (extended) QueryTranslatorImpl instances, which would inspect the query and its parameters and block as necessary in the list method. This of course concerns the specific use case of a query cache over HQL queries that fetch many entities.
