I have a DAO object with a method of the following type. I have injected the DAO into the service layer, and I am able to get cached results from this DAO method call. But when multiple threads invoke this method (on a proxy that wraps the DAO singleton), some of those threads still go to the database, i.e., the fetchDataFromDb() method body is still executed. Is there a way to get around this? Is this a Spring caching bug?
@Override
@Cacheable(value = "CacheName")
public Map<String, DomainObject> fetchDataFromDb() {
    ....
}
The following is the XML configuration from my Spring application context file. This is a web application; I simulated the concurrent threads with JMeter.
<cache:annotation-driven />
<!-- generic cache manager -->
<bean id="cacheManager" class="org.springframework.cache.support.SimpleCacheManager">
<property name="caches">
<set>
<bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean" p:name="CacheName" />
</set>
</property>
</bean>
I guess that if multiple threads call this method while the first invocation is still executing, all of them will hit the database: the result is only put into the cache once that first call returns.
The simplest solution I can see is to have your service call the DAO inside a synchronized block, as sketched below.
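A rough sketch of what that could look like; the service and DAO names here are just placeholders, not taken from the question:

import java.util.Map;

import org.springframework.stereotype.Service;

@Service
public class DomainService {

    private final DomainDao domainDao; // the Spring proxy around the @Cacheable DAO
    private final Object lock = new Object();

    public DomainService(DomainDao domainDao) {
        this.domainDao = domainDao;
    }

    public Map<String, DomainObject> getData() {
        // Only one thread at a time goes through the cacheable call; once the cache
        // is populated, the proxy returns the cached map and the lock is held only briefly.
        synchronized (lock) {
            return domainDao.fetchDataFromDb();
        }
    }
}

This serializes the first (expensive) call at the price of a small synchronization cost on every subsequent call.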
What you want is:
@Cacheable(sync = true)
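Note that the sync attribute was only added in Spring Framework 4.3. It asks the cache provider to lock the cache entry while the value is being computed, so concurrent callers for the same key block until the first computation finishes instead of each hitting the database (support is provider-dependent; the ConcurrentMapCache used in the question supports it in recent Spring versions). Applied to the method from the question, it would look roughly like this (DomainObject is the question's type; the surrounding class is just a placeholder):

import java.util.HashMap;
import java.util.Map;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Repository;

@Repository
public class DomainDao {

    @Cacheable(value = "CacheName", sync = true)
    public Map<String, DomainObject> fetchDataFromDb() {
        // With sync = true, only one thread computes the value for this cache key;
        // the other threads block until the entry has been stored in the cache.
        Map<String, DomainObject> result = new HashMap<>();
        // ... load the rows from the database into the map (omitted, as in the question)
        return result;
    }
}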
I could find only a little documentation saying whether the behavior you are describing is a bug or not. There is, however, a vague hint in the reference documentation at http://docs.spring.io/spring/docs/4.0.0.RELEASE/spring-framework-reference/html/cache.html.
The word "can" is the problematic one here, because it could be read as a guarantee that the method will never be executed more than once for a given key, or merely as a statement that repeated executions can be avoided once the result is cached.
I would suggest that the behavior you are describing is not a bug but a functional shortcoming. Guaranteeing that the method is never executed more than once for the same set of parameters looks to me like an easy way to introduce deadlocks.
I do not have a definitive answer to this, and I hope somebody will correct my assumption, because the described behavior is quite problematic.