I am trying to use the Django cache to implement a lock mechanism. The Celery official site claims that the Django cache works fine for this. However, in my experience it does not: if multiple threads/processes try to acquire the lock at almost the same time (within roughly 0.003 seconds of each other), they all acquire it successfully. Only threads that try to acquire the lock more than ~0.003 seconds later fail.
Am I the only person who has experienced this? Please correct me if I am wrong.
def acquire(self, block=False, slp_int=0.001):
    while True:
        added = cache.add(self.ln, 'true', self.timeout)
        if added:
            cache.add(self.ln + '_pid', self.pid, self.timeout)
            return True
        if block:
            sleep(slp_int)
            continue
        else:
            return False
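A matching release() for such a lock might look like this (a sketch only: DictCache below is a stand-in for django.core.cache.cache so the example is self-contained, and the pid ownership check is one possible convention built on the _pid key set in acquire()):

```python
class DictCache:
    """Self-contained stand-in for django.core.cache.cache (assumption)."""
    def __init__(self):
        self._data = {}

    def add(self, key, value, timeout=None):
        # Non-atomic check-then-set, like FileBasedCache
        if key in self._data:
            return False
        self._data[key] = value
        return True

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

cache = DictCache()

class CacheLock:
    def __init__(self, name, pid, timeout=60):
        self.ln = name        # lock key, as used in acquire()
        self.pid = pid
        self.timeout = timeout

    def release(self):
        # Only the owner (matching pid) clears the lock; note that this
        # get/compare/delete sequence is itself not atomic either.
        if cache.get(self.ln + '_pid') == self.pid:
            cache.delete(self.ln + '_pid')
            cache.delete(self.ln)
            return True
        return False
```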
# Django cache backend is set to the file-based cache
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/dev/shm/django_cache',
    }
}
The problem is that Django makes no guarantees about the atomicity of .add(). Whether .add() is in fact atomic depends on the backend you are using. With a FileBasedCache, .add() is not atomic: worker A executing .add() could be preempted after self.has_key(...) but before self.set(...). Worker B, executing .add() in one shot, would successfully set the key and return True. When worker A resumes, it would also set the key and return True.
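This preemption window can be demonstrated without Django at all. In the sketch below, non_atomic_add() imitates the check-then-set sequence, with a sleep standing in for the preemption; both threads end up believing they acquired the lock (the helper names and delays are illustrative, not Django APIs):

```python
import threading
import time

store = {}
winners = []

def non_atomic_add(key, value, delay):
    # 1. check -- analogous to FileBasedCache's self.has_key(...)
    if key not in store:
        time.sleep(delay)    # preemption window between check and set
        # 2. set -- analogous to self.set(...)
        store[key] = value
        return True
    return False

def worker(name, delay):
    if non_atomic_add('lock', name, delay):
        winners.append(name)

a = threading.Thread(target=worker, args=('A', 0.05))
b = threading.Thread(target=worker, args=('B', 0.0))
a.start()
time.sleep(0.01)             # let A pass the check first, then stall
b.start()
a.join()
b.join()
print(winners)               # both threads report success
```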
This issue report indicates that the example code you looked at assumes that the backend is Memcached. If you use Memcached, or another backend that supports an atomic .add(), then it should work.
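For example, a settings fragment along these lines would switch to a Memcached backend, where .add() maps to Memcached's atomic server-side add command (this assumes Django 3.2+ with the pymemcache package installed; the server address is a placeholder):

```python
# settings.py -- Memcached backend, whose add() is atomic on the server
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```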