I'm trying to implement what I would think is a very common caching scenario using the ServiceStack Redis client, yet I'm having difficulty finding a good example of this.
In an ASP.NET MVC app, we make a relatively long-running (and metered) call to an external web service and cache the results for a certain period of time. The cache implementation should block additional requests for that key until the web service call has completed, so we don't pay for duplicate (expensive) calls.
So, what is the best way to implement a key-level lock? Does Redis support this out of the box? Would ServiceStack's IRedisClient.AcquireLock be a good fit for this, or is it overkill if we're not dealing with distributed locks? Or would I be best off just implementing the lock myself, something like what's described here?
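To make the scenario concrete, here's roughly the shape of the code today (a simplified sketch: the QuoteService/GetQuote names, the key format, the TTL and CallMeteredWebService are all just illustrative stand-ins):

```csharp
using System;
using ServiceStack.Redis;

public class QuoteService
{
    // Plain cache-aside read: check the cache, and on a miss call the
    // external service and store the result.
    public string GetQuote(IRedisClient redis, string symbol)
    {
        var cacheKey = "quote:" + symbol;

        var cached = redis.Get<string>(cacheKey);
        if (cached != null)
            return cached;

        // The gap: if several requests miss the cache at the same time, they
        // all pay for the metered call. We want the first request to take a
        // per-key lock and fetch, and the rest to wait here until the cache
        // has been populated.
        var result = CallMeteredWebService(symbol);
        redis.Set(cacheKey, result, TimeSpan.FromMinutes(10));
        return result;
    }

    private string CallMeteredWebService(string symbol)
    {
        // stand-in for the real (slow, metered) external call
        return "quote for " + symbol;
    }
}
```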
Thanks in advance!
Redis is a non-blocking async server; there are no semantics built into Redis to block a client connection until a key is free.
Note: Redis is a remote NoSQL data store, so any lock you implement involving Redis is 'distributed' by design. ServiceStack's AcquireLock uses Redis's primitive SETNX locking semantics to ensure only one client connection holds the lock; all other clients/connections remain blocked, polling with an exponential retry back-off, until the lock has been freed.
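For the caching scenario in the question, using AcquireLock looks something like the sketch below: a double-checked cache read with the expensive call wrapped in a per-key lock. The lock-key prefix, the 30-second lock timeout, the 10-minute TTL and CallMeteredWebService are placeholders to adapt to your own code:

```csharp
using System;
using ServiceStack.Redis;

public class LockedQuoteService
{
    public string GetQuote(IRedisClient redis, string symbol)
    {
        var cacheKey = "quote:" + symbol;

        // Fast path: another request already populated the cache.
        var cached = redis.Get<string>(cacheKey);
        if (cached != null)
            return cached;

        // Only one caller per key gets past AcquireLock; the others keep
        // retrying SETNX with an exponential back-off until the lock is
        // released or the 30-second lock timeout elapses.
        using (redis.AcquireLock("lock:" + cacheKey, TimeSpan.FromSeconds(30)))
        {
            // Double-check inside the lock: a caller that was blocked will
            // usually find the value already cached and can skip the call.
            cached = redis.Get<string>(cacheKey);
            if (cached != null)
                return cached;

            var result = CallMeteredWebService(symbol);
            redis.Set(cacheKey, result, TimeSpan.FromMinutes(10));
            return result;
        }
    }

    private string CallMeteredWebService(string symbol)
    {
        // stand-in for the real (slow, metered) external call
        return "quote for " + symbol;
    }
}
```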
To implement a distributed lock without polling, you'd need a solution that combines SETNX with Redis's Pub/Sub support to notify waiting clients that the lock has been freed.
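Here's a rough idea of how that combination could be wired up with ServiceStack's Pub/Sub API (CreateSubscription / PublishMessage). Treat it as a sketch, not a hardened lock: the key and channel names are arbitrary, SetValueIfNotExists is the SETNX wrapper I'm assuming (older clients call it SetEntryIfNotExists), and a production version would also need lock expiry so a crashed holder can't wedge the key:

```csharp
using System;
using ServiceStack.Redis;

public static class NotifyingLock
{
    // Take the lock with SETNX; if it's already held, wait for a 'released'
    // Pub/Sub message instead of polling, then retry.
    public static void Acquire(IRedisClientsManager manager, string lockKey)
    {
        var channel = "unlocked:" + lockKey;

        while (true)
        {
            using (var redis = manager.GetClient())
            {
                // SETNX: only the first setter gets true.
                if (redis.SetValueIfNotExists(lockKey, "1"))
                    return; // lock acquired
            }

            // Lock is held: block on a dedicated connection until the holder
            // publishes to the release channel.
            using (var subClient = manager.GetClient())
            using (var subscription = subClient.CreateSubscription())
            {
                subscription.OnSubscribe = _ =>
                {
                    // Close the race between the failed SETNX and the subscribe:
                    // if the lock was released in the meantime, stop waiting now.
                    using (var check = manager.GetClient())
                    {
                        if (!check.ContainsKey(lockKey))
                            subscription.UnSubscribeFromAllChannels();
                    }
                };
                subscription.OnMessage = (ch, msg) =>
                    subscription.UnSubscribeFromAllChannels();
                subscription.SubscribeToChannels(channel); // blocks until unsubscribed
            }
            // Loop and retry SETNX: another waiter may have won the race.
        }
    }

    public static void Release(IRedisClientsManager manager, string lockKey)
    {
        using (var redis = manager.GetClient())
        {
            redis.Remove(lockKey);
            redis.PublishMessage("unlocked:" + lockKey, "released");
        }
    }
}
```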
I use the following pattern for using Redis as a global network mutex (a code sketch follows the list):
- LPUSH to create a mutex
- BRPOP to lock (this is blocking)
- LPUSH to unlock
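Assuming ServiceStack's list wrappers are named PrependItemToList (LPUSH) and BlockingPopItemFromList (BRPOP) in your client version, a minimal sketch of the pattern looks like this; adjust the names if your version exposes the raw commands differently:

```csharp
using System;
using ServiceStack.Redis;

public static class RedisListMutex
{
    // One-time setup (run exactly once per mutex): seed the list with a
    // single token via LPUSH. Whichever client pops the token holds the mutex.
    public static void Create(IRedisClient redis, string mutexKey)
    {
        redis.PrependItemToList(mutexKey, "token"); // LPUSH
    }

    // Lock: BRPOP blocks server-side until a token is available, so waiters
    // don't poll. Pass a TimeSpan instead of null for a bounded wait.
    public static void Lock(IRedisClient redis, string mutexKey)
    {
        redis.BlockingPopItemFromList(mutexKey, (TimeSpan?)null); // BRPOP
    }

    // Unlock: push the token back so exactly one waiting BRPOP wakes up.
    public static void Unlock(IRedisClient redis, string mutexKey)
    {
        redis.PrependItemToList(mutexKey, "token"); // LPUSH
    }
}
```

With a single token in the list only one holder exists at a time, and the waiting happens inside Redis rather than by polling. The trade-off: if a holder crashes before unlocking, the token is lost, so you need an expiry or watchdog on top.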
Why not issue a SETNX on the key that you want to lock? It returns 1 if the key could be set (lock acquired) and 0 if the key already exists, meaning the lock can't be taken. See http://www.redis.io/commands/setnx for specific info.
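With ServiceStack that might look like the sketch below. SetValueIfNotExists is the SETNX wrapper I'm assuming here (older releases call it SetEntryIfNotExists), and the expiry is my own addition so a crashed holder can't leave the key locked forever:

```csharp
using System;
using ServiceStack.Redis;

public static class SetNxLock
{
    // Try to take the lock: SETNX succeeds only for the first setter.
    public static bool TryLock(IRedisClient redis, string lockKey, TimeSpan holdTimeout)
    {
        if (!redis.SetValueIfNotExists(lockKey, DateTime.UtcNow.ToString("o")))
            return false; // someone else holds the lock

        // Guard against a crashed holder leaving the key behind forever.
        // (SETNX + EXPIRE isn't atomic; newer Redis versions can do both in
        // one step with SET key value NX EX <seconds>.)
        redis.ExpireEntryIn(lockKey, holdTimeout);
        return true;
    }

    public static void Unlock(IRedisClient redis, string lockKey)
    {
        redis.Remove(lockKey);
    }
}
```

Callers that get false back can retry with a back-off, which is essentially what ServiceStack's AcquireLock does for you.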