From here: https://stackoverflow.com/a/5524120/462608
If you want to lock several mutex-protected objects from a set of such
objects, where the sets could have been built by merging, you can
choose to use per object exactly one mutex, allowing more threads to
work in parallel,
or to use per object one reference to any possibly shared recursive
mutex, to lower the probability of failing to lock all mutexes
together,
or to use per object one comparable reference to any possibly shared
non-recursive mutex, circumventing the intent to lock multiple times.
I just don't understand the whole quote above. What is he referring to? Please explain it in layman's terms.
Here is my interpretation of the referenced quote. I hope that it's both understandable and that it actually matches the intent of the person who wrote the original answer.
Let's say you have a data structure that needs to be protected by a mutex. You have some options for how 'granular' to make the critical sections dealing with those objects. These options also influence how a thread must behave when it needs to acquire the locks for multiple objects at the same time:
use one mutex per object:
struct foo {
    mutex mux;
    // important data fields...
};
This has the benefit that threads dealing with different objects will have no contention. If a single thread needs to deal with multiple objects while holding the locks for them (I think this is what's meant by 'set merging'), there's no need for recursive mutexes. However, care does need to be taken to avoid deadlock.
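For example, the deadlock caveat with per-object mutexes is commonly addressed by always acquiring locks in a fixed global order. Here's a minimal sketch of that idea using POSIX threads (the helper names lock_pair and unlock_pair are just illustrative, not part of any API):

#include <pthread.h>
#include <stdint.h>

struct foo {
    pthread_mutex_t mux;
    // important data fields...
};

// Lock two objects in a consistent global order (here, by address)
// so that two threads locking the same pair can never deadlock.
void lock_pair(struct foo* a, struct foo* b)
{
    if ((uintptr_t)a < (uintptr_t)b) {
        pthread_mutex_lock(&a->mux);
        pthread_mutex_lock(&b->mux);
    } else {
        pthread_mutex_lock(&b->mux);
        pthread_mutex_lock(&a->mux);
    }
}

void unlock_pair(struct foo* a, struct foo* b)
{
    pthread_mutex_unlock(&a->mux);
    pthread_mutex_unlock(&b->mux);
}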
have each object refer to a recursive mutex which may be shared with other objects:
struct foo {
    recursive_mutex* pmux;
    // important data fields...
};
Since two objects may actually be associated with a single mutex, if thread 1 tries to lock object A while thread 2 concurrently tries to lock object B, and A and B happen to share the same mutex, one of the threads will block until the other releases the mutex. Since the mutex is recursive, a single thread may lock multiple objects even when they share the same mutex. Note that the same caveat about deadlock still applies.
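To make that concrete, here's a sketch using a POSIX recursive mutex shared by two objects. A single thread can lock both objects without checking whether they share a mutex, because the second lock call on an already-held recursive mutex simply succeeds (init_shared and work_on_both are illustrative names):

#include <pthread.h>

struct foo {
    pthread_mutex_t* pmux; // pointer to (possibly shared) recursive mutex
    // important data fields...
};

pthread_mutex_t shared_mux;

// Create one recursive mutex and have both objects refer to it.
void init_shared(struct foo* a, struct foo* b)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&shared_mux, &attr);
    pthread_mutexattr_destroy(&attr);
    a->pmux = &shared_mux;
    b->pmux = &shared_mux;
}

// A single thread can lock both objects blindly: if they share a
// mutex, the second call just increments the recursion count.
void work_on_both(struct foo* a, struct foo* b)
{
    pthread_mutex_lock(a->pmux);
    pthread_mutex_lock(b->pmux);
    // ... critical section over both objects ...
    pthread_mutex_unlock(b->pmux);
    pthread_mutex_unlock(a->pmux);
}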
The (possible) advantage of this scheme over the first is that if a thread has to lock several objects at the same time, there's some probability that objects in that set share a mutex. Once the thread locks one of the objects, in theory the likelihood of blocking when trying to lock the next object is reduced. In practice, however, it may be rather difficult to prove that you'll get this benefit unless you can really characterize the locking behavior of your threads and the sets of objects they will be locking (and set up the mutex sharing to mirror that model).
The last item in the quote essentially refers to using non-recursive locks in the above scenario. In that case you need to prevent a thread from trying to 'relock' a mutex (which, of course, can't be done with a non-recursive mutex), so the thread has to somehow compare each lock it's about to acquire with the locks it has already acquired to determine whether it should acquire the lock on that object. If more than a few objects are involved, it can get complicated to ensure that a thread has acquired exactly the right set of locks. For example:
struct foo {
    mutex* pmux; // pointer to (possibly shared) non-recursive mutex
    // important data fields...
};

// a set of objects a thread needs to work on in a critical section;
// the objects possibly share non-recursive mutexes
struct foo* pA;
struct foo* pB;
struct foo* pC;

// acquire the necessary locks on all three objects, skipping any
// mutex that has already been locked:
mutex_lock( pA->pmux);
if (pB->pmux != pA->pmux) mutex_lock( pB->pmux);
if ((pC->pmux != pA->pmux) && (pC->pmux != pB->pmux)) mutex_lock( pC->pmux);

// releasing the set of mutexes is similar
Instead of manually acquiring the mutexes inline, it would probably be better to pass them to a function that manages the complexity of making sure any duplicates are ignored. And as with the previous schemes, avoiding deadlock still needs to be addressed.
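Such a helper might, for instance, sort the mutex pointers by address (which yields a consistent global lock order and so also takes care of deadlock avoidance) and skip duplicates while locking. A sketch using POSIX threads (lock_all and unlock_all are illustrative names, not a standard API):

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

// Order mutex pointers by address so every thread locks in the same
// global order.
static int cmp_mutex_ptr(const void* x, const void* y)
{
    uintptr_t a = (uintptr_t)*(pthread_mutex_t* const*)x;
    uintptr_t b = (uintptr_t)*(pthread_mutex_t* const*)y;
    return (a > b) - (a < b);
}

// Sort, then lock each distinct mutex exactly once.
void lock_all(pthread_mutex_t** muxes, size_t n)
{
    qsort(muxes, n, sizeof *muxes, cmp_mutex_ptr);
    for (size_t i = 0; i < n; i++)
        if (i == 0 || muxes[i] != muxes[i - 1])
            pthread_mutex_lock(muxes[i]);
}

// Unlock in reverse order; 'muxes' must still be sorted from lock_all.
void unlock_all(pthread_mutex_t** muxes, size_t n)
{
    for (size_t i = n; i-- > 0; )
        if (i == 0 || muxes[i] != muxes[i - 1])
            pthread_mutex_unlock(muxes[i]);
}

A thread would then collect pA->pmux, pB->pmux, and pC->pmux into an array and make a single call to lock_all, instead of writing the pairwise comparisons by hand.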