Question:
I'm sure a mutex isn't enough; that's the reason the concept of condition variables exists. But it beats me, and I'm not able to convince myself with a concrete scenario when a condition variable is essential.
The accepted answer to the question Differences between Conditional variables, Mutexes and Locks says that a condition variable is a
lock with a "signaling" mechanism. It is used when threads need to
wait for a resource to become available. A thread can "wait" on a CV
and then the resource producer can "signal" the variable, in which
case the threads who wait for the CV get notified and can continue
execution
Where I get confused is that a thread can wait on a mutex too, and when it gets signalled, it simply means the variable is now available. Why would I need a condition variable?
P.S.: Also, a mutex is required to guard the condition variable anyway, which skews my view even further away from seeing a condition variable's purpose.
Answer 1:
Even though you can use them in the way you describe, mutexes weren't designed for use as a notification/synchronization mechanism. They are meant to provide mutually exclusive access to a shared resource. Using mutexes to signal a condition is awkward and I suppose would look something like this (where Thread1 is signaled by Thread2):
Thread1:
while(1) {
    lock(mutex);   // Blocks waiting for notification from Thread2
    ...            // do work after notification is received
    unlock(mutex); // Tells Thread2 we are done
}
Thread2:
while(1) {
    ...            // do the work that precedes notification
    unlock(mutex); // unblocks Thread1
    lock(mutex);   // lock the mutex so Thread1 will block again
}
There are several problems with this:
- Thread2 cannot continue to "do the work that precedes notification" until Thread1 has finished with "work after notification". With this design, Thread2 is not even necessary, that is, why not move "work that precedes" and "work after notification" into the same thread since only one can run at a given time!
- If Thread2 is not able to preempt Thread1, Thread1 will immediately re-lock the mutex when it repeats the while(1) loop and Thread1 will go about doing the "work after notification" even though there was no notification. This means you must somehow guarantee that Thread2 will lock the mutex before Thread1 does. How do you do that? Maybe force a schedule event by sleeping or by some other OS-specific means but even this is not guaranteed to work depending on timing, your OS, and the scheduling algorithm.
These two problems aren't minor, in fact, they are both major design flaws and latent bugs. The origin of both of these problems is the requirement that a mutex is locked and unlocked within the same thread. So how do you avoid the above problems? Use condition variables!
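For contrast, here is a minimal sketch (not part of the original answer) of the same Thread1/Thread2 notification written with a POSIX condition variable; the flag name notified and the thread function names are illustrative, and it compiles with -pthread:

#include <pthread.h>
#include <stddef.h>

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
int notified = 0;                              /* the condition ("predicate") Thread1 waits for */

void *thread1(void *arg)                       /* waits for the notification */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&mtx);
        while (!notified)                      /* loop guards against spurious wakeups */
            pthread_cond_wait(&cv, &mtx);      /* atomically releases mtx while blocked, re-locks on wakeup */
        notified = 0;
        /* ... do work after notification is received ... */
        pthread_mutex_unlock(&mtx);
    }
}

void *thread2(void *arg)                       /* produces the notification */
{
    (void)arg;
    for (;;) {
        /* ... do the work that precedes notification ... */
        pthread_mutex_lock(&mtx);
        notified = 1;
        pthread_cond_signal(&cv);              /* wake Thread1; it re-acquires mtx after we unlock */
        pthread_mutex_unlock(&mtx);
    }
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);                    /* both loops run forever in this sketch */
    pthread_join(t2, NULL);
    return 0;
}

Note that neither thread depends on being scheduled before the other, and Thread2 can keep working while Thread1 processes the previous notification, which addresses both problems above.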
BTW, if your synchronization needs are really simple, you could use a plain old semaphore which avoids the additional complexity of condition variables.
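As a rough sketch of that simpler route, assuming a POSIX counting semaphore and illustrative names (work_ready, waiter, notifier):

#include <semaphore.h>

sem_t work_ready;                     /* call sem_init(&work_ready, 0, 0) before starting the threads */

void *waiter(void *arg)               /* plays the role of Thread1 */
{
    for (;;) {
        sem_wait(&work_ready);        /* blocks until the notifier posts; posts are never lost, the count remembers them */
        /* ... do work after notification is received ... */
    }
}

void *notifier(void *arg)             /* plays the role of Thread2 */
{
    for (;;) {
        /* ... do the work that precedes notification ... */
        sem_post(&work_ready);
    }
}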
Answer 2:
A mutex is for exclusive access to a shared resource, while a condition variable is for waiting for a condition to become true. People may think they can implement a condition variable without kernel support. A common solution one might come up with is "flag + mutex", like:
lock(mutex)
while (!flag) {
    sleep(100);
}
unlock(mutex)
do_something_on_flag_set();
but it will never work: because you never release the mutex while waiting, no one else can set the flag in a thread-safe way. This is why we need condition variables; while you're waiting on a condition variable, the associated mutex is not held by your thread until it is signalled.
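A possible corrected version of that snippet, written in the same pseudo-code style and assuming a pthread_cond_wait-like cond_wait(cond, mutex), plus a setter that locks the mutex, sets flag, signals cond and unlocks:

lock(mutex);
while (!flag) {
    cond_wait(cond, mutex);       // atomically releases the mutex while waiting, re-locks it on wakeup
}
do_something_on_flag_set();       // flag is true and the mutex is still held here
unlock(mutex);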
Answer 3:
I was thinking about this too, and the most important piece of information, which I was missing everywhere, is that a mutex can be owned (and changed) by only one thread at a time. So if you have one producer and several consumers, the producer would have to wait on the mutex to produce. With a condition variable it can produce at any time.
Answer 4:
The condition variable and mutex pair can be replaced by a binary semaphore and mutex pair. The sequence of operations of a consumer thread when using the condition variable + mutex is:
1. Lock the mutex
2. Wait on the condition variable
3. Process
4. Unlock the mutex
The producer thread's sequence of operations is:
1. Lock the mutex
2. Signal the condition variable
3. Unlock the mutex
The corresponding consumer thread sequence when using the sema + mutex pair is:
1. Wait on the binary sema
2. Lock the mutex
3. Check for the expected condition
4. If the condition is true, process
5. Unlock the mutex
6. If the condition check in step 3 was false, go back to step 1
The sequence for the producer thread is:
1. Lock the mutex
2. Post the binary sema
3. Unlock the mutex
As you can see, the unconditional processing in step 3 when using the condition variable is replaced by the conditional processing in steps 3 and 4 when using the binary sema.
The reason is that when using sema + mutex, in a race condition, another consumer thread may sneak in between steps 1 and 2 and process/nullify the condition. This won't happen when using the condition variable: with the condition variable, the condition is guaranteed to be true after step 2.
The binary semaphore can be replaced with a regular counting semaphore. This may cause the loop from step 6 back to step 1 to run a few more times.
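A rough C sketch of the sema + mutex sequences above, using a POSIX semaphore (which is counting, as the last note allows); the names item_count, consume_one and produce_one are illustrative, not from the answer:

#include <pthread.h>
#include <semaphore.h>

pthread_mutex_t mtx;                        /* protects item_count */
sem_t           sema;                       /* the "binary" sema of the answer; POSIX sem_t is counting */
int             item_count = 0;             /* the expected condition: item_count > 0 */

void init(void)
{
    pthread_mutex_init(&mtx, NULL);
    sem_init(&sema, 0, 0);                  /* nothing available yet */
}

void consume_one(void)                      /* consumer thread sequence */
{
    for (;;) {
        sem_wait(&sema);                    /* 1. wait on the sema */
        pthread_mutex_lock(&mtx);           /* 2. lock the mutex */
        if (item_count > 0) {               /* 3. check for the expected condition */
            item_count--;                   /* 4. the condition is true, process */
            pthread_mutex_unlock(&mtx);     /* 5. unlock the mutex */
            return;
        }
        pthread_mutex_unlock(&mtx);         /* 5. unlock the mutex */
        /* 6. the condition was false (another consumer sneaked in), go back to step 1 */
    }
}

void produce_one(void)                      /* producer thread sequence */
{
    pthread_mutex_lock(&mtx);               /* 1. lock the mutex */
    item_count++;
    sem_post(&sema);                        /* 2. post the sema */
    pthread_mutex_unlock(&mtx);             /* 3. unlock the mutex */
}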
Answer 5:
You need condition variables, used together with a mutex (each condition variable belongs to a mutex), to signal changing states (conditions) from one thread to another. The idea is that a thread can wait until some condition becomes true. Such conditions are program specific (e.g. "queue is empty", "matrix is big", "some resource is almost exhausted", "some computation step has finished", etc.). A mutex might have several related condition variables. And you need condition variables because such conditions may not always be expressed as simply as "a mutex is locked" (so you need to broadcast changes in conditions to other threads).
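As an illustration of one mutex with several related condition variables (the "queue is empty" case mentioned above), here is a minimal bounded-queue sketch; the ring buffer and the names not_empty/not_full are illustrative, not from the answer:

#include <pthread.h>

#define CAP 16

/* One mutex with two related condition variables:
   not_empty -- consumers wait here while "queue is empty"
   not_full  -- producers wait here while "queue is full" */
static int             buf[CAP];
static int             head, tail, count;
static pthread_mutex_t mtx       = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

void put(int v)
{
    pthread_mutex_lock(&mtx);
    while (count == CAP)                    /* condition: "queue is full" */
        pthread_cond_wait(&not_full, &mtx);
    buf[tail] = v;
    tail = (tail + 1) % CAP;
    count++;
    pthread_cond_signal(&not_empty);        /* tell a waiting consumer the condition changed */
    pthread_mutex_unlock(&mtx);
}

int get(void)
{
    pthread_mutex_lock(&mtx);
    while (count == 0)                      /* condition: "queue is empty" */
        pthread_cond_wait(&not_empty, &mtx);
    int v = buf[head];
    head = (head + 1) % CAP;
    count--;
    pthread_cond_signal(&not_full);         /* tell a waiting producer the condition changed */
    pthread_mutex_unlock(&mtx);
    return v;
}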
Read some good posix thread tutorials, e.g. this tutorial or that or that one. Better yet, read a good pthread book. See this question.
Also read Advanced Unix Programming and Advanced Linux Programming
P.S. Parallelism and threads are difficult concepts to grasp. Take time to read and experiment and read again.
Answer 6:
I think it is implementation defined.
Whether the mutex is enough depends on whether you regard the mutex as a mechanism for critical sections only, or as something more.
As mentioned in http://en.cppreference.com/w/cpp/thread/mutex/unlock,
The mutex must be locked by the current thread of execution, otherwise, the behavior is undefined.
which means that, in C++, a thread may only unlock a mutex that it has locked (owns) itself.
But in other programming languages, you might be able to share a mutex between processes.
So the distinction between the two concepts may come down to performance considerations; complex ownership identification or inter-process sharing is not worthwhile for simple applications.
For example, you may fix @slowjelj's case with an additional mutex (it might be an incorrect fix):
Thread1:
lock(mutex0);
while(1) {
    lock(mutex0);   // Blocks waiting for notification from Thread2
    ...             // do work after notification is received
    unlock(mutex1); // Tells Thread2 we are done
}
Thread2:
while(1) {
    lock(mutex1);   // lock the mutex so Thread1 will block again
    ...             // do the work that precedes notification
    unlock(mutex0); // unblocks Thread1
}
But your program will complain that you have triggered an assertion left by the implementation (e.g. "unlock of unowned mutex" in Visual Studio 2015).