Does pthread_mutex_lock have happens-before semantics?

Published 2019-09-19 02:39

Question:

threadA goes through this snippet:

{
    global_a = 100;  // 1
    {
        pthread_mutex_lock(&b_mutex);
        ...
        pthread_mutex_unlock(&b_mutex);
    }  // 2
}

threadB goes through this snippet:

{
    {
        pthread_mutex_lock(&b_mutex);
        ...
        pthread_mutex_unlock(&b_mutex);
    }  // 3

    int tmp = global_a; // 4
}

and suppose that, from an observer's point of view, the execution sequence is indeed:

  1. threadA --- 1
  2. threadA --- 2
  3. threadB --- 3
  4. threadB --- 4

Can the code in threadB, int tmp = global_a;, see what threadA set with global_a = 100;?

Any suggestion is welcome.

Answer 1:

pthread_mutex_lock does not prevent preceding instructions from being reordered after it.

Similarly, pthread_mutex_unlock does not prevent subsequent instructions from being reordered before it.

But:

  1. In threadA global_a = 100 happens-before pthread_mutex_unlock(&b_mutex).

  2. In threadB pthread_mutex_lock(&b_mutex) happens-before int tmp = global_a;.

And if you observe

  1. pthread_mutex_unlock(&b_mutex) in threadA happens-before pthread_mutex_lock(&b_mutex) in threadB.

(in other words, threadB acquires the lock after threadA releases it), then

global_a = 100; in threadA happens-before int tmp = global_a; in threadB. So the latter sees the effect of the former.
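
To make that concrete, here is a minimal, compilable sketch of the scenario (the done flag, the declarations and main are my additions; the flag only forces threadB to acquire the lock after threadA has released it, which the question already assumes as the observed ordering):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t b_mutex = PTHREAD_MUTEX_INITIALIZER;
static int global_a;
static int done;                         /* written and read only under b_mutex */

static void *threadA(void *arg)
{
    (void)arg;
    global_a = 100;                      /* 1 */
    pthread_mutex_lock(&b_mutex);
    done = 1;
    pthread_mutex_unlock(&b_mutex);      /* 2 */
    return NULL;
}

static void *threadB(void *arg)
{
    (void)arg;
    int ready = 0;
    while (!ready) {
        pthread_mutex_lock(&b_mutex);    /* 3: eventually comes after A's unlock */
        ready = done;
        pthread_mutex_unlock(&b_mutex);
    }
    int tmp = global_a;                  /* 4: guaranteed to see 100 */
    printf("tmp = %d\n", tmp);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, threadA, NULL);
    pthread_create(&b, NULL, threadB, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Compiled with -pthread, threadB always prints tmp = 100, because the unlock/lock pair on b_mutex carries the write to global_a across threads.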

What the POSIX standard says:

As for synchronization details in the POSIX standard, the only reference I have found (and that others refer to) is the short chapter about Memory Synchronization. It says that pthread_mutex_lock (and some other functions)

synchronize memory with respect to other threads

Some interpret this as a full memory barrier guarantee; others (including me) prefer to think of the classic guarantees, where locking and waiting actions provide memory acquire semantics, and unlocking and notifying ones provide memory release semantics. See, e.g., this mail.

There is no happens-before term in POSIX, but it can be defined in the usual way, taking into account the memory ordering guarantees (in one's interpretation).
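
As a rough illustration of that acquire/release reading, here is an analogy in C11 atomics (my own sketch; this is not something the POSIX text itself specifies, and it is not how a mutex is implemented):

#include <stdatomic.h>

atomic_int released;   /* stands in for "threadA has unlocked b_mutex" */
int global_a;

void thread_a(void)
{
    global_a = 100;
    /* unlock ~ release: earlier writes cannot be reordered after it */
    atomic_store_explicit(&released, 1, memory_order_release);
}

void thread_b(void)
{
    /* lock ~ acquire: later reads cannot be reordered before it */
    while (atomic_load_explicit(&released, memory_order_acquire) == 0)
        ;
    int tmp = global_a;   /* reads 100 once the release store is observed */
    (void)tmp;
}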



Answer 2:

If you can guarantee the execution sequence, then yes. In fact, if you can guarantee the execution sequence, you do not even need the lock on some architectures.

A lock actually does three things:

  1. It keeps different critical sections from executing at the same time. Note that there is no mention of memory here; it only guarantees that the protected code in different threads will not run simultaneously.

  2. On some architectures it inserts cache-coherence instructions, which force multiprocessor systems to flush data to real memory. You should not normally worry about this case, because nowadays "a multiprocessor is cache consistent if all writes to the same memory location are performed in some sequential order".

  3. It inserts memory barrier instructions, which tell the processor not to reorder execution around them (see the sketch below).
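
For point 3, here is a sketch of what inserting an explicit full barrier looks like, using the GCC/Clang builtin __sync_synchronize (my illustration; a real mutex implementation gets an equivalent effect from its own lock/unlock instructions):

/* Orders the store to *slot before the store to *ready_flag,
   both in the compiler's output and on the processor. */
void publish(int *slot, int value, int *ready_flag)
{
    *slot = value;
    __sync_synchronize();   /* full memory barrier */
    *ready_flag = 1;
}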

Also, your compiler may break things as well, so declare your variable as volatile.