C++ Thread, shared data

Published 2019-01-13 10:29

Question:

I have an application where two threads are running... Is there any certainty that when I change a global variable from one thread, the other will notice the change? I don't have any synchronization or mutual-exclusion system in place... but should this code work all the time (imagine a global bool named dataUpdated):

Thread 1:

while(1) {
    if (dataUpdated)
        updateScreen();
    doSomethingElse();
}

Thread 2:

while(1) {
    if (doSomething())
        dataUpdated = true;
}

Does a compiler like gcc optimize this code in a way that it doesn't check the global value at run time, only considering its value at compile time (because it never gets changed in the same thread)?

PS: This being a game-like application, it really doesn't matter if there is a read while the value is being written... all that matters is that the change gets noticed by the other thread.

Answer 1:

Yes. No. Maybe.

First, as others have mentioned, you need to make dataUpdated volatile; otherwise the compiler may be free to hoist the read out of the loop (depending on whether or not it can see that doSomethingElse doesn't touch it).

Secondly, depending on your processor and ordering needs, you may need memory barriers. volatile is enough to guarantee that the other processor will see the change eventually, but not enough to guarantee that the changes will be seen in the order they were performed. Your example only has one flag, so it doesn't really show this phenomenon. If you need and use memory barriers, you should no longer need volatile.

Volatile considered harmful and Linux Kernel Memory Barriers are good background on the underlying issues; I don't really know of anything similar written specifically for threading. Thankfully threads don't raise these concerns nearly as often as hardware peripherals do, though the sort of case you describe (a flag indicating completion, with other data presumed to be valid if the flag is set) is exactly the sort of thing where ordering matters...



Answer 2:

Here is an example that uses Boost condition variables:

bool _updated = false;
boost::mutex _access;
boost::condition _condition;

bool updated()
{
  return _updated;
}

void thread1()
{
  boost::mutex::scoped_lock lock(_access);
  while (true)
  {
    boost::xtime xt;
    boost::xtime_get(&xt, boost::TIME_UTC);
    xt.sec += 1; // wait at most one second before giving up
    // note that the third parameter to timed_wait is a predicate function that
    // is called - not the address of a variable to check
    if (_condition.timed_wait(lock, xt, &updated))
    {
      _updated = false; // consume the flag before acting on it
      updateScreen();
    }
    doSomethingElse();
  }
}

void thread2()
{
  while (true)
  {
    if (doSomething())
    {
      boost::mutex::scoped_lock lock(_access);
      _updated = true;
      _condition.notify_one(); // wake the waiting thread
    }
  }
}


Answer 3:

Use a lock. Always, always use a lock to access shared data. Marking the variable as volatile will prevent the compiler from optimizing away the memory read, but will not prevent other problems such as memory reordering. Without a lock there is no guarantee that the memory writes in doSomething() will be visible in the updateScreen() function.

The only other safe way is to use a memory fence, either explicitly or implicitly (using an Interlocked* function, for example).



Answer 4:

Use the volatile keyword to hint to the compiler that the value can change at any time.

volatile int myInteger;

The above will guarantee that every access to the variable goes to and from memory, without any caching optimizations; as a result, all threads running on the same processor will "see" changes to the variable with the same semantics as the code reads.

Chris Jester-Young pointed out that coherency concerns with such a variable's value may arise on multi-processor systems. This is a consideration, and it depends on the platform.

Actually, there are really two platform-dependent considerations to think about: coherency and atomicity of the memory transactions.

Atomicity is a consideration for both single- and multi-processor platforms. The issue arises because the variable is likely multi-byte in nature, and the question is whether one thread could see a partial update to the value: some bytes changed, a context switch, and an invalid value read by the interrupting thread. A single variable that is at the natural machine-word size or smaller, and naturally aligned, should not be a concern. Specifically, an int should always be OK in this regard as long as it is aligned - which should be the default case for the compiler.

Relative to coherency, this is a potential concern in a multi-processor system. The question is whether the system implements full cache coherency between processors. If implemented, this is typically done in hardware with the MESI protocol. The question didn't state a platform, but both Intel x86 and PowerPC platforms are cache-coherent across processors for normally mapped program data regions. Therefore this type of issue should not be a concern for ordinary data memory accesses between threads, even if there are multiple processors.

The final atomicity issue is read-modify-write atomicity: how do you guarantee that if a value is read, updated, and then written back, this happens atomically - even across processors, if there is more than one? For this to work without specific synchronization objects, all potential threads accessing the variable would have to be readers only, except that at most one thread can ever be the writer at any time. If that is not the case, then you do need a sync object to ensure atomic read-modify-write actions on the variable.



Answer 5:

Your solution will use 100% CPU, among other problems. Google for "condition variable".



Answer 6:

Chris Jester-Young pointed out that:

This only works under Java 1.5+'s memory model. The C++ standard does not address threading, and volatile does not guarantee memory coherency between processors. You do need a memory barrier for this.

That being so, the only true answer is implementing a synchronization system, right?



Answer 7:

Use the volatile keyword to hint to the compiler that the value can change at any time.

volatile int myInteger;


Answer 8:

No, it's not certain. If you declare the variable volatile, then the compiler is supposed to generate code that always loads the variable from memory on a read.



Answer 9:

If the scope is right ("extern", global, etc.) then the change will be noticed. The question is when? And in what order?

The problem is that the compiler can, and frequently will, reorder your logic to fill all its concurrent pipelines as a performance optimization.

It doesn't really show in your specific example because there aren't any other instructions around your assignment, but imagine functions called after your bool assignment executing before the assignment does.

Check out "Pipeline hazard" on Wikipedia or search Google for "compiler instruction reordering".



Answer 10:

As others have said the volatile keyword is your friend. :-)

You'll most likely find that your code works when all of the optimisation options are disabled in gcc. In this case (I believe) it treats everything as volatile, and as a result the variable is accessed in memory for every operation.

With any sort of optimisation turned on, the compiler will attempt to use a local copy held in a register. Depending on your functions, this may mean that you see the change in the variable only intermittently or, at worst, never.

Using the volatile keyword indicates to the compiler that the contents of this variable can change at any time and that it should not use a locally cached copy.

With all of that said, you may find better results (as alluded to by Jeff) through the use of a semaphore or condition variable.

This is a reasonable introduction to the subject.