Does the effect of relaxed memory order extend beyond the life of a thread?

Posted 2020-04-14 07:28

Let's say that inside a C++11 program, we have a main thread named A that launches an asynchronous thread named B. Inside thread B, we perform a store on an atomic variable with std::memory_order_relaxed. Then thread A joins with thread B. Then thread A launches another thread named C that performs a load on the same atomic variable, also with std::memory_order_relaxed. Is it possible that the value thread C loads differs from the value thread B wrote? In other words, does relaxed memory consistency extend even beyond the life of a thread?

To test this, I wrote a simple program and ran it many times. The program never reports a mismatch. I'm thinking that since thread A imposes an order on the thread launches, a mismatch cannot happen. However, I'm not sure of it.

#include <atomic>
#include <iostream>
#include <future>

int main() {

    static const int nTests = 100000;
    std::atomic<int> myAtomic( 0 );

    auto storeFunc = [&]( int inNum ){
        myAtomic.store( inNum, std::memory_order_relaxed );
    };

    auto loadFunc = [&]() {
        return myAtomic.load( std::memory_order_relaxed );
    };

    for( int ttt = 1; ttt <= nTests; ++ttt ) {
        auto writingThread = std::async( std::launch::async, storeFunc, ttt );
        writingThread.get();
        auto readingThread = std::async( std::launch::async, loadFunc );
        auto readVal = readingThread.get();
        if( readVal != ttt ) {
            std::cout << "mismatch!\t" << ttt << "\t!=\t" << readVal << "\n";
            return 1;
        }
    }

    std::cout << "done.\n";
    return 0;

}

2 Answers

姐就是有狂的资本
#2 · 2020-04-14 07:32

If you want to test something like this, there are model checkers you can use to explore all possible executions (subject to some esoteric limitations) for a test case.
See http://plrg.eecs.uci.edu/c11modelchecker.html

太酷不给撩
#3 · 2020-04-14 07:59

Before portable threading platforms generally offered the ability to specify memory visibility or place explicit memory barriers, portable code relied exclusively on explicit synchronization (things like mutexes) and implicit synchronization.

Generally, before a thread is created, some data structures are set up that the thread will access when it starts up. To avoid having to use a mutex just to implement this common pattern, thread creation was defined as an implicitly synchronizing event. It's equally common to join a thread and then look at some results it computed. Again, to avoid having to use a mutex just to implement this common pattern, joining a thread is defined as an implicitly synchronizing event.

Since thread creation and termination are defined as synchronizing operations, joining a thread necessarily happens after that thread terminates. Thus you will see anything that necessarily happened before the thread terminated. The same is true of code that changes some variables and then creates a thread: the new thread necessarily sees all the changes that happened before it was created. Synchronization on thread creation or termination is just like synchronization on a mutex. Synchronizing operations create these kinds of ordering relationships that ensure memory visibility.

As SergeyA mentioned, you should definitely never try to prove something in the multithreaded world by testing. Certainly if a test fails, that proves you can't rely on the thing you tested. But even if a test succeeds every way you can think of to test it, that doesn't mean it won't fail on some platform, CPU, or library that you didn't test. You can never prove something like this is reliable by that kind of testing.
