Native mutex implementation

Posted 2019-06-02 01:19

So in one of my moments of illumination I started to think about how the hell Windows/Linux actually implement the mutex. I've implemented this synchronizer in 100... different ways, on many different architectures, but I never thought about how it is really implemented in a big-ass OS. For example, in the ARM world I made some of my synchronizers by disabling interrupts, but I always suspected that it wasn't a really good way to do it.

I tried to "swim" through the Linux kernel source but, just as I thought, I couldn't find anything that satisfied my curiosity. I'm not an expert in threading, but I have a solid grasp of all the basic and intermediate concepts. So, does anyone know how a mutex is really implemented?

4 Answers
对你真心纯属浪费
2019-06-02 01:37

A quick look at code apparently from one Linux distribution seems to indicate that it is implemented using an interlocked compare and exchange. So, in some sense, the OS isn't really implementing it since the interlocked operation is probably handled at the hardware level.

Edit: As Hans points out, the interlocked exchange does the compare and exchange as one atomic operation. Here is the documentation for the Windows version. For fun, I just now wrote a small test to show a really simple example of creating a mutex like that. This is a simple acquire and release test.

#include <windows.h>
#include <assert.h>
#include <stdio.h>

struct homebrew {
    LONG *mutex;
    int *shared;
    int mine;
};

#define NUM_THREADS 10
#define NUM_ACQUIRES 100000

DWORD WINAPI SomeThread( LPVOID lpParam ) 
{ 
    struct homebrew *test = (struct homebrew*)lpParam;

    while ( test->mine < NUM_ACQUIRES ) {
        // Test and set the mutex.  If it currently has value 0, then it
        // is free.  Setting 1 means it is owned.  This interlocked function does
        // the test and set as an atomic operation
        if ( 0 == InterlockedCompareExchange( test->mutex, 1, 0 )) {
            // this thread now owns the mutex.  Increment the shared variable
            // without an atomic increment (relying on mutex ownership to protect it)
            (*test->shared)++;  
            test->mine++;
            // Release the mutex (4 byte aligned assignment is atomic)
            *test->mutex = 0;
        }
    }
    return 0;
}

int main( int argc, char* argv[] )
{
    LONG mymutex = 0;  // zero means the mutex is free
    int  shared = 0;
    HANDLE threads[NUM_THREADS];
    struct homebrew test[NUM_THREADS];
    int i;

    // Initialize each thread's structure.  All share the same mutex and a shared
    // counter
    for ( i = 0; i < NUM_THREADS; i++ ) {
        test[i].mine = 0; test[i].shared = &shared; test[i].mutex = &mymutex;
    }

    // create the threads and then wait for all to finish
    for ( i = 0; i < NUM_THREADS; i++ ) 
        threads[i] = CreateThread(NULL, 0, SomeThread, &test[i], 0, NULL);

    for ( i = 0; i < NUM_THREADS; i++ ) 
        WaitForSingleObject( threads[i], INFINITE );

    // Verify all increments occurred atomically
    printf( "shared = %d (%s)\n", shared,
            shared == NUM_THREADS * NUM_ACQUIRES ? "correct" : "wrong" );
    for ( i = 0; i < NUM_THREADS; i++ ) {
        if ( test[i].mine != NUM_ACQUIRES ) {
            printf( "Thread %d cheated.  Only %d acquires.\n", i, test[i].mine );
        }
    }

    return 0;
}

If I comment out the InterlockedCompareExchange call and just let all the threads run the increments in a free-for-all fashion, the results do show failures. Running it 10 times, for example, without the interlocked compare call:

shared = 748694 (wrong)
shared = 811522 (wrong)
shared = 796155 (wrong)
shared = 825947 (wrong)
shared = 1000000 (correct)
shared = 795036 (wrong)
shared = 801810 (wrong)
shared = 790812 (wrong)
shared = 724753 (wrong)
shared = 849444 (wrong)

The curious thing is that one time the results showed no incorrect contention. That might be because there is no "everyone start now" synchronization; maybe all the threads happened to start and finish in order in that case. But with the InterlockedCompareExchange call in place, it runs without failure (or at least it ran 100 times without failure ... which doesn't prove I didn't write a subtle bug into the example).
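
For what it's worth, one way to add an "everyone start now" point (this is not part of the test above; the names are invented for illustration) is a manual-reset event that main() signals only after all the worker threads have been created:

#include <windows.h>
#include <stdio.h>

static HANDLE start_event;   /* manual-reset "everyone start now" gate */

static DWORD WINAPI worker( LPVOID arg )
{
    /* Block here until main() releases all the threads at once. */
    WaitForSingleObject( start_event, INFINITE );
    printf( "thread %u running\n", (unsigned)(UINT_PTR)arg );
    return 0;
}

int main( void )
{
    HANDLE threads[10];
    int i;

    /* Manual-reset, initially non-signaled: the threads pile up on the wait. */
    start_event = CreateEvent( NULL, TRUE, FALSE, NULL );

    for ( i = 0; i < 10; i++ )
        threads[i] = CreateThread( NULL, 0, worker, (LPVOID)(UINT_PTR)i, 0, NULL );

    SetEvent( start_event );   /* release them all at the same moment */

    for ( i = 0; i < 10; i++ )
        WaitForSingleObject( threads[i], INFINITE );
    return 0;
}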

2019-06-02 01:40

Here is the discussion from the people who implemented it ... very interesting as it shows the tradeoffs ..

Several posts from Linus T ... of course

Animai°情兽
2019-06-02 01:58

In earlier, pre-POSIX days I used to implement synchronization using a native machine word (e.g. a 16- or 32-bit word) and the test-and-set instruction lurking on every serious processor. This instruction is guaranteed to test the value of a word and set it in one atomic operation. That provides the basis for a spinlock, and from it a hierarchy of synchronization functions can be built. The simplest is of course just a spinlock that performs a busy wait, which is not an option for more than transitory sync'ing; the next step up is a spinlock that drops the process's time slice at each iteration for lower system impact. Notions like semaphores, mutexes, monitors etc. can then be built by getting into the kernel scheduling code.
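
A minimal sketch of that idea, assuming C11 atomics as a stand-in for the raw test-and-set instruction (atomic_flag is essentially a portable test-and-set); the spinlock type and function names here are made up, and sched_yield() is the POSIX way to drop the time slice (on Windows you would use Sleep(0) or SwitchToThread()):

#include <stdatomic.h>
#include <sched.h>          /* sched_yield() -- POSIX */

typedef struct {
    atomic_flag flag;       /* clear = free, set = owned */
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

/* Plain spinlock: busy-wait until the test-and-set finds the flag clear. */
static void spin_lock( spinlock_t *l )
{
    while ( atomic_flag_test_and_set_explicit( &l->flag, memory_order_acquire ))
        ;                   /* spin */
}

/* Friendlier variant: give up the rest of the time slice on each failed try. */
static void spin_lock_yield( spinlock_t *l )
{
    while ( atomic_flag_test_and_set_explicit( &l->flag, memory_order_acquire ))
        sched_yield();
}

static void spin_unlock( spinlock_t *l )
{
    atomic_flag_clear_explicit( &l->flag, memory_order_release );
}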

As I recall the prime usage was to implement message queues to permit multiple clients to access a database server. Another was a very early real time car race result and timing system on a quite primitive 16 bit machine and OS.

These days I use Pthreads and semaphores and Windows Events/Mutexes (mutices?) etc. and don't give a thought as to how they work, although I must admit that having been down in the engine room does give one an intuitive feel for better and more efficient multiprocessing.

成全新的幸福
2019-06-02 01:58

In the Windows world: before Windows Vista the mutex was implemented with a compare-exchange that changes the state of the mutex from Empty to BeingUsed. For the other threads that then wait on the mutex, the CAS will obviously fail, and they must be added to the mutex's wait queue for later notification. Those queue operations (add/remove/check) are protected by a common lock in the Windows kernel. After Windows XP, the mutex started to use a spin lock for performance reasons, becoming a self-sufficient object.
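
Just to illustrate the general shape of that design, here is a rough sketch (not the actual Windows internals; the struct and function names are invented): a compare-exchange fast path in user mode, with a kernel event plus a waiter count standing in for the mutex's wait queue.

#include <windows.h>

struct simple_mutex {
    LONG   state;     /* 0 = free, 1 = owned */
    LONG   waiters;   /* threads currently blocked in the slow path */
    HANDLE wake;      /* auto-reset event used to wake one waiter */
};

void simple_mutex_init( struct simple_mutex *m )
{
    m->state = 0;
    m->waiters = 0;
    /* Auto-reset: one SetEvent releases exactly one waiting thread. */
    m->wake = CreateEvent( NULL, FALSE, FALSE, NULL );
}

void simple_mutex_lock( struct simple_mutex *m )
{
    /* Fast path: a single CAS, no kernel call, when the mutex is uncontended. */
    while ( InterlockedCompareExchange( &m->state, 1, 0 ) != 0 ) {
        /* Slow path: register as a waiter, recheck, then sleep in the kernel. */
        InterlockedIncrement( &m->waiters );
        if ( InterlockedCompareExchange( &m->state, 1, 0 ) == 0 ) {
            InterlockedDecrement( &m->waiters );
            return;       /* got it after all; never blocked */
        }
        WaitForSingleObject( m->wake, INFINITE );
        InterlockedDecrement( &m->waiters );
        /* Woken up: loop and retry the CAS (we may lose the race again). */
    }
}

void simple_mutex_unlock( struct simple_mutex *m )
{
    InterlockedExchange( &m->state, 0 );   /* release the lock with a full barrier */
    if ( m->waiters > 0 )
        SetEvent( m->wake );               /* hand the wakeup to one waiter */
}

A real hybrid lock (for example a Windows CRITICAL_SECTION set up with InitializeCriticalSectionAndSpinCount) also spins a bounded number of times before falling back to the kernel wait, which is the "spin lock for performance reasons" part mentioned above.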

In the Unix world I didn't dig much further, but it is probably very similar to Windows 7.

Finally, for kernels that run on a single processor, the best approach is to disable interrupts when entering the critical section and re-enable them when exiting.
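
On a single-core ARM Cortex-M part, for example, that can be as small as the following sketch (assuming the CMSIS intrinsics __get_PRIMASK(), __set_PRIMASK() and __disable_irq() are available from the device header); saving and restoring PRIMASK, instead of blindly re-enabling, keeps the sections safe to nest:

#include <stdint.h>
/* plus the device/CMSIS header that provides __disable_irq() and friends */

/* Save the current interrupt mask, then mask interrupts. */
static inline uint32_t critical_enter( void )
{
    uint32_t primask = __get_PRIMASK();
    __disable_irq();
    return primask;
}

/* Restore whatever mask state was in effect before critical_enter(). */
static inline void critical_exit( uint32_t primask )
{
    __set_PRIMASK( primask );
}

/* Usage:
 *     uint32_t key = critical_enter();
 *     shared_counter++;              // touch shared data safely
 *     critical_exit( key );
 */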
