Why the lock inside AsyncLock does not block the thread

Posted 2019-09-20 19:46

Question:

I'm trying to understand how the AsyncLock works.

First of all, here's a snippet to prove that it actually works:

// Requires the Nito.AsyncEx NuGet package.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Nito.AsyncEx;

var l = new AsyncLock();
var tasks = new List<Task>();
while (true)
{
    Console.ReadLine();
    var i = tasks.Count + 1;
    tasks.Add(Task.Run(async () =>
    {
        Console.WriteLine($"[{i}] Acquiring lock ...");
        using (await l.LockAsync())
        {
            Console.WriteLine($"[{i}] Lock acquired");
            await Task.Delay(-1); // hold the lock forever
        }
    }));
}

By "works" I mean that you can run as many tasks as you want (by hitting Enter) and the number of threads doesn't grow. If you replace it with traditional lock, you'll see that the new threads are started, which is what we try to avoid.

But the first thing you see in the source code is... the lock

Can somebody please explain to me how this works, why it doesn't block, and what I am missing here?

Answer 1:

Can somebody please explain to me how this works, why it doesn't block, and what I am missing here?

The short answer is that lock is just an internal mechanism used to guarantee thread safety. The lock is never exposed in any way, and there's no way for any thread to hold that lock for any real amount of time. In this way, it's similar to the locks used internally by various concurrent collections.
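To make that concrete, here is a deliberately simplified sketch of the general pattern (my own illustration, not the actual Nito.AsyncEx source): the lock only guards a flag and a queue for a handful of instructions, so nothing ever waits on it for long.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class SimpleAsyncLock
{
    private readonly object _mutex = new object();
    private readonly Queue<TaskCompletionSource<IDisposable>> _waiters =
        new Queue<TaskCompletionSource<IDisposable>>();
    private bool _taken;

    public Task<IDisposable> LockAsync()
    {
        lock (_mutex) // held only for these few instructions
        {
            if (!_taken)
            {
                _taken = true;
                return Task.FromResult<IDisposable>(new Releaser(this));
            }

            // The lock is busy: instead of blocking, hand back an incomplete task.
            var tcs = new TaskCompletionSource<IDisposable>(
                TaskCreationOptions.RunContinuationsAsynchronously);
            _waiters.Enqueue(tcs);
            return tcs.Task;
        }
    }

    private void Release()
    {
        TaskCompletionSource<IDisposable> next = null;
        lock (_mutex) // again, held only briefly
        {
            if (_waiters.Count > 0)
                next = _waiters.Dequeue();
            else
                _taken = false;
        }

        // Complete the next waiter outside the lock; its awaiting code now "owns" the lock.
        next?.SetResult(new Releaser(this));
    }

    private sealed class Releaser : IDisposable
    {
        private readonly SimpleAsyncLock _owner;
        public Releaser(SimpleAsyncLock owner) { _owner = owner; }
        public void Dispose() { _owner.Release(); }
    }
}

A caller that finds the lock busy is never blocked; it just gets back a task that completes later, when Release hands ownership to the next waiter.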

There is an alternate approach that uses lock-free programming, but I have found lock-free programming to be extremely difficult to write, read, and maintain. A great example of this (which is sadly not online) was a bunch of Dr. Dobb's articles in the late '90s, each one trying to out-do the last with a better lock-free queue implementation. It turns out they were all faulty - in some cases, the bugs took more than a decade to find.

For my own code, I do not use lock-free programming, except where the correctness of the code is trivially obvious.


As far as the async lock vs. lock concepts go, I'm going to take a stab at explaining this. There's a feeling I get that I have only felt when working with asynchronous coordination primitives. It's something I've thought a lot about writing a blog post on, but I don't have the right words to make it understandable. That said, here goes...

Asynchronous coordination primitives exist on a completely different plane than normal coordination primitives. Synchronous primitives block threads and signal threads. Asynchronous primitives just work on plain objects; the blocking or signaling is just "by convention".

So, with a normal lock, the calling code must take the lock immediately. But with an asynchronous "lock", the attempted lock is just a request, just an object. The calling code doesn't even need to await it. It's possible to request several locks and await them all together with Task.WhenAll. Or even combine them with other things; code can do crazy things like (a)wait for two locks to both be free or for a signal (like AsyncManualResetEvent) to be sent, and then cancel the lock requests if the signal comes in first.
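Here's a rough sketch of that last idea: race a lock request against a signal and cancel the request if the signal wins. The types are from Nito.AsyncEx; converting the LockAsync result with AsTask() and the exact cancellation handling are my assumptions about how you might wire it up, not code from the library's docs.

using System;
using System.Threading;
using System.Threading.Tasks;
using Nito.AsyncEx;

var mutex = new AsyncLock();
var abort = new AsyncManualResetEvent(); // some other code may call abort.Set()

using (var cts = new CancellationTokenSource())
{
    Task<IDisposable> lockRequest = mutex.LockAsync(cts.Token).AsTask();
    Task signal = abort.WaitAsync();

    Task winner = await Task.WhenAny(lockRequest, signal);
    if (winner == signal)
    {
        // The signal came first: withdraw the pending lock request.
        // (Real code would also observe/await the canceled request.)
        cts.Cancel();
    }
    else
    {
        using (await lockRequest) // we got the lock first
        {
            // ... critical section ...
        }
    }
}

Nothing here blocks a thread; the lock request is just another task that can be combined, awaited, or abandoned like any other.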

From a thread perspective, it's kinda-sorta like user-mode thread scheduling. There's also some similarities to cooperative multitasking (as opposed to preemptive). But overall, the asynchronous primitives are "lifted" to a different plane, where one works only with objects and blocks of code, not threads.



Answer 2:

The lock inside AsyncLock is being released very quickly. Each task that tries to acquire the AsyncLock successfully acquires its internal lock, and the actual locking logic is done with a queue.
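In other words, a second LockAsync call made while the lock is held still returns immediately; the caller just gets back a task that isn't completed yet. A small sketch (again assuming AsTask() to get at the underlying Task):

using System;
using System.Threading.Tasks;
using Nito.AsyncEx;

var mutex = new AsyncLock();

IDisposable first = await mutex.LockAsync();            // acquired immediately

Task<IDisposable> second = mutex.LockAsync().AsTask();  // returns right away, no blocking
Console.WriteLine(second.IsCompleted);                  // False: the request is queued, the thread is free

first.Dispose();                                        // release; the queue hands the lock on
using (await second)                                    // now the second request completes
{
    Console.WriteLine("second acquired");
}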



Answer 3:

By wrapping LockAsync() within a using block, the lock is released when the block ends: LockAsync returns a disposable Key object, which is disposed at the end of the using block, and disposing it releases the lock. See https://github.com/StephenCleary/AsyncEx/blob/master/src/Nito.AsyncEx.Coordination/AsyncLock.cs#L182-L185
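Roughly speaking, the using statement in the question's snippet expands to something like this (a sketch of the compiler's try/finally expansion, with l being the AsyncLock from the question):

// Roughly what "using (await l.LockAsync()) { ... }" compiles down to:
IDisposable key = await l.LockAsync();
try
{
    // ... code that runs while holding the lock ...
}
finally
{
    key.Dispose(); // disposing the key is what releases the AsyncLock
}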