It's recommended that one use ConfigureAwait(false) whenever you can, especially in libraries, because it can help avoid deadlocks and improve performance.
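For example, a typical library helper following that advice looks something like this (illustrative only, not from the library in question):

```csharp
using System.IO;
using System.Threading.Tasks;

public static class FileUtil
{
    // Illustrative library method: the await uses ConfigureAwait(false)
    // so the continuation doesn't marshal back to the caller's
    // SynchronizationContext (e.g. a UI or ASP.NET request context).
    public static async Task<string> ReadAllTextAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return await reader.ReadToEndAsync().ConfigureAwait(false);
        }
    }
}
```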
I have written a library that makes heavy use of async (it accesses web services for a DB). The users of the library were getting a deadlock, and after much painful debugging and tinkering I tracked it down to a single use of await Task.Yield(). Everywhere else that I have an await, I use .ConfigureAwait(false); however, that is not supported on Task.Yield().

What is the recommended solution for situations where one needs the equivalent of Task.Yield().ConfigureAwait(false)?
I've read about how there was a SwitchTo method that was removed. I can see why that could be dangerous, but why is there no equivalent of Task.Yield().ConfigureAwait(false)?
Edit:
To provide further context for my question, here is some code. I am implementing an open source library for accessing DynamoDB (a distributed database as a service from AWS) that supports async. A number of operations return IAsyncEnumerable<T> as provided by the IX-Async library. That library doesn't provide a good way of generating async enumerables from data sources that provide rows in "chunks", i.e. each async request returns many items. So I have my own generic type for this. The library supports a read-ahead option allowing the user to specify how much data should be requested ahead of when it is actually needed by a call to MoveNext().
Basically, how this works is that I make requests for chunks by calling GetMore() and passing state along between these calls. I put those tasks in a chunks queue, dequeue them, and turn them into actual results that I put in a separate queue. The NextChunk() method is the issue here. Depending on the value of ReadAhead, I will keep getting the next chunk as soon as the last one is done (All), or not until a value is needed but not available (None), or only get the next chunk beyond the values that are currently being used (Some). Because of that, getting the next chunk should run in parallel and not block getting the next value. The enumerator code for this is:
private class ChunkedAsyncEnumerator<TState, TResult> : IAsyncEnumerator<TResult>
{
    private readonly ChunkedAsyncEnumerable<TState, TResult> enumerable;
    private readonly ConcurrentQueue<Task<TState>> chunks = new ConcurrentQueue<Task<TState>>();
    private readonly Queue<TResult> results = new Queue<TResult>();
    private CancellationTokenSource cts = new CancellationTokenSource();
    private TState lastState;
    private TResult current;
    private bool complete; // whether we have reached the end

    public ChunkedAsyncEnumerator(ChunkedAsyncEnumerable<TState, TResult> enumerable, TState initialState)
    {
        this.enumerable = enumerable;
        lastState = initialState;
        if(enumerable.ReadAhead != ReadAhead.None)
            chunks.Enqueue(NextChunk(initialState));
    }

    private async Task<TState> NextChunk(TState state, CancellationToken? cancellationToken = null)
    {
        await Task.Yield(); // ** causes deadlock
        var nextState = await enumerable.GetMore(state, cancellationToken ?? cts.Token).ConfigureAwait(false);
        if(enumerable.ReadAhead == ReadAhead.All && !enumerable.IsComplete(nextState))
            chunks.Enqueue(NextChunk(nextState)); // This is a read ahead, so it shouldn't be tied to our token
        return nextState;
    }

    public Task<bool> MoveNext(CancellationToken cancellationToken)
    {
        cancellationToken.ThrowIfCancellationRequested();
        if(results.Count > 0)
        {
            current = results.Dequeue();
            return TaskConstants.True;
        }
        return complete ? TaskConstants.False : MoveNextAsync(cancellationToken);
    }

    private async Task<bool> MoveNextAsync(CancellationToken cancellationToken)
    {
        Task<TState> nextStateTask;
        if(chunks.TryDequeue(out nextStateTask))
            lastState = await nextStateTask.WithCancellation(cancellationToken).ConfigureAwait(false);
        else
            lastState = await NextChunk(lastState, cancellationToken).ConfigureAwait(false);
        complete = enumerable.IsComplete(lastState);
        foreach(var result in enumerable.GetResults(lastState))
            results.Enqueue(result);
        if(!complete && enumerable.ReadAhead == ReadAhead.Some)
            chunks.Enqueue(NextChunk(lastState)); // This is a read ahead, so it shouldn't be tied to our token
        return await MoveNext(cancellationToken).ConfigureAwait(false);
    }

    public TResult Current { get { return current; } }

    // Dispose() implementation omitted
}
I make no claim this code is perfect. Sorry it is so long; I wasn't sure how to simplify it. The important part is the NextChunk method and the call to Task.Yield(). This functionality is used through a static construction method:
internal static class AsyncEnumerableEx
{
    public static IAsyncEnumerable<TResult> GenerateChunked<TState, TResult>(
        TState initialState,
        Func<TState, CancellationToken, Task<TState>> getMore,
        Func<TState, IEnumerable<TResult>> getResults,
        Func<TState, bool> isComplete,
        ReadAhead readAhead = ReadAhead.None)
    { ... }
}
I noticed you edited your question after you accepted the existing answer, so perhaps you're interested in more rants on the subject. Here you go :)
That's recommended, yes, but only if you're absolutely sure that any API you're calling in your implementation (including Framework APIs) doesn't depend on any properties of the synchronization context. That's especially important for library code, and even more so if the library is suitable for both client-side and server-side use. E.g., CurrentCulture is a commonly overlooked one: it would never be an issue for a desktop app, but it well may be for an ASP.NET app.

Back to your code:
Most likely, the deadlock is caused by the client of your library, because they use Task.Result (or Task.Wait, Task.WaitAll, Task.IAsyncResult.AsyncWaitHandle, etc.; let them search) somewhere in the outer frame of the call chain. Albeit Task.Yield() is redundant here, this is not your problem in the first place, but rather theirs: they shouldn't be blocking on asynchronous APIs, and should be using "Async All the Way", as also explained in Stephen Cleary's article you linked.

Removing Task.Yield() may or may not solve this problem, because enumerable.GetMore() can also use some await SomeApiAsync() without ConfigureAwait(false), thus posting the continuation back to the caller's synchronization context. Moreover, "SomeApiAsync" can happen to be a well-established Framework API which is still vulnerable to a deadlock, like SendMailAsync; we'll get back to it later.

Overall, you should only be using Task.Yield() if for some reason you want to return to the caller immediately ("yield" the execution control back to the caller), and then continue asynchronously, at the mercy of the SynchronizationContext installed on the calling thread (or ThreadPool, if SynchronizationContext.Current == null). The continuation well may be executed on the same thread upon the next iteration of the app's core message loop.

So, the right thing would be to avoid blocking code all the way. However, say you still want to make your code deadlock-proof, you don't care about the synchronization context, and you're sure the same is true about any system or 3rd-party API you use in your implementation.
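To make the failure mode concrete, here is a minimal, self-contained sketch of that deadlock; the SingleThreadSyncContext below is a toy stand-in of mine for a UI message loop, not anything from the library:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Toy single-threaded context: Post() just queues work for "the UI thread".
class SingleThreadSyncContext : SynchronizationContext
{
    public readonly BlockingCollection<(SendOrPostCallback Callback, object State)> Queue
        = new BlockingCollection<(SendOrPostCallback, object)>();

    public override void Post(SendOrPostCallback d, object state) => Queue.Add((d, state));
}

static class DeadlockDemo
{
    public static async Task<int> ComputeAsync()
    {
        await Task.Yield(); // the continuation is Post()ed to the current context
        return 42;
    }

    static void Main()
    {
        SynchronizationContext.SetSynchronizationContext(new SingleThreadSyncContext());

        var task = ComputeAsync();
        // The continuation is now sitting in the context's queue, waiting for
        // this thread to pump it. Blocking here instead of pumping:
        bool finished = task.Wait(500); // times out: classic deadlock
        Console.WriteLine(finished ? "completed" : "deadlocked (timed out)");
    }
}
```

If the caller pumped the queue (or simply awaited the task) instead of calling Wait, the continuation would run and the task would complete.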
Then, instead of reinventing ThreadPoolEx.SwitchTo (which was removed for a good reason), you could just wrap the body of NextChunk in Task.Run, as suggested in the comments. IMO, this is still a hack, with the same net effect, although a much more readable one than using a variation of ThreadPoolEx.SwitchTo(). Same as SwitchTo, it still has an associated cost: a redundant thread switch, which may hurt ASP.NET performance.

There is another (IMO better) hack, which I proposed to address the deadlock with the aforementioned SendMailAsync; it doesn't incur an extra thread switch. This hack works by temporarily removing the synchronization context for the synchronous scope of the original NextChunk method, so that it won't be captured for the 1st await continuation inside the async lambda, effectively solving the deadlock problem.

Stephen has provided a slightly different implementation while answering the same question. His IgnoreSynchronizationContext restores the original synchronization context on whatever happens to be the continuation's thread after await (which could be a completely different, random pool thread). I'd rather not restore it after await at all, as long as I don't care about it.

Inasmuch as the useful and legit API you're looking for is missing, I filed a request proposing its addition to .NET.
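For reference, the no-extra-thread-switch hack can be sketched roughly like this (TaskExt/WithNoContext are placeholder names of mine, since the original snippet isn't reproduced above):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch of the "remove the context for the synchronous scope" hack.
public static class TaskExt
{
    public static Task<T> WithNoContext<T>(Func<Task<T>> func)
    {
        var context = SynchronizationContext.Current;
        SynchronizationContext.SetSynchronizationContext(null);
        try
        {
            // Invoke the async lambda synchronously up to its 1st await;
            // with no context installed, that await won't capture one,
            // so its continuation goes to the thread pool instead.
            return func();
        }
        finally
        {
            // Restore the context for the synchronous caller.
            SynchronizationContext.SetSynchronizationContext(context);
        }
    }
}
```

NextChunk would then wrap its body (minus the Task.Yield() line) in TaskExt.WithNoContext(async () => { ... }) instead of starting with await Task.Yield(); no thread switch is paid unless an await actually needs to resume somewhere.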
I also added it to vs-threading so that the next release of the Microsoft.VisualStudio.Threading NuGet package will include this API. Note that this library is not VS-specific, so you can use it in your app.
The exact equivalent of Task.Yield().ConfigureAwait(false) (which doesn't exist, since ConfigureAwait is a method on Task and Task.Yield returns a custom awaitable) is simply using Task.Factory.StartNew with CancellationToken.None, TaskCreationOptions.PreferFairness and TaskScheduler.Current. In most cases, however, Task.Run (which uses the default TaskScheduler) is close enough.

You can verify that by looking at the source for YieldAwaiter and seeing that it uses ThreadPool.QueueUserWorkItem/ThreadPool.UnsafeQueueUserWorkItem when TaskScheduler.Current is the default one (i.e. the thread pool) and Task.Factory.StartNew when it isn't.

You can, however, create your own awaitable (as I did) that mimics YieldAwaitable but disregards the SynchronizationContext.

Note: I don't recommend actually using NoContextYieldAwaitable; it's just an answer to your question. You should be using Task.Run (or Task.Factory.StartNew with a specific TaskScheduler).
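Such an awaitable might look roughly like this; it is a sketch mirroring YieldAwaiter's scheduling logic as described above, minus the SynchronizationContext check:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

public struct NoContextYieldAwaitable
{
    public NoContextYieldAwaiter GetAwaiter() => new NoContextYieldAwaiter();

    public struct NoContextYieldAwaiter : INotifyCompletion
    {
        public bool IsCompleted => false; // always yield

        public void OnCompleted(Action continuation)
        {
            var scheduler = TaskScheduler.Current;
            if (scheduler == TaskScheduler.Default)
            {
                // No custom scheduler: queue straight to the thread pool,
                // deliberately ignoring any SynchronizationContext.
                ThreadPool.QueueUserWorkItem(RunAction, continuation);
            }
            else
            {
                Task.Factory.StartNew(continuation, CancellationToken.None,
                    TaskCreationOptions.PreferFairness, scheduler);
            }
        }

        public void GetResult() { }

        private static void RunAction(object state) => ((Action)state)();
    }
}

// Usage, in place of await Task.Yield():
//     await new NoContextYieldAwaitable();
```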