I have an API made in a portable class library which needs to reach out to platform-specific APIs for sending HTTP requests. Here is the method I wrote to do an HTTP POST on WinRT:
public bool Post(IEnumerable<KeyValuePair<string, string>> headers, string data)
{
    bool success = false;
    HttpClient client = new HttpClient(new HttpClientHandler { AllowAutoRedirect = false });
    foreach (var header in headers)
    {
        client.DefaultRequestHeaders.Add(header.Key, header.Value);
    }
    try
    {
        var task = client.PostAsync(endpoint, new StringContent(data, Encoding.UTF8, "text/xml")).ContinueWith(postTask =>
        {
            try
            {
                postTask.Wait(client.Timeout); //Don't wait longer than the client timeout.
                success = postTask.Result.IsSuccessStatusCode;
            }
            catch {}
        }, TaskContinuationOptions.LongRunning);
        task.ConfigureAwait(false);
        task.Wait(client.Timeout);
    }
    catch
    {
        success = false;
    }
    return success;
}
This exhibits an interesting problem, though, when put under any kind of stress: it appears to deadlock internally. For example, if I create 5 threads and send POST requests from them, this method gets to the point where it does nothing but time out. The content never reaches the server, and the ContinueWith code is never executed. However, if I run it serially, or maybe even with 2 or 3 threads, it works OK. The more threads I throw at it, though, the exponentially worse the performance gets.
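For context, the stress scenario is roughly this (simplified; the api instance, headers, and payload here are just placeholders, not my real code):
// Kick off 5 concurrent POSTs; each call blocks a thread pool thread inside Post.
for (int i = 0; i < 5; i++)
{
    Task.Run(() => api.Post(headers, "<request/>"));
}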
Exactly what am I doing wrong here?
I don't think this is where your problem is, but it could be, and it's really easy to implement and test. By default, .NET limits concurrent connections to the same host to 2, so with more than 2 threads you could be blocking on the connection pool. You can add this to your app config:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="300" />
  </connectionManagement>
</system.net>
or in code you can do this:
ServicePointManager.DefaultConnectionLimit = 300;
I'd also consider commenting out the wait in the ContinueWith; I don't think it's necessary.
try
{
    //Comment this line out, you're handling it in the outside task already
    //postTask.Wait(client.Timeout); //Don't wait longer than the client timeout.
    success = postTask.Result.IsSuccessStatusCode;
}
catch {}
And finally, if the two things above don't work, I'd try commenting out this line:
//task.ConfigureAwait(false);
It could be that the combination of Task.Wait plus ConfigureAwait(false) is causing some kind of deadlock, but I'm no expert on why. I just know that I have some really similar code that runs multi-threaded just fine, and I don't have ConfigureAwait(false) in my code, mostly because I tried out the HttpClient library but didn't upgrade to .NET 4.5, so await isn't available.
Here are some things that stick out to me with the current code:
- ContinueWith queues a delegate to run when the task is complete, so there's no need to wait for it.
- LongRunning is not needed here; it will decrease performance because your continuation is very fast, not long running at all.
- ConfigureAwait is meaningless because there's no await (and the return value is discarded anyway).
- The timeout doesn't need to be passed to Task.Wait because the task will already have completed after that timeout anyway.
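If you do keep a blocking wrapper, a minimal sketch with those pieces removed (still sync-over-async, so still prone to blocking problems; shown only to make the points above concrete) would be:
public bool Post(IEnumerable<KeyValuePair<string, string>> headers, string data)
{
    HttpClient client = new HttpClient(new HttpClientHandler { AllowAutoRedirect = false });
    foreach (var header in headers)
    {
        client.DefaultRequestHeaders.Add(header.Key, header.Value);
    }
    try
    {
        // Block once on the whole operation. HttpClient.Timeout already bounds it,
        // so there is no need for ContinueWith, LongRunning, ConfigureAwait, or a Wait timeout.
        var response = client.PostAsync(endpoint, new StringContent(data, Encoding.UTF8, "text/xml")).Result;
        return response.IsSuccessStatusCode;
    }
    catch
    {
        return false;
    }
}
That said, blocking on asynchronous HTTP is still what I'd move away from, which leads to the recommendation below.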
I have an API made in a portable class library which needs to reach out to platform-specific APIs for sending HTTP requests.
I recommend making your API asynchronous since it's doing HTTP. You can use Microsoft.Bcl.Async if you want full async/await support in PCLs.
public async Task<bool> Post(IEnumerable<KeyValuePair<string, string>> headers, string data)
{
    HttpClient client = new HttpClient(new HttpClientHandler { AllowAutoRedirect = false });
    foreach (var header in headers)
    {
        client.DefaultRequestHeaders.Add(header.Key, header.Value);
    }
    try
    {
        var result = await client.PostAsync(endpoint, new StringContent(data, Encoding.UTF8, "text/xml")).ConfigureAwait(false);
        return result.IsSuccessStatusCode;
    }
    catch
    {
        return false;
    }
}
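A hypothetical caller (the api instance, header value, and payload below are made up for illustration) can then start several posts concurrently and await them all, rather than blocking threads:
var headers = new[] { new KeyValuePair<string, string>("Accept", "text/xml") };
// Start 5 posts concurrently; no thread is blocked while they are in flight.
var posts = Enumerable.Range(0, 5).Select(_ => api.Post(headers, "<request/>"));
bool[] results = await Task.WhenAll(posts);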