I'm trying to load test an API. I start a number of tasks at the same time, each executing an HTTP request, and I use Task.WhenAll(mytasks)
to wait for all tasks to finish. The requests look as follows:
using (var response = await client.SendAsync(request).ConfigureAwait(false))
{
    using (var jsonResponse = await response.Content.ReadAsStreamAsync().ConfigureAwait(false))
    {
        var jsonSerializer = new DataContractJsonSerializer(typeof(Borders));
        var borders = (Borders)jsonSerializer.ReadObject(jsonResponse);
        return borders;
    }
}
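For context, this is roughly how the tasks are created and awaited. This is a minimal sketch, not my exact code; the GetBorder signature and the requestUris collection are assumptions for illustration:

// Illustrative only: GetBorder is assumed to be the method containing the snippet above.
var client = new HttpClient();
var tasks = new List<Task<Borders>>();

foreach (var uri in requestUris) // several thousand target URIs
{
    tasks.Add(GetBorder(client, uri)); // every request starts immediately
}

var allBorders = await Task.WhenAll(tasks).ConfigureAwait(false);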
This works fine for up to at least a thousand tasks. However, if I start more than a few thousand tasks, I run into HttpRequestExceptions:
System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at System.Net.ConnectStream.WriteHeadersCallback(IAsyncResult ar)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
--- End of inner exception stack trace ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at BusinessLogic.<GetBorder>d__6d.MoveNext() in c:\BusinessLogic.cs:line 757
So my questions: Why does this happen (with more than 1000 tasks)? How can I prevent it? I could obviously cut my block of tasks into chunks of fewer than 1000, but I would prefer to leave this to the underlying system...
That's not a good idea. The .NET Framework has zero competency at determining the optimal degree of parallelism for IO.
It is usually not a good idea to issue that many requests in parallel because the resources being stressed are likely maxed out well before that point. Your backend server apparently is not made to handle that degree of parallelism; according to the error message, it forcibly cuts you off.
Just because we have easy async IO with await now does not mean you can spam your resources with 1000 parallel requests. Use one of the common solutions for performing a series of async actions with a set degree of parallelism. I like http://blogs.msdn.com/b/pfxteam/archive/2012/03/05/10278165.aspx but there are solutions based on ActionBlock as well.
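As a sketch of the ActionBlock approach (this assumes the System.Threading.Tasks.Dataflow NuGet package and reuses a hypothetical GetBorder method wrapping your snippet; adjust names and the concurrency limit to your situation):

// Requires the System.Threading.Tasks.Dataflow NuGet package.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public async Task<List<Borders>> GetAllBordersAsync(HttpClient client, IEnumerable<Uri> requestUris)
{
    var results = new List<Borders>();

    // At most 50 requests are in flight at any one time; the rest wait in the block's queue.
    var block = new ActionBlock<Uri>(async uri =>
        {
            var borders = await GetBorder(client, uri).ConfigureAwait(false);
            lock (results)
            {
                results.Add(borders); // the block may run bodies concurrently, so guard the list
            }
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 50 });

    foreach (var uri in requestUris)
    {
        block.Post(uri);
    }

    block.Complete();        // signal that no more URIs will be posted
    await block.Completion;  // completes once every posted URI has been processed

    return results;
}

The point is that the degree of parallelism is an explicit, tunable number instead of "however many tasks I happened to start", which keeps both your client and the backend server within their limits.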