Since .NET 3.5, Socket has async methods designed for use with SocketAsyncEventArgs (e.g. Socket.SendAsync()). The advertised benefits are that, under the hood, they use I/O completion ports and avoid allocating a new state object (as the Begin/End pattern does with IAsyncResult) on every operation.
We have written a class called UdpStream with a simple interface: just a StartSend method and a Completed event. It allocates two SocketAsyncEventArgs instances, one for sending and one for receiving. StartSend simply dispatches a message using SendAsync and is called about 10 times a second. We subscribe to the Completed event on the receive SocketAsyncEventArgs, and after each event is handled we call ReceiveAsync again so that it forms a receive loop. Again, we receive roughly 10 times per second.
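To make the design concrete, here is roughly what the class looks like. This is a simplified sketch, not our exact code: the member names are illustrative, the buffer size is arbitrary, and we use SendToAsync/ReceiveFromAsync here because the socket in the sketch is unconnected (plain SendAsync on an unconnected UDP socket would fail).

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Simplified sketch of our UdpStream (names and sizes are illustrative).
public class UdpStream : IDisposable
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _sendArgs = new SocketAsyncEventArgs();
    private readonly SocketAsyncEventArgs _receiveArgs = new SocketAsyncEventArgs();

    public event EventHandler<SocketAsyncEventArgs> Completed;

    public UdpStream(EndPoint remote)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _sendArgs.RemoteEndPoint = remote;
        // ReceiveFromAsync requires RemoteEndPoint to be pre-set.
        _receiveArgs.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        _receiveArgs.SetBuffer(new byte[64 * 1024], 0, 64 * 1024);
        _receiveArgs.Completed += OnReceiveCompleted;
    }

    // Called ~10 times per second.
    public void StartSend(byte[] data)
    {
        _sendArgs.SetBuffer(data, 0, data.Length);
        // Returns false when the operation completed synchronously;
        // in that case no Completed event will fire.
        _socket.SendToAsync(_sendArgs);
    }

    public void StartReceive()
    {
        if (!_socket.ReceiveFromAsync(_receiveArgs))
            OnReceiveCompleted(this, _receiveArgs); // completed synchronously
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
    {
        Completed?.Invoke(this, e);
        StartReceive(); // re-issue the receive so it forms a loop
    }

    public void Dispose()
    {
        _socket.Dispose();
        _sendArgs.Dispose();
        _receiveArgs.Dispose();
    }
}
```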
Our system needs to support up to 500 of these UdpStream objects. In other words, our server will communicate concurrently with 500 different IP endpoints.
I notice that the MSDN SocketAsyncEventArgs examples allocate N x SocketAsyncEventArgs up front, one for each outstanding receive operation you want to handle at a time, and recycle them from a pool. I am not clear how that relates to our scenario: it seems to me that we may not be getting the intended benefit of SocketAsyncEventArgs, because we simply allocate one per endpoint rather than pooling them. If we end up with 500 receive SocketAsyncEventArgs, I am presuming the pooling benefit is lost. Perhaps we still get some benefit from the I/O completion ports?
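For reference, the pattern in the MSDN example is roughly the following pool (my paraphrase, not the exact sample code; the class and method names here are my own):

```csharp
using System.Collections.Concurrent;
using System.Net.Sockets;

// Sketch of the pooling pattern from the MSDN SocketAsyncEventArgs example:
// pre-allocate N args objects (and their buffers) once, then rent/return
// them per operation instead of allocating per connection or per call.
public class SocketAsyncEventArgsPool
{
    private readonly ConcurrentStack<SocketAsyncEventArgs> _pool =
        new ConcurrentStack<SocketAsyncEventArgs>();

    public SocketAsyncEventArgsPool(int capacity, int bufferSize)
    {
        for (int i = 0; i < capacity; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize);
            _pool.Push(args);
        }
    }

    public SocketAsyncEventArgs Rent()
    {
        SocketAsyncEventArgs args;
        return _pool.TryPop(out args) ? args : null; // null => pool exhausted
    }

    public void Return(SocketAsyncEventArgs args) => _pool.Push(args);
}
```

My question is essentially whether one-args-per-endpoint (our design) versus a shared pool like this actually matters, given that the allocations happen once either way.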
Does this design make correct use of SocketAsyncEventArgs when scaling to 500?
For the case where we have a single UdpStream in use, is there any benefit to using SocketAsyncEventArgs over the older Begin/End (APM) API?