In some asynchronous TCP server code I have, occasionally an error occurs that causes the process to consume the entire system's memory. Based on the logs, Event Viewer, and some MS docs, the problem happens when "the calling application makes Asynchronous IO calls to the same client multiple times then you might see a heap fragmentation and private byte increase if the remote client stops its end of I/O", which results in spikes in memory usage and pinning of System.Threading.OverlappedData structs and byte arrays.
The KB article's proposed solution is to "set an upper bound on the amount of buffers outstanding (either send or receive) with their asynchronous IO."
How does one do this? Is this referring to the byte[] buffers passed into BeginRead? Is the solution simply wrapping access to those byte[] buffers with a semaphore?
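For the record, this is roughly what I'm picturing, a minimal sketch only: a SemaphoreSlim gates every BeginRead so that no more than a fixed number of receive buffers are outstanding (and therefore pinned) at once. MaxOutstandingReads and BoundedReader are names I made up, not anything from the KB article.

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

class BoundedReader
{
    // Hypothetical global cap on concurrent outstanding receives.
    const int MaxOutstandingReads = 256;
    static readonly SemaphoreSlim _readGate = new SemaphoreSlim(MaxOutstandingReads);

    public static void StartRead(NetworkStream stream, byte[] buffer)
    {
        // Block until we are under the cap before handing the runtime
        // another buffer to pin.
        _readGate.Wait();
        try
        {
            stream.BeginRead(buffer, 0, buffer.Length, OnRead,
                             Tuple.Create(stream, buffer));
        }
        catch
        {
            // BeginRead never started, so give the slot back.
            _readGate.Release();
            throw;
        }
    }

    static void OnRead(IAsyncResult ar)
    {
        var state = (Tuple<NetworkStream, byte[]>)ar.AsyncState;
        try
        {
            int bytesRead = state.Item1.EndRead(ar);
            // ... process bytesRead bytes from state.Item2, then issue the next read ...
        }
        catch (System.IO.IOException)
        {
            // Remote side went away; drop the connection here.
        }
        finally
        {
            // EndRead has completed (or failed), so this operation no longer
            // needs the buffer pinned; free the slot either way.
            _readGate.Release();
        }
    }
}
```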
EDIT: Semaphore-controlled access to byte buffers or a static-sized pool of byte buffers are two common solutions. A concern I still have is that when this async client problem occurs (maybe it's actually some weird network event), having semaphores or byte buffer pools will prevent me from running out of memory, but it does not solve the problem. My pool of buffers will likely get gobbled up by the problem client(s), in effect locking correctly functioning, legitimate clients out.
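If I go the pool route, a per-client quota on top of the fixed-size pool seems like the obvious way to keep one bad client from draining everything. A rough sketch of what I mean (BufferSize, PoolSize, PerClientLimit, and the string client key are all assumptions on my part, not anything from the docs):

```csharp
using System.Collections.Concurrent;

class BufferPool
{
    const int BufferSize = 4096;    // hypothetical buffer size
    const int PoolSize = 512;       // hypothetical global cap on pinned buffers
    const int PerClientLimit = 4;   // hypothetical per-client cap

    readonly ConcurrentBag<byte[]> _free = new ConcurrentBag<byte[]>();
    readonly ConcurrentDictionary<string, int> _inUseByClient =
        new ConcurrentDictionary<string, int>();

    public BufferPool()
    {
        // Pre-allocate every buffer up front so total memory is bounded.
        for (int i = 0; i < PoolSize; i++)
            _free.Add(new byte[BufferSize]);
    }

    // Returns null if the pool is exhausted or the client is over its quota;
    // the caller should back off or drop the connection instead of allocating.
    public byte[] Rent(string clientKey)
    {
        int count = _inUseByClient.AddOrUpdate(clientKey, 1, (_, c) => c + 1);
        if (count <= PerClientLimit && _free.TryTake(out byte[] buffer))
            return buffer;

        // Over quota or pool empty: undo the count and refuse.
        _inUseByClient.AddOrUpdate(clientKey, 0, (_, c) => c - 1);
        return null;
    }

    public void Return(string clientKey, byte[] buffer)
    {
        _free.Add(buffer);
        _inUseByClient.AddOrUpdate(clientKey, 0, (_, c) => c - 1);
    }
}
```

The global pool bounds the total pinned memory; the per-client counter is what keeps a single misbehaving client from starving the legitimate ones.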
EDIT 2: Came across this great answer. Basically it shows how to manually unpin objects. And while asynchronous TCP code leaves pinning up to behind-the-scenes runtime rules, it might be possible to override that by explicitly pinning each buffer before use, then unpinning it at the end of the block or in a finally. I'm trying to figure that out now...
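The pin/unpin mechanics from that answer look roughly like this; whether doing it myself actually changes what the overlapped I/O layer pins is exactly what I'm still trying to verify. The GCHandle usage below is just a sketch and the work delegate is a placeholder:

```csharp
using System;
using System.Runtime.InteropServices;

class PinnedBufferExample
{
    static void UseBufferPinned(byte[] buffer, Action<IntPtr> work)
    {
        // Pin the buffer so the GC cannot move it while it is in use.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            work(address);   // e.g. hand the fixed address to native/overlapped I/O
        }
        finally
        {
            // Always unpin, even if the work throws, so the handle is not leaked.
            handle.Free();
        }
    }
}
```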