I know that TIME_WAIT is an integral part of TCP/IP, but there are many questions on SO (and elsewhere) where multiple sockets are created per second and the server ends up running out of ephemeral ports.
What I found out is that when using a TcpClient (or Socket, for that matter), if I call either the Close() or Dispose() method, the socket's TCP state changes to TIME_WAIT and respects the timeout period before fully closing.
However, if I just set the variable to null, the socket will be fully closed on the next GC run (which can of course be forced) without ever going through a TIME_WAIT state.
This doesn't make a lot of sense to me: since this is an IDisposable object, shouldn't the GC also invoke the object's Dispose() method?
Here's some PowerShell code that demonstrates this (no VS installed on this machine). I used TCPView from Sysinternals to check the socket states in real time:
$sockets = @()
0..100 | % {
    $sockets += New-Object System.Net.Sockets.TcpClient
    $sockets[$_].Connect('localhost', 80)
}
Start-Sleep -Seconds 10
$sockets = $null
[GC]::Collect()
Using this method, the sockets never go into a TIME_WAIT state. The same happens if I just close the app before manually invoking Close() or Dispose().
Can someone shed some light and explain whether this would be good practice? (I imagine people are going to say it's not.)
EDIT
The GC's stake in the matter has already been answered, but I am still interested in finding out why this would have any impact on the socket state, as this should be controlled by the OS, not .NET.
I am also interested in whether it would be good practice to use this method to prevent TIME_WAIT states, and ultimately whether this is a bug somewhere (i.e., should all sockets go through a TIME_WAIT state?).
The Dispose pattern, also known as IDisposable, provides two ways for an unmanaged object to be cleaned up. The Dispose method provides a direct and fast way to clean up the resource. The finalizer, which is called by the garbage collector, is a fail-safe way to make sure that the unmanaged resource is cleaned up in case another developer using the code forgets to call the Dispose method. This is somewhat similar to C++ developers forgetting to call delete on heap-allocated memory, which results in memory leaks.
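The two cleanup paths described above can be sketched in Python as a rough analogy (ManagedResource is a hypothetical class, not a real API; Python's __del__ stands in for the .NET finalizer and close() for Dispose()):

```python
# Rough Python analogue of the .NET dispose pattern (illustrative only).
# close() plays the role of Dispose(): deterministic, immediate cleanup.
# __del__ is the finalizer fallback, invoked by the garbage collector at
# some undetermined later time if close() was never called.

class ManagedResource:
    def __init__(self):
        self.closed = False
        self.cleanup_path = None   # records which path released the resource

    def close(self):
        # Deterministic cleanup -- the Dispose() analogue.
        if not self.closed:
            self.closed = True
            self.cleanup_path = "explicit"

    def __del__(self):
        # Finalizer fallback -- only does work if close() was forgotten.
        if not self.closed:
            self.closed = True
            self.cleanup_path = "finalizer"

r = ManagedResource()
r.close()   # prompt cleanup; the finalizer then has nothing left to do
```

The key point is the same as in .NET: the finalizer is only a safety net, and relying on it means the resource is held until the collector eventually runs.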
According to the referenced link:
"Although finalizers are effective in some cleanup scenarios, they have two significant drawbacks:
The finalizer is called when the GC detects that an object is eligible for collection. This happens at some undetermined period of time after the resource is not needed anymore. The delay between when the developer could or would like to release the resource and the time when the resource is actually released by the finalizer might be unacceptable in programs that acquire many scarce resources (resources that can be easily exhausted) or in cases in which resources are costly to keep in use (e.g., large unmanaged memory buffers).
When the CLR needs to call a finalizer, it must postpone collection of the object’s memory until the next round of garbage collection (the finalizers run between collections). This means that the object’s memory (and all objects it refers to) will not be released for a longer period of time."
The reason it takes a while to shut down is that the connection lingers by default, to give the app some time to handle any queued messages. According to the TcpClient.Close method documentation on MSDN:
"The Close method marks the instance as disposed and requests that the associated Socket close the TCP connection. Based on the LingerState property, the TCP connection may stay open for some time after the Close method is called when data remains to be sent. There is no notification provided when the underlying connection has completed closing.
Calling this method will eventually result in the close of the associated Socket and will also close the associated NetworkStream that is used to send and receive data if one was created."
This linger timeout can be reduced or completely eliminated by setting the LingerState property before closing, e.g. tcpClient.LingerState = new LingerState(true, 0) for an immediate, abortive close.
As for setting the reference to the TcpClient object to null: the recommended approach is to call the Close method. When the reference is set to null, the GC ends up calling the finalizer. The finalizer eventually calls the Dispose method in order to consolidate the code for cleaning up the unmanaged resource. So it will work to close the socket; it's just not recommended.
In my opinion, whether some linger time should be allowed to give the app time to handle queued messages depends on the app. If I were certain my client app had processed all the necessary messages, I would probably give it a linger time of 0 seconds, or perhaps 1 second if I thought that might change in the future.
For a very busy client and/or weak hardware, I might give it more time. For a server, I would benchmark different values under load.
Other useful references:
What is the proper way of closing and cleaning up a Socket connection?
Are there any cases when TcpClient.Close or Socket.Close(0) could block my code?
It's enough to specify a 0-second linger timeout when closing the socket.
@Bob Bryan posted quite a good answer while I was preparing mine. It shows why to avoid finalizers and how to close the connection abortively to avoid the TIME_WAIT issue on the server.
I want to refer to a great answer (https://stackoverflow.com/a/13088864/2138959) to the question "TCP option SO_LINGER (zero) - when it's required", which might clarify things even more and help you decide, in each particular case, which approach to use for closing the socket. To summarize: you should design your client-server communication protocol so that the client closes the connection, in order to avoid TIME_WAITs on the server.
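For illustration, here is a small hypothetical Python demo (not taken from the linked answer) contrasting what the peer observes in each case: a graceful close delivers a FIN, so the peer's recv() returns an empty byte string, while an abortive SO_LINGER(0) close delivers an RST, so recv() raises ConnectionResetError. In the graceful case, the side that closes first is the one that ends up in TIME_WAIT, which is why the summary above suggests letting the client close first.

```python
import socket
import struct
import threading

def serve_and_close(srv, abortive):
    # Accept one connection and immediately close it, gracefully or not.
    conn, _ = srv.accept()
    if abortive:
        # l_onoff=1, l_linger=0 -> close() sends RST instead of FIN.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
    conn.close()   # server closes first

def probe(abortive):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=serve_and_close, args=(srv, abortive))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    try:
        # b"" means the peer sent a clean FIN.
        outcome = "graceful" if cli.recv(1024) == b"" else "data"
    except ConnectionResetError:
        outcome = "reset"    # the peer sent an RST
    cli.close()
    t.join()
    srv.close()
    return outcome

print(probe(abortive=False))   # graceful
print(probe(abortive=True))    # reset
```

So SO_LINGER(0) does make TIME_WAIT disappear, but at the cost of the peer seeing a hard reset instead of an orderly shutdown.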
The Socket class has a rather lengthy method, protected virtual void Dispose(bool disposing), that is called with true as the parameter from Dispose() and with false from the finalizer invoked by the garbage collector. Chances are, your answer to any differences in how the socket's disposal is handled will be found in this method. As a matter of fact, it does not do anything when called with false from the finalizer, so there you have your explanation.
I wound up looking up a bunch of these links, and finally sorted my issues out. This was really helpful.
On the server side, I basically do nothing: receive, send the response, and exit the handler. I did add a LingerState of 1 second, but I don't think it does anything.
On the client side, I use the same LingerState, but after I receive (I know the data is all there, since I'm receiving based on a UInt32 length at the beginning of the packet), I Close() the client socket, then set the Socket object to null.
Running both the client and server really aggressively on the same machine, all of the sockets are now cleaned up immediately; I was leaving thousands in TIME_WAIT before.