My original scenario (see the question here) was that I have a hundred small text files that I want to load, parse, and store in a DLL. Clients of the DLL are transient (command line programs), and I would prefer not to reload the data on every command line invocation. (But this post is about the matrix of TCP client/server IPv4/IPv6 connections from my experiments described below.)
So I thought I would write a Windows server to store the data and have the clients query it over TCP. But the TCP performance was really slow, so I wrote the following code, using Stopwatch to measure the socket setup time:
// time the TCP interaction to see where the time goes
var stopwatch = new Stopwatch();
stopwatch.Start();
// create and connect socket to remote host
client = new TcpClient(hostname, hostport); // auto-connects to server
Console.WriteLine("Connected to {0}", hostname);
// get a stream handle from the connected client
netstream = client.GetStream();
// send the command to the far end
netstream.Write(sendbuf, 0, sendbuf.Length);
Console.WriteLine ("Sent command to far end: '{0}'",cmd);
stopwatch.Stop();
sendTime = stopwatch.ElapsedMilliseconds;
Much to my surprise, that little bit of code took 1,037 milliseconds (about 1 second) to execute. I expected the time to be far smaller. The reason for the 1-second setup time was that the new TcpClient(hostname, port) form resolves the host name to both IPv6 and IPv4 addresses. During connection, the client tried IPv6 first and had to wait for a timeout before falling back to IPv4 (which the server was using).
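In other words, the fallback delay disappears if the client socket is pinned to IPv4 before connecting, which is essentially what Client 1 below does. A minimal sketch, using the same hostname/hostport placeholders as above and assuming the usual System.Net.Sockets using:

// Sketch only: pin the TcpClient to IPv4 so Connect() never attempts IPv6 first.
var v4client = new TcpClient(AddressFamily.InterNetwork); // IPv4-only socket
v4client.Connect(hostname, hostport);                     // only IPv4 addresses are tried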
After a great answer from @Evk about TcpClient syntax and IPv6 / IPv4 here, I did several experiments as follows. Three clients and two servers were used to test the various syntaxes and behaviors:
Client 1: IPv4 only, via new TcpClient()
Client 2: IPv6 only, via new TcpClient(AddressFamily.InterNetworkV6)
Client 3: both IPv4 and IPv6 from DNS, via new TcpClient("localhost", port)
Server 1: IPv4, via new TcpListener(IPAddress.Loopback, port)
Server 2: IPv6, via new TcpListener(IPAddress.IPv6Loopback, port)
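For completeness, a sketch of those five constructions side by side (port is a placeholder; note that the Client 3 form connects immediately in its constructor):

// The three client constructions used in the experiments (sketch):
var client1 = new TcpClient();                                  // IPv4 only in these tests
var client2 = new TcpClient(AddressFamily.InterNetworkV6);      // IPv6 only
var client3 = new TcpClient("localhost", port);                 // resolves both; tries v6 first, then falls back to v4

// The two listener constructions:
var server1 = new TcpListener(IPAddress.Loopback, port);        // 127.0.0.1, IPv4 only
var server2 = new TcpListener(IPAddress.IPv6Loopback, port);    // ::1, IPv6 only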
From worst to best, the 6 possible pairs returned the following results:
C4xS6 - Client 1 (IPv4) with Server 2 (IPv6): connection actively refused.
C6xS4 - Client 2 (IPv6) with Server 1 (IPv4): connection actively refused.
C46xS4 - Client 3 (both) with Server 1 (IPv4): always delayed about 1000ms, because the client tried IPv6, timed out, and then fell back to IPv4, which worked consistently. This was the original code in this post.
C46xS6 - Client 3 (both) with Server 2 (IPv6): after a fresh restart of both, the first try was fast (21ms), as were closely-spaced subsequent tries. But after waiting a minute or three, the next try took 3000ms, followed again by fast (20ms) closely-spaced tries.
C4xS4 - Same behavior as above. The first try after a fresh restart was fast, as were closely-spaced subsequent tries, but after a minute or two of idle time the next try took 3000ms, followed by fast (20ms) closely-spaced tries.
C6xS6 - Same behavior as above: fast after a fresh server restart, but after a minute or two of idle time there was one delayed try (3000ms), followed by fast (20ms) responses to closely-spaced tries.
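For what it's worth, one configuration I did not test is a single dual-mode listener, which (as I understand it) accepts both IPv4 and IPv6 clients on one socket. A sketch, untested here:

// Untested sketch: a dual-mode listener that should accept both v4 and v6 clients.
var dualListener = new TcpListener(IPAddress.IPv6Any, port);
dualListener.Server.DualMode = true;   // must be set before Start() binds the socket
dualListener.Start();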
My experiments showed no consistently fast responses over time. There must be some kind of delay, timeout, or sleeping behavior when the connections go idle. I use netstream.Close(); client.Close(); to close each connection on each try, as sketched below. (Is that right?) I don't know what could be causing the delayed responses after a minute or two of idle time with no active connections.
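Spelled out, the per-try cleanup on the client is just those two Close() calls; a using-based sketch that does the same thing through Dispose (hostport and sendbuf are the placeholders used elsewhere in this post):

// Sketch: using blocks close the stream and the client automatically,
// equivalent to calling netstream.Close() and then client.Close().
using (var client = new TcpClient()) {
    client.Connect("localhost", hostport);
    using (var netstream = client.GetStream()) {
        netstream.Write(sendbuf, 0, sendbuf.Length);
        // ... read the reply here ...
    }
}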
Any idea what might be causing the delay after a minute or two of idle listening time? The client is not even in memory at that point, because the console program has exited. The server is supposedly doing nothing new, just listening for another connection.
Restarting the server gives fast responses at first and for closely-spaced (10 seconds apart) subsequent tries, but the first connection after a one- or two-minute idle period gives a slow response. When things go idle for a while, it is as if the server decides something and makes the client wait a bit before responding. I suspect the server is at fault because (1) the client isn't even running during the idle time, so it cannot be deciding anything, and (2) restarting the server restores the fast response.
Keep-alive pings don't seem appropriate because the client is not trying to keep the connection open; it closes both the netstream and the client after each short call, as shown above.
Does anyone have an idea of what is happening, and what (if anything) I can do about it?
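If it helps narrow things down, one diagnostic I can add on the client is to time name resolution and the TCP connect separately, so the 3000ms can be attributed to one step or the other. A sketch (assuming the usual System.Net, System.Net.Sockets, and System.Diagnostics usings):

// Sketch: split the measurement so DNS resolution and Connect() are timed separately.
var sw = Stopwatch.StartNew();
IPAddress[] addrs = Dns.GetHostAddresses("localhost"); // name resolution only
long dnsMs = sw.ElapsedMilliseconds;

sw.Restart();
var probe = new TcpClient(AddressFamily.InterNetwork);
probe.Connect(IPAddress.Loopback, hostport);           // pure TCP connect, no DNS involved
long connectMs = sw.ElapsedMilliseconds;

Console.WriteLine("DNS resolve: {0} ms, TCP connect: {1} ms", dnsMs, connectMs);
probe.Close();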
EDIT #1 - the almost-complete source code, as requested in the comments below
Client code:
try {
var stopwatch = new Stopwatch();
stopwatch.Start();
client = new TcpClient(); // parameterless ctor: only IPv4 addresses are used
client.Connect("localhost", hostport);
netstream = client.GetStream();
netstream.Write(sendbuf, 0, sendbuf.Length);
stopwatch.Stop();
sendTime = stopwatch.ElapsedMilliseconds;
Console.WriteLine($"Milliseconds for sending by TCP: '{sendTime}'");
// receive the bytes back from the far end
...
}
catch (Exception ex) {
Console.WriteLine (ex.Message);
}
finally {
client?.Close(); // close the client after receiving server response
}
Server code:
listener = null;
// try to start a listener
listener = new TcpListener(IPAddress.Loopback, hostport); // uses v4
Console.WriteLine("Listening on v4 only...");
listener.Start();
Console.WriteLine("Listener is listening on port {0}", hostport);
rxbuf = new byte[BUFSIZE]; // the buffer
inbytes = 0; // nbytes rx
// loop forever handling clients one at a time
for (; ; ) {
client = null;
netstream = null;
try {
client = listener.AcceptTcpClient(); // get client connection
netstream = client.GetStream(); // get the receiving stream
Console.WriteLine("Accepted a client...Reading...");
totalbytes = 0;
inbytes = netstream.Read(rxbuf, // the buf
totalbytes, // write new bytes here
rxbuf.Length - totalbytes); // read this many bytes
if (inbytes == 0) // if no bytes read
break; // break loop
totalbytes = totalbytes + inbytes; // update total bytes read
inbytes = 0;
// - show what you received from the client
backcmd = Encoding.ASCII.GetString(rxbuf, 0, totalbytes);
Console.WriteLine("Rx command: '{0}'", backcmd);
Console.WriteLine("Running command: '{0}'", backcmd);
// - get a new process info structure to run the external command
System.Diagnostics.Process proc = new System.Diagnostics.Process();
proc.StartInfo.UseShellExecute = false;
... run the 40-char cmd from the client
... collect the output from the console app run on the server side
string output = proc.StandardOutput.ReadToEnd();
proc.WaitForExit();
...
byte[] tosend = Encoding.ASCII.GetBytes(output);
netstream.Write(tosend, 0, tosend.Length); // send the collected output back to the client
// client runs 'client.Close()' after receiving this text
// so the server should close this connection automatically too.
// Adding a 'client.Close();' statement here made no difference.
}
// - catch and write exceptions to console, closing stream
catch (Exception ex) {
Console.WriteLine(ex.Message);
netstream?.Close();
}
} // for ever
// I wonder if I should (could) close something here if the
// code breaks out of the loop, but everything works fine so
// I have no code here.
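// (Sketch, not needed so far: if the loop were ever exited, calling
//  listener.Stop() here would release the listening port.)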
return 0;