I want to do some performance testing on one of our web servers, to see how the server handles a lot of persistent connections. Unfortunately, I'm not terribly familiar with HTTP and web testing. Here's the Python code I've got for this so far:
    import http.client
    import argparse
    import threading

    def make_http_connection():
        conn = http.client.HTTPConnection(options.server, timeout=30)
        conn.connect()

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument("num", type=int, help="Number of connections to make (integer)")
        parser.add_argument("server", type=str, help="Server and port to connect to. Do not prepend 'http://' for this")
        options = parser.parse_args()

        for n in range(options.num):
            connThread = threading.Thread(target=make_http_connection, args=())
            connThread.daemon = True
            connThread.start()

        while True:
            try:
                pass
            except KeyboardInterrupt:
                break
My main question is this: How do I keep these connections alive? I've set a long timeout, but that's a very crude method and I'm not even sure it affects the connection. Would simply requesting a byte or two every once in a while do it?
(Also, on an unrelated note, is there a better procedure for waiting for a keyboard interrupt than the ugly while True: block at the end of my code?)
If "a lot" really is a lot, then you probably want to use asynchronous I/O, not threads.
requests + gevent = grequests
GRequests allows you to use Requests with Gevent to make asynchronous HTTP requests easily.
Requests supports persistent HTTP connections.
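For example, a minimal sketch of that combination (it assumes grequests is installed; the URL and the counts are placeholders, not taken from the question):

    import grequests

    # Build 100 pending GET requests and fire them concurrently on gevent
    # greenlets; map() returns the responses once they have all completed.
    urls = ["http://localhost:8080/" for _ in range(100)]
    pending = (grequests.get(u) for u in urls)
    responses = grequests.map(pending, size=20)  # at most 20 in flight at once
    print(sum(1 for r in responses if r is not None), "requests completed")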
I'm going a bit outside my knowledge base here, but I would assume that your thread finishes when the function make_http_connection() completes. That is, if you want the connections to stay open, you would want to include a blocking loop at the end of the function. I suppose you want them all to become active at the same time? Then let the function modify a global variable and use a condition to test this value against options.num, so that the threads will wait until all of them are running before they start terminating. A sketch of both ideas follows.
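A minimal sketch meant to replace make_http_connection() in the script above; the counter name started and the 0.1 s polling interval are illustrative additions, not from the original code:

    import http.client
    import threading
    import time

    started = 0                      # how many threads have opened a connection
    started_lock = threading.Lock()

    def make_http_connection():
        global started
        conn = http.client.HTTPConnection(options.server, timeout=30)
        conn.connect()
        with started_lock:
            started += 1
        # Block here so the thread (and its connection) stays open until
        # every requested connection has been established.
        while started < options.num:
            time.sleep(0.1)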
Side-question: guessing what you're aiming at here, can't you just ask threading to count how many live threads you have and keep running until there are none left?
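For example, a minimal sketch of that idea (an illustration, not code from the question):

    import threading
    import time

    # Keep the main thread alive as long as any worker thread is running;
    # the count includes the main thread itself, hence the > 1.
    while threading.active_count() > 1:
        time.sleep(0.1)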
This discusses reading the keyboard, if that is what you need:
Polling the keyboard
You really should be using a benchmark tool like Funkload to do that. If you don't have experience with HTTP, trying to do a performance test from scratch like that will certainly lead to bad results.
urllib.request doesn't support persistent connections. There is 'Connection: close' hardcoded in the code. But http.client partially supports persistent connections (including legacy http/1.0 keep-alive). So the question title might be misleading.

You could use existing http testing tools such as slowloris or httperf instead of writing one yourself.
To close an http/1.1 connection a client should explicitly send a Connection: close header; otherwise the connection is considered persistent by the server (though the server may close it at any moment, and http.client won't know about it until it tries to read from or write to the connection).

conn.connect() returns almost immediately and then your thread ends. To force each thread to maintain an http connection to the server, you could keep issuing requests on the same connection and reading the responses slowly, along these lines.
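A minimal sketch of that approach, assuming '/' is a valid path on the server; the one-byte reads and the 60-second pause are just one way to keep the socket occupied:

    import http.client
    import time

    def make_http_connection(host):
        while True:  # open a fresh connection whenever the old one dies
            conn = http.client.HTTPConnection(host, timeout=30)
            while True:  # issue multiple requests over this single connection
                try:
                    conn.request("GET", "/")   # connects on the first request
                    response = conn.getresponse()
                    while True:  # read the response slowly, one byte at a time
                        b = response.read(1)
                        if not b:
                            break
                        # a whole minute may pass before we notice that the
                        # server has already closed the connection
                        time.sleep(60)
                except Exception:
                    break  # on any error, drop back out and reconnect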
Note: if the server returns 'Connection: close' then there is a single request per connection.

To wait until all threads finish or a KeyboardInterrupt happens, you could repeatedly join each thread with a short timeout, or enumerate the live threads and join those, as in the two sketches below.
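The first sketch assumes the Thread objects were collected in a list named threads (an added name, not in the question):

    while threads:
        try:
            for t in threads[:]:   # iterate over a copy so removal is safe
                t.join(.1)         # wait at most 0.1 s for this thread
                if not t.is_alive():
                    threads.remove(t)
        except KeyboardInterrupt:
            break

Or something like this, without keeping an explicit list:

    while threading.active_count() > 1:
        try:
            main_thread = threading.current_thread()
            for t in threading.enumerate():   # every thread still alive
                if t is not main_thread:
                    t.join(.1)
        except KeyboardInterrupt:
            break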
The latter might not work for various reasons, e.g., if there are dummy threads, such as threads started in C extensions without using the threading module.

concurrent.futures.ThreadPoolExecutor provides a higher abstraction level than the threading module and it can hide some complexity, as in the sketch below.
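A minimal sketch, reusing the make_http_connection(host) function from the earlier sketch; the host and the worker count are placeholders:

    import concurrent.futures

    # Submit one connection-holding task per desired connection; wait()
    # blocks until the tasks finish (with the sketch above, they keep
    # running until the process is interrupted).
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
        futures = [executor.submit(make_http_connection, "localhost:8080")
                   for _ in range(100)]
        concurrent.futures.wait(futures)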
Instead of a thread-per-connection model, you could open multiple connections concurrently in a single thread, e.g., using requests.async or gevent directly.