Question:

I'm looking for a well-supported, multithreaded Python HTTP server that supports chunked encoding in replies (i.e. "Transfer-Encoding: chunked" on responses). What's the best HTTP server base to start with for this purpose?
Answer 1:
Twisted supports chunked transfer encoding (API link) (see also the API doc for HTTPChannel). There are a number of production-grade projects using Twisted (for example, Apple uses it for the iCalendar server in Mac OS X Server), so it's quite well supported and very robust.
Answer 2:
Twisted supports chunked transfer encoding, and it does so transparently: if your request handler does not specify a response length, Twisted automatically switches to chunked transfer and generates one chunk per call to Request.write.
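For illustration, here is a minimal sketch of that behaviour (the resource class name, port, chunk contents, and timing are arbitrary choices, not anything prescribed by Twisted): the handler never sets a Content-Length, returns NOT_DONE_YET, and pushes the body out with request.write, so each write reaches an HTTP/1.1 client as its own chunk.

from twisted.internet import reactor
from twisted.web import resource, server

class ChunkedResource(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.setHeader(b"Content-Type", b"text/plain")

        def write_chunk(n):
            if n == 0:
                request.finish()           # ends the chunked response
            else:
                request.write(b"chunk\n")  # one write() -> one chunk on the wire
                reactor.callLater(1, write_chunk, n - 1)

        write_chunk(5)
        return server.NOT_DONE_YET         # response is completed asynchronously

reactor.listenTCP(8080, server.Site(ChunkedResource()))
reactor.run()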
Answer 3:
I am pretty sure that WSGI-compliant servers should support that. Essentially, a WSGI application returns an iterable of chunks, which the web server sends back. I don't have first-hand experience with this, but here is a list of compliant servers.
If WSGI servers don't meet what you are looking for, though, I should think it would be fairly easy to roll your own using Python's built-in CGIHTTPServer. It is already multithreaded, so it would just be up to you to chunk the responses.
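For illustration, a rough sketch of such a WSGI application (function name, port, and chunk contents are arbitrary): it omits Content-Length and returns a generator of byte strings. Whether the reply actually goes out with "Transfer-Encoding: chunked" is up to the server; the wsgiref demo server used below only speaks HTTP/1.0 and is single-threaded, so for real chunked output you'd use one of the servers from that list.

import time

def application(environ, start_response):
    # No Content-Length header: an HTTP/1.1 server is free to stream the
    # iterable below, typically with chunked transfer encoding.
    start_response("200 OK", [("Content-Type", "text/plain")])

    def body():
        for i in range(5):
            yield ("chunk %d\n" % i).encode("utf-8")
            time.sleep(1)

    return body()

if __name__ == "__main__":
    # Demonstration only; pick a production WSGI server for chunked replies.
    from wsgiref.simple_server import make_server
    make_server("", 8080, application).serve_forever()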
Answer 4:
I managed to do it using Tornado:
#!/usr/bin/env python
import logging

import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options

define("port", default=8080, help="run on the given port", type=int)


@tornado.web.stream_request_body
class MainHandler(tornado.web.RequestHandler):
    def post(self):
        # The request body has been fully received; the (chunked) response
        # is finished automatically when this method returns.
        pass

    def data_received(self, chunk):
        # Echo each incoming chunk back immediately. Flushing before the
        # response is finished makes Tornado send it to HTTP/1.1 clients
        # with "Transfer-Encoding: chunked", one chunk per flush.
        self.write(chunk)
        self.flush()
        logging.info(chunk)


def main():
    tornado.options.parse_command_line()
    application = tornado.web.Application([
        (r"/", MainHandler),
    ])
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
    main()
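To check the result, something like curl -v --data-binary @somefile http://localhost:8080/ (the file name is just a placeholder) should show "Transfer-Encoding: chunked" among the response headers, since the handler echoes each uploaded chunk back as it arrives.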