The proper way to scale a Python Tornado application

Posted 2019-07-07 01:04

Question:

I am looking for a way to scale a Tornado application from one instance to many. I have 5 servers and want to run 4 instances of the application on each. The main issue I don't know how to resolve is how to make the instances communicate with each other properly. I see the following approaches:

  • Use memcached for sharing data. I don't think this approach is good, because a lot of traffic would go to the server running memcached, so traffic-related issues could appear in the future.
  • Open sockets between every pair of instances. To me, that kind of communication looks too hard to maintain.
  • Use a tool like ZeroMQ. I am not familiar with this technology. Can it be used to scale an application across servers? (A sketch of this option follows the list.)
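A minimal, hypothetical sketch of the ZeroMQ option, assuming the pyzmq package and a simple PUB/SUB layout in which each instance broadcasts its updates and subscribes to every other instance; the peer addresses and ports are placeholders:

    # Hypothetical sketch: each Tornado instance publishes its updates on one port
    # and subscribes to the publish ports of all other instances (pyzmq assumed).
    import json
    import zmq

    PEERS = ["tcp://10.0.0.2:5556", "tcp://10.0.0.3:5556"]  # other instances (example addresses)

    ctx = zmq.Context()

    # Publisher socket: this instance announces its state changes here.
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")

    # Subscriber socket: receives state changes from every peer instance.
    sub = ctx.socket(zmq.SUB)
    for addr in PEERS:
        sub.connect(addr)
    sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to all messages

    def broadcast(update):
        """Send a state update (a dict) to all peers."""
        pub.send_string(json.dumps(update))

    def poll_updates():
        """Drain any pending updates from peers without blocking."""
        updates = []
        while True:
            try:
                updates.append(json.loads(sub.recv_string(flags=zmq.NOBLOCK)))
            except zmq.Again:
                break
        return updates

In a real Tornado application the subscriber would normally be hooked into the IOLoop (pyzmq ships a ZMQStream helper for that) rather than polled by hand as above.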

Answer 1:

I'm actually looking at something similar, and the idea I have come up with is this: use the Python multiprocessing module ( http://docs.python.org/library/multiprocessing.html ) to link the processes together on each individual server, and use a memcached server for session-specific data (session IDs, IP information, whatever ties a session to a specific user and to their thread of activity). The rest of the data is driven from a DB instance.
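A rough sketch of that layout, assuming Tornado's own fork_processes helper (rather than raw multiprocessing) for the per-server worker processes and the python-memcached client for the shared session data; the port, memcached address, and handler are placeholders:

    # Rough sketch: several Tornado worker processes per server sharing one
    # listening socket, with session data kept in memcached (placeholder values).
    import memcache                      # python-memcached client (assumed installed)
    import tornado.httpserver
    import tornado.ioloop
    import tornado.netutil
    import tornado.process
    import tornado.web

    # One memcached instance holds the session data that all workers/servers read.
    sessions = memcache.Client(["10.0.0.10:11211"])

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            session_id = self.get_cookie("session_id")
            user = sessions.get("session:%s" % session_id) if session_id else None
            self.write("Hello, %s" % (user or "anonymous"))

    app = tornado.web.Application([(r"/", MainHandler)])

    if __name__ == "__main__":
        sockets = tornado.netutil.bind_sockets(8888)   # bind once in the parent
        tornado.process.fork_processes(4)              # 4 worker processes per server
        server = tornado.httpserver.HTTPServer(app)
        server.add_sockets(sockets)
        tornado.ioloop.IOLoop.current().start()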



Answer 2:

What you could do is run a memcached instance and a Tornado instance on each server. Make the memcached instances replicate with each other as masters using repcached, so each Tornado instance can read memcached data from its own machine. That gives you four servers for the Tornado and memcached instances, with the fifth running haproxy to load-balance the others.

www.haproxy.org/

repcached.lab.klab.org/
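A minimal sketch of the per-server side of this setup, assuming repcached keeps the machine-local memcached instances in sync and using the python-memcached client; the handler, port, and key names are only illustrative:

    # Minimal sketch: each server's Tornado instance reads and writes the memcached
    # daemon on localhost; repcached replicates those writes to the other servers.
    import memcache
    import tornado.ioloop
    import tornado.web

    local_cache = memcache.Client(["127.0.0.1:11211"])  # machine-local memcached

    class SessionHandler(tornado.web.RequestHandler):
        def get(self):
            session_id = self.get_cookie("session_id", "anonymous")
            # Read locally; the value may have been written on another server
            # and replicated here by repcached.
            data = local_cache.get("session:%s" % session_id)
            self.write(data or "no session data yet")

        def post(self):
            session_id = self.get_cookie("session_id", "anonymous")
            # Write to the local memcached; repcached propagates it to the peers.
            local_cache.set("session:%s" % session_id, self.get_body_argument("data"))
            self.write("saved")

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/", SessionHandler)])
        app.listen(8888)  # haproxy on the fifth server forwards traffic here
        tornado.ioloop.IOLoop.current().start()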