I've been facing an issue where my server throws a 500 if the API isn't accessed for 30 minutes at a stretch. To diagnose the problem, I need to keep track of every single API request made. I'm using Tornado in front of Flask. This is my code so far:
import tornado.web
from flasky import app
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler
from tornado.log import enable_pretty_logging

enable_pretty_logging()

tr = WSGIContainer(app)
application = tornado.web.Application([
    (r".*", FallbackHandler, dict(fallback=tr)),
])

if __name__ == '__main__':
    application.listen(5000)
    IOLoop.instance().start()
What's the most efficient way to store the logs in a file?
I tried doing this, but it only writes the log when the process exits with 0:

import sys
import time

timestr = time.strftime("%Y%m%d-%H%M%S")
filename = "C:/Source/logs/" + timestr + ".log"

class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = open(filename, "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        pass

sys.stdout = Logger()
You have used enable_pretty_logging, which is good, and if you look at the documentation you will note that it accepts a logger. So what is a logger? It turns out Python has very extensive support for logging through the built-in logging module (which the Tornado documentation also mentions). Generally, you need to set up a handler that writes to a specific file; a logging.FileHandler whose level is set to INFO will record all INFO-level entries (or higher) in that file. Loggers themselves are fetched by name with the logging.getLogger function, and the Tornado documentation lists the names Tornado logs under, so you can select them explicitly. Simply append your handler to the logger that is generating the messages you want written to a file. If it's the
tornado.application logger generating the messages you want to see, attach your handler to it. Or you can use Tornado's built-in command-line options, which set this up for you.
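Putting those pieces together, a minimal sketch could look like this (the file path and log format are illustrative; tornado.access is the logger that carries the per-request lines if you also want every API request recorded):

```python
import logging

# Handler that appends INFO-and-above records to a file.
handler = logging.FileHandler("/tmp/tornado-app.log")  # path is illustrative
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s: %(message)s"))

# Tornado logs application errors under the "tornado.application" name.
app_log = logging.getLogger("tornado.application")
app_log.setLevel(logging.INFO)
app_log.addHandler(handler)

# "tornado.access" carries the per-request log lines.
access_log = logging.getLogger("tornado.access")
access_log.setLevel(logging.INFO)
access_log.addHandler(handler)

app_log.info("handler attached")  # this record ends up in the file
```

For the built-in options route, calling tornado.options.parse_command_line() and starting the server with --log_file_prefix=/tmp/tornado.log installs a rotating file handler for Tornado's loggers without any of the setup above.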