I've been using Tornado for a while now and I've run into slow timing issues (which I asked about in this question). One possible cause, pointed out by a fellow user, was that I was using a regular `open("...", 'w')` call to write to files inside my coroutine, and that this is blocking code.
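Roughly, the pattern in question looks like this (a simplified stand-in, not my actual code):

```python
from tornado import gen

@gen.coroutine
def handle_request(data):
    # ... other async work ...
    # Blocking: open() and write() run on the IOLoop's thread,
    # so every other coroutine stalls until the write finishes.
    with open("result.txt", 'w') as f:
        f.write(data)
```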
So my question is: is there a way to do non-blocking file IO in Tornado? I couldn't find anything in my research that fit my needs.
Move all of the code associated with file IO into separate functions decorated with `run_on_executor`, so the blocking calls run on a thread pool instead of on the IOLoop.
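A minimal sketch, assuming a `RequestHandler` subclass with a class-level `ThreadPoolExecutor` (the attribute name `executor` is what the decorator looks for by default; `UploadHandler`, `write_file`, and the file path are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

from tornado.concurrent import run_on_executor
from tornado.web import RequestHandler


class UploadHandler(RequestHandler):
    # run_on_executor looks up this attribute by default
    executor = ThreadPoolExecutor(max_workers=4)

    @run_on_executor
    def write_file(self, path, data):
        # Runs on the thread pool, so the blocking open()/write()
        # never stalls the IOLoop.
        with open(path, 'w') as f:
            f.write(data)

    async def post(self):
        await self.write_file("result.txt", self.request.body.decode())
        self.write("saved")
```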
I'm providing another answer because, as it turns out, reading/writing the whole file in a separate thread does not work for large files: you cannot receive or send the full contents of a big file in one chunk, because you may not have enough memory.
For me, the non-trivial part was working out how to block the reader/writer thread whenever the chunk processor in the IOLoop's main thread cannot keep up. The implementation below works efficiently both when the file read is much faster than the chunk processor and when it is the slower of the two. Synchronization is achieved by combining an async queue with a lock, and it never blocks the IOLoop's thread.
The lock is only ever released in the loop's thread, never acquired there, so there is no race condition.
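Here is a minimal sketch of the pattern, not the full implementation (`read_in_chunks`, `process_chunk`, and `CHUNK_SIZE` are placeholder names, and error handling is omitted):

```python
import threading

from tornado.ioloop import IOLoop
from tornado.queues import Queue

CHUNK_SIZE = 64 * 1024  # placeholder chunk size


async def read_in_chunks(path, process_chunk):
    queue = Queue()            # async queue; touched only on the IOLoop's thread
    gate = threading.Lock()    # throttles the reader thread
    loop = IOLoop.current()

    def reader():
        # Worker thread: every blocking call lives here.
        with open(path, 'rb') as f:
            while True:
                gate.acquire()  # blocks until the loop has consumed the last chunk
                chunk = f.read(CHUNK_SIZE)
                # Hand the chunk to the loop's thread; add_callback is the
                # one IOLoop method that is safe to call from other threads.
                loop.add_callback(queue.put_nowait, chunk)
                if not chunk:   # empty read doubles as the EOF sentinel
                    return

    threading.Thread(target=reader, daemon=True).start()

    while True:
        chunk = await queue.get()
        if not chunk:           # EOF
            break
        await process_chunk(chunk)
        # Only release, never acquire, in the loop's thread: this unblocks
        # the reader without ever blocking the IOLoop.
        gate.release()
```

This relies on the fact that a plain `threading.Lock` (unlike `RLock`) may be released by a thread other than the one that acquired it, which is what makes the loop-side release legal.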
I do not expect this to be accepted as an answer, but since it took me a while to figure out, I guess it may help others in their implementations.
This generalizes beyond file read/write operations to any producer/consumer pair that has one side in a separate thread and the other side in the IOLoop.