Current Situation
I implemented a TCP server using boost.asio which currently uses a single io_service object on which I call the run method from a single thread.
So far the server was able to answer the clients' requests immediately, since it had all the necessary information in memory (no long-running operations were necessary in the receive handler).
Problem
Now the requirements have changed: I need to fetch some information from a database (via ODBC), which is essentially a long-running blocking operation, in order to create the response for the clients.
I see several approaches, but I don't know which one is best (and there are probably even more approaches):
First Approach
I could keep the long-running operations in the handlers and simply call io_service.run() from multiple threads. I guess I would use as many threads as there are CPU cores available?
While this approach would be easy to implement, I don't think it would give the best performance, because of the limited number of threads, which would be idle most of the time (database access is an I/O-bound rather than a compute-bound operation).
Second Approach
In section 6 of this document it says:
Use threads for long running tasks
A variant of the single-threaded design, this design still uses a single io_service::run() thread for implementing protocol logic. Long running or blocking tasks are passed to a background thread and, once completed, the result is posted back to the io_service::run() thread.
This sounds promising, but I don't know how to implement that. Can anyone provide some code snippet / example for this approach?
Third Approach
Boris Schäling explains in section 7.5 of his boost introduction how to extend boost.asio with custom services.
This looks like a lot of work. Does this approach have any benefits compared to the other approaches?
We have similar long-running tasks in our server (a legacy protocol with storages). So our server runs 200 threads to avoid blocking the service (yes, 200 threads calling io_service::run). It's not a great solution, but it works well for now.
The only problem we had was with asio::strand, which uses so-called "implementations" that get locked while a handler is being called. We solved this by increasing the number of strand buckets and by "detaching" tasks via io_service::post without the strand wrap. Some tasks may run for seconds or even minutes, and this works without issues at the moment.
The approaches are not explicitly mutually exclusive. I often see a combination of the first and second:
one or more threads process network I/O in one io_service, while long running or blocking tasks are posted to a second io_service. This second io_service functions as a thread pool that will not interfere with the threads handling network I/O. Alternatively, one could spawn a detached thread every time a long running or blocking task is needed; however, the overhead of thread creation/destruction may have a noticeable impact.
This answer provides a thread pool implementation. Additionally, here is a basic example that tries to emphasize the interaction between two io_services.
Note that the single thread processing the main io_service posts work into the background_service, and then continues to process its event loop while the background_service blocks. Once the background_service gets a result, it posts a handler into the main io_service.