The idea is to replace multithreaded code with boost::asio and a thread pool for a consumer/producer problem. Currently, each consumer thread waits on a boost::condition_variable; when a producer adds something to the queue, it calls notify_one/notify_all to notify all the consumers. Now, what happens when you (potentially) have 1k+ consumers? Threads won't scale!
I decided to use boost::asio, but then I ran into the fact that it doesn't have condition variables. And then async_condition_variable was born:
#include <queue>
#include <boost/asio.hpp>
#include <boost/function.hpp>

class async_condition_variable
{
private:
    boost::asio::io_service& service_;
    typedef boost::function<void ()> async_handler;
    std::queue<async_handler> waiters_;

public:
    async_condition_variable(boost::asio::io_service& service) : service_(service)
    {
    }

    // Register a handler to be invoked on the next notification.
    void async_wait(async_handler handler)
    {
        waiters_.push(handler);
    }

    // Post the oldest waiting handler to the io_service for execution.
    void notify_one()
    {
        if (waiters_.empty()) {
            return;
        }
        service_.post(waiters_.front());
        waiters_.pop();
    }

    // Post every waiting handler to the io_service.
    void notify_all()
    {
        while (!waiters_.empty()) {
            notify_one();
        }
    }
};
Basically, each consumer would call async_condition_variable::async_wait(...). Then, a producer would eventually call async_condition_variable::notify_one() or async_condition_variable::notify_all(). Each consumer's handler would be called, and would either act on the condition or call async_condition_variable::async_wait(...) again. Is this feasible, or am I being crazy here? What kind of locking (mutexes) should be performed, given that this would be run on a thread pool?
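For illustration, here is a rough usage sketch of what I have in mind, using the class above (consume, produce and the items queue are made-up names, and a single thread drives the io_service, so no locking is shown yet):

#include <iostream>
#include <boost/bind.hpp>

boost::asio::io_service service;
async_condition_variable cond(service);
std::queue<int> items;

void consume()
{
    if (items.empty()) {
        cond.async_wait(&consume);   // not ready yet: re-register and yield the thread
        return;
    }
    int item = items.front();
    items.pop();
    std::cout << "consumed " << item << std::endl;
}

void produce(int item)
{
    items.push(item);
    cond.notify_one();               // wakes exactly one registered consumer
}

int main()
{
    cond.async_wait(&consume);                 // register an initial consumer
    service.post(boost::bind(&produce, 42));   // producer runs as a posted job
    service.run();                             // single thread here
}

On a real thread pool, async_wait and notify_one would race on waiters_ (and on the item queue), which is exactly the locking question above.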
P.S.: Yes, this is more of an RFC (Request for Comments) than a question :).
Have a list of things that need to be done when an event occurs. Have a function to add something to that list and a function to remove something from that list. Then, when the event occurs, have a pool of threads work on the list of jobs that now need to be done. You don't need threads specifically waiting for the event.
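A rough sketch of that idea, assuming the pool is a set of threads running an io_service (job_list and fire_event are illustrative names, not from any library):

#include <list>
#include <boost/asio.hpp>
#include <boost/function.hpp>
#include <boost/thread.hpp>

typedef boost::function<void ()> job;

class job_list
{
    boost::asio::io_service& service_;
    std::list<job> jobs_;
    boost::mutex mutex_;

public:
    explicit job_list(boost::asio::io_service& service) : service_(service) {}

    // Register work to be run the next time the event fires.
    void add(const job& j)
    {
        boost::mutex::scoped_lock lock(mutex_);
        jobs_.push_back(j);
    }

    // Hand every registered job to the thread pool via the io_service.
    void fire_event()
    {
        boost::mutex::scoped_lock lock(mutex_);
        for (std::list<job>::iterator it = jobs_.begin(); it != jobs_.end(); ++it)
            service_.post(*it);
    }
};

The "remove" function is omitted here because boost::function objects can't be compared for equality; in practice you would hand out some kind of token or connection object (as signals2 does) to support removal.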
Boost::asio can be kind of hard to wrap your head around. At least, I have a difficult time doing it.
You don't need to have the threads wait on anything. They do that on their own when they don't have any work to do. The examples that seemed closest to what you want to do posted work to the io_service for each item.
The following code was inspired by this link. It actually opened my eyes to how you can use it to do a lot of things.
I'm sure this isn't perfect, but I think it gives the general idea. I hope this helps.
Code
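(The original snippet isn't reproduced here; what follows is only a minimal sketch of the pattern described above, with a pool of threads running io_service::run() and one piece of work posted per produced item; process_item is a stand-in name.)

#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

void process_item(int item)                        // stand-in for the real consumer work
{
    std::cout << "processing " << item << std::endl;
}

int main()
{
    boost::asio::io_service service;

    for (int item = 0; item < 10; ++item)          // "producer": one post per item
        service.post(boost::bind(&process_item, item));

    boost::thread_group pool;                      // consumer thread pool
    for (int i = 0; i < 4; ++i)
        pool.create_thread(boost::bind(&boost::asio::io_service::run, &service));

    pool.join_all();                               // run() returns once the queue drains
}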
How about using boost::signals2?
It is a thread-safe spinoff of boost::signals that lets your clients subscribe a callback to a signal to be emitted.
Then, when the signal is emitted asynchronously in an io_service dispatched job, all the registered callbacks will be executed (on the same thread that emitted the signal).
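A rough sketch of that approach (data_ready, on_data and emit_in_job are illustrative names; the signal is emitted from a job posted to the io_service and run by a small thread pool):

#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/signals2.hpp>
#include <boost/thread.hpp>

boost::signals2::signal<void (int)> data_ready;    // thread-safe signal

void on_data(int item)                             // a client callback
{
    std::cout << "got " << item << std::endl;
}

void emit_in_job(int item)
{
    data_ready(item);   // runs every connected slot on the emitting thread
}

int main()
{
    data_ready.connect(&on_data);                  // client subscribes

    boost::asio::io_service service;
    service.post(boost::bind(&emit_in_job, 42));   // emit inside a dispatched job

    boost::thread_group pool;
    for (int i = 0; i < 2; ++i)
        pool.create_thread(boost::bind(&boost::asio::io_service::run, &service));
    pool.join_all();
}

Note that the callbacks run synchronously inside the dispatched job, so any long-running work in a slot should itself be posted back to the io_service.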