I am making a few attempts at writing my own simple async TCP server using boost::asio, after not having touched it for several years.
The latest example listing I can find is: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html
The problem I have with this example listing is that (I feel) it cheats, and it cheats big, by making tcp_connection a shared_ptr so that it doesn't have to worry about the lifetime management of each connection. I think they do this for brevity, since it is a small tutorial, but that solution is not real world.
What if you wanted to send a message to each client on a timer, or something similar? A collection of client connections is going to be necessary in any real world non-trivial server.
I am worried about the lifetime management of each connection. I figure the natural thing to do would be to keep some collection of tcp_connection objects, or pointers to them, inside tcp_server: adding to that collection from the OnConnect callback and removing from it in OnDisconnect. Note that OnDisconnect would most likely be called from an actual Disconnect method, which in turn would be called from the OnReceive or OnSend callback in the case of an error.
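Roughly, the shape I have in mind is something like this (an illustrative sketch only; names and details are made up):

```cpp
#include <set>

struct tcp_connection { /* socket, buffers, OnReceive/OnSend callbacks... */ };

class tcp_server {
public:
    void OnConnect(tcp_connection* c)    { connections_.insert(c); }
    void OnDisconnect(tcp_connection* c) {
        connections_.erase(c);
        delete c;  // <-- the crux: OnReceive/Disconnect may still be on the call stack
    }
private:
    std::set<tcp_connection*> connections_;
};
```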
Well, therein lies the problem.
Consider a call stack that looks something like this:
tcp_connection::~tcp_connection
tcp_server::OnDisconnect
tcp_connection::OnDisconnect
tcp_connection::Disconnect
tcp_connection::OnReceive
This would cause errors as the call stack unwinds and we are executing code in an object that has had its destructor called... I think, right?
I imagine everyone doing server programming comes across this scenario in some fashion. What is a strategy for handling it?
I hope the explanation is good enough to follow. If not, let me know and I will create my own source listing, but it will be very large.
Edit: Related
Memory management in asynchronous C++ code
IMO this is not an acceptable answer: it relies on cheating with a shared_ptr kept outstanding on receive calls and nothing more, and it is not real world. What if the server wanted to say "Hi" to all clients every 5 minutes? A collection of some kind is necessary. What if you are calling io_service::run on multiple threads?
I am also asking on the boost mailing list: http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html
While others have answered similarly to the second half of this answer, the most complete answer I could find came from asking the same question on the Boost mailing list:
http://boost.2283326.n4.nabble.com/How-to-design-proper-release-of-a-boost-asio-socket-or-wrapper-thereof-td4693442.html
I will summarize here in order to assist those that arrive here from a search in the future.
There are two options:
1) Close the socket in order to cancel any outstanding IO, then post a callback for the post-disconnection logic on the io_service, and let the server class be called back when the socket has been disconnected. It can then safely release the connection. As long as only one thread had called io_service::run, other asynchronous operations will have already been resolved when the callback is made. However, if multiple threads had called io_service::run, this is not safe.
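A sketch of option 1, assuming a single thread calls io_service::run (tcp_server, tcp_connection, and their members are hypothetical names, not from the tutorial):

```cpp
#include <boost/asio.hpp>
#include <memory>
#include <set>

struct tcp_connection {  // hypothetical; just enough to show the shutdown path
    explicit tcp_connection(boost::asio::io_service& io) : socket(io) {}
    boost::asio::ip::tcp::socket socket;
};

class tcp_server {
public:
    explicit tcp_server(boost::asio::io_service& io) : io_service_(io) {}

    void disconnect(std::shared_ptr<tcp_connection> conn) {
        boost::system::error_code ec;
        conn->socket.close(ec);          // outstanding ops complete with operation_aborted
        io_service_.post([this, conn] {  // runs after the already-queued aborted handlers
            connections_.erase(conn);    // now safe to drop the last reference
        });
    }

private:
    boost::asio::io_service& io_service_;
    std::set<std::shared_ptr<tcp_connection>> connections_;
};
```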
2) As others have pointed out in their answers, using a shared_ptr to manage the connection's lifetime, with outstanding IO operations keeping it alive, is viable. We can then keep a collection of weak_ptrs to the connections in order to access them when we need to. The latter is the tidbit that had been omitted from other posts on the topic, which confused me.
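A sketch of option 2's weak_ptr collection, e.g. for the "say Hi every 5 minutes" case (tcp_connection and queue_write are hypothetical):

```cpp
#include <memory>
#include <set>
#include <string>

struct tcp_connection {                       // kept alive by the shared_ptrs
    void queue_write(std::string const& msg); // its outstanding ops hold; the
};                                            // write method is defined elsewhere

// owner_less lets weak_ptrs be ordered, so they can live in a std::set
std::set<std::weak_ptr<tcp_connection>,
         std::owner_less<std::weak_ptr<tcp_connection>>> clients_;

void say_hi_to_everyone() {
    for (auto it = clients_.begin(); it != clients_.end(); ) {
        if (auto conn = it->lock()) {    // promote to shared_ptr: still alive?
            conn->queue_write("Hi!\n");
            ++it;
        } else {
            it = clients_.erase(it);     // expired: prune lazily
        }
    }
}
```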
Connection lifetime is a fundamental issue with boost::asio. Speaking from experience, I can assure you that getting it wrong causes "undefined behaviour"...

The asio examples use shared_ptr to ensure that a connection is kept alive whilst it may have outstanding handlers in an asio::io_service. Note that even in a single thread, an asio::io_service runs asynchronously to the application code; see CppCon 2016: Michael Caisse, "Asynchronous IO with Boost.Asio", for an excellent description of the precise mechanism.

A shared_ptr enables the lifetime of a connection to be controlled by the shared_ptr instance count. IMHO it's not "cheating and cheating big"; it's an elegant solution to a complicated problem.
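The pattern in question looks roughly like this (a minimal sketch, not the tutorial's exact code; buffers and error handling are simplified):

```cpp
#include <boost/asio.hpp>
#include <array>
#include <memory>

class tcp_connection : public std::enable_shared_from_this<tcp_connection> {
public:
    explicit tcp_connection(boost::asio::io_service& io) : socket_(io) {}

    void start_read() {
        auto self = shared_from_this();  // one extra reference per outstanding op
        socket_.async_read_some(boost::asio::buffer(buffer_),
            [this, self](boost::system::error_code ec, std::size_t /*n*/) {
                if (!ec)
                    start_read();  // chain the next read; 'self' kept us alive
                // on error no new operation is started, so when the last
                // handler returns the final shared_ptr drops and the
                // connection destroys itself
            });
    }

    boost::asio::ip::tcp::socket& socket() { return socket_; }

private:
    boost::asio::ip::tcp::socket socket_;
    std::array<char, 4096> buffer_;
};
```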
However, I agree with you that just using shared_ptrs to control connection lifetimes is not a complete solution, since it can lead to resource leaks.

In my answer here: Boost async_* functions and shared_ptr's, I proposed using a combination of shared_ptr and weak_ptr to manage connection lifetimes. An HTTP server using a combination of shared_ptrs and weak_ptrs can be found here: via-httplib. The HTTP server is built upon an asynchronous TCP server which uses a collection of (shared_ptrs to) connections, created on connects and destroyed on disconnects, as you propose.

The way that asio solves the "deletion problem", where there are outstanding async methods, is that it splits each async-enabled object into three classes, e.g. a handle, a service, and an implementation:
There is one service per io_loop (see use_service<>). The service creates an impl for the server, which is now a handle class. This separates the lifetime of the handle from the lifetime of the implementation.
Now, in the handle's destructor, a message can be sent (via the service) to the impl to cancel all outstanding IO.
The handle's destructor is free to wait for those io calls to be queued if necessary (for example if the server's work is being delegated to a background io loop or thread pool).
It has become a habit with me to implement all io_service-enabled objects this way, as it makes coding with asio very much simpler.
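A minimal sketch of that handle/service/impl idea (the names and structure here are illustrative, not asio's actual internals):

```cpp
#include <boost/asio.hpp>
#include <memory>

// The implementation owns the IO object and knows how to cancel it.
struct server_impl {
    explicit server_impl(boost::asio::io_service& io) : socket_(io) {}
    void cancel() {
        boost::system::error_code ec;
        socket_.close(ec);  // outstanding ops complete with operation_aborted
    }
    boost::asio::ip::tcp::socket socket_;
};

// The handle's destructor doesn't delete the impl directly; it posts a
// cancellation, and the posted closure keeps the impl alive until it runs.
class server {
public:
    explicit server(boost::asio::io_service& io)
        : io_(io), impl_(std::make_shared<server_impl>(io)) {}

    ~server() {
        auto impl = impl_;
        io_.post([impl] { impl->cancel(); });
    }

private:
    boost::asio::io_service& io_;
    std::shared_ptr<server_impl> impl_;
};
```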
Like I said, I fail to see how using smart pointers is "cheating, and cheating big". I also do not think your assessment that "they do this for brevity" holds water.
Here's a slightly redacted excerpt¹ from our code base that exemplifies how using shared_ptrs doesn't preclude tracking connections.
It shows just the server side of things, with

- a very simple connection object in connection.hpp; this uses the enable_shared_from_this idiom
- just the fixed-size connection_pool (we have dynamically resizing pools too, hence the locking primitives). Note how we can do actions on all active connections, so you'd trivially write something like the sketch below this list to write to all clients, e.g. on a timer
- a sample listener that shows how it ties in with the connection_pool (which has a sample method to close all connections)
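For example, the timed "Hi" broadcast could look something like this (a fragment only; for_each_active and queue_write are hypothetical stand-ins for whatever the pool and connection in the gist actually expose):

```cpp
// Broadcast to every live connection in the pool; member names here are
// hypothetical, not the gist's actual API.
void say_hi_to_all(connection_pool& pool) {
    pool.for_each_active([](std::shared_ptr<connection> const& conn) {
        conn->queue_write("Hi!\n");
    });
}
```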
Code Listings

connection.hpp
connection_pool.hpp
listener.hpp
¹ download as gist https://gist.github.com/sehe/979af25b8ac4fd77e73cdf1da37ab4c2