boost 1.55 asio tcp cpp03 chat_server example memory

Posted 2019-06-01 00:38

Question:

I hope someone can give me a clue where to investigate...

I'm running the chat_server example from Boost:

http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/example/cpp03/chat/chat_server.cpp

on Visual Studio 2010 and Windows 10. I downloaded the Boost binaries from:

http://sourceforge.net/projects/boost/files/boost-binaries/1.55.0/boost_1_55_0-msvc-10.0-32.exe/download

I used a script to simulate 30 TCP clients; each client thread's behavior is basically the following (sketched in code after the list):

  1. connect to tcp server
  2. start a loop
  3. send a message to tcp server
  4. receive a message from tcp server
  5. sleep
  6. back to step 2
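
For illustration, here is a minimal sketch of one such client thread using Boost.Asio's synchronous API. The host/port, payload, and fixed message size are assumptions, and the real chat protocol frames each message with a length header, which this sketch glosses over:

    #include <boost/asio.hpp>
    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <string>

    // Hypothetical sketch of one simulated client thread:
    // connect once, then loop { send, receive, sleep }.
    void client_thread(const std::string& host, const std::string& port)
    {
      boost::asio::io_service io_service;
      boost::asio::ip::tcp::resolver resolver(io_service);
      boost::asio::ip::tcp::socket socket(io_service);

      // step 1: connect to the tcp server
      boost::asio::connect(socket, resolver.resolve(
          boost::asio::ip::tcp::resolver::query(host, port)));

      char reply[128];
      for (;;) // steps 2/6: loop
      {
        std::string msg(sizeof(reply), 'x');                    // dummy payload
        boost::asio::write(socket, boost::asio::buffer(msg));   // step 3: send
        boost::asio::read(socket, boost::asio::buffer(reply));  // step 4: receive
        boost::this_thread::sleep(
            boost::posix_time::milliseconds(100));              // step 5: sleep
      }
    }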

The strange part shows up when I use Windows Task Manager to monitor memory consumption. The numbers in the "private working set" and "shared working set" columns remain stable for almost 18 minutes, and after that the private working set starts to grow by almost 5 MB per minute.

So my questions are:

  1. Has anyone ever seen anything similar before?
  2. What could cause this?

Regards

Answer 1:

The server retains chat history, but only the 100 most recent messages in a "ringbuffer" (actually a deque<chat_message>).
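
For reference, this is the relevant part of chat_room in the linked example (abridged):

    void deliver(const chat_message& msg)
    {
      // keep at most max_recent_msgs messages, trimming from the front
      recent_msgs_.push_back(msg);
      while (recent_msgs_.size() > max_recent_msgs)
        recent_msgs_.pop_front();

      // broadcast to every connected session
      std::for_each(participants_.begin(), participants_.end(),
          boost::bind(&chat_participant::deliver, _1, boost::ref(msg)));
    }

    // ...
    enum { max_recent_msgs = 100 };
    chat_message_queue recent_msgs_; // typedef std::deque<chat_message> chat_message_queue;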

Indeed, testing with a large number of clients doing a lot of chatting:

    (for c in {00..99}; do for a in {001..999}; do sleep .1; echo "Client $c message $a"; done | ./chat_client localhost 6767& done)

shows a memory increase.

The allocation breakdown indicates it's due to allocations from deliver for write_msgs_, which is also a chat_message_queue (a std::deque<chat_message>):

    3.4 GiB: std::deque<chat_message, std::allocator<chat_message> >::_M_push_back_aux(chat_message const&) (new_allocator.h:104)
    3.4 GiB: chat_session::deliver(chat_message const&) (stl_deque.h:1526)
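
For context, the per-session side looks like this in the example: chat_session::deliver appends to write_msgs_ and starts an async_write only when none is already in flight (abridged):

    void deliver(const chat_message& msg)
    {
      bool write_in_progress = !write_msgs_.empty();
      write_msgs_.push_back(msg);
      if (!write_in_progress)
      {
        boost::asio::async_write(socket_,
            boost::asio::buffer(write_msgs_.front().data(),
              write_msgs_.front().length()),
            boost::bind(&chat_session::handle_write, shared_from_this(),
              boost::asio::placeholders::error));
      }
    }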

The queue doesn't logically grow, though, so it would appear there's some unfortunate allocation behaviour.

Let's investigate:

Over a complete test run (shown above), the maximum write queue depth for any session is 60.

Upon restarting all the clients (without restarting the server), the queue depth immediately rises to 100, for obvious reasons: every client gets the full history of 100 items delivered at once¹.
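
One hypothetical way to observe these queue depths (this instrumentation is not in the original example) is to track a high-water mark in chat_session::deliver:

    // Hypothetical instrumentation: log the per-session write-queue
    // high-water mark. max_depth_ would be a new std::size_t member of
    // chat_session, initialized to 0; needs <iostream>.
    void deliver(const chat_message& msg)
    {
      bool write_in_progress = !write_msgs_.empty();
      write_msgs_.push_back(msg);
      if (write_msgs_.size() > max_depth_)
      {
        max_depth_ = write_msgs_.size();
        std::cout << "write queue depth: " << max_depth_ << std::endl;
      }
      // ... rest of deliver unchanged ...
    }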

Add shrink_to_fit

Adding a call to shrink_to_fit after each pop_front call in chat_session doesn't make the behaviour any better (apart from the fact that C++03 doesn't have shrink_to_fit, of course).
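
For the record, a sketch of that change in chat_session::handle_write; std::deque::shrink_to_fit is C++11, so this means compiling the cpp03 example in C++11 mode:

    void handle_write(const boost::system::error_code& error)
    {
      if (!error)
      {
        write_msgs_.pop_front();
        write_msgs_.shrink_to_fit(); // C++11; a non-binding request to
                                     // release unused capacity
        if (!write_msgs_.empty())
        {
          // ... issue the next async_write as in the original example ...
        }
      }
      else
      {
        room_.leave(shared_from_this());
      }
    }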

Use a different container

Dropping in a boost::circular_buffer in place of the std::deque strangely reaches a queue depth of 100 easily, even on the first run, but it does change the memory profile dramatically.
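
A sketch of the drop-in replacement; the capacity of 100 is an assumption matching max_recent_msgs, and note that a full boost::circular_buffer overwrites its oldest element on push_back, so a real version would want an explicit overflow policy:

    #include <boost/circular_buffer.hpp>

    // Sketch: give chat_session a bounded ring buffer instead of the deque.
    typedef boost::circular_buffer<chat_message> write_queue; // hypothetical name

    class chat_session : /* ... as in the example ... */
    {
    public:
      chat_session(boost::asio::io_service& io_service, chat_room& room)
        : socket_(io_service), room_(room), write_msgs_(100) // fixed capacity
      {
      }
      // push_back/pop_front/front/empty keep the same spelling as on
      // std::deque, so deliver() and handle_write() compile unchanged.
    private:
      boost::asio::ip::tcp::socket socket_;
      chat_room& room_;
      write_queue write_msgs_;
    };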

Clearly, there's something suboptimal about using deque as... a double-ended queue o.O That's very surprising. I'll try with libc++ instead:

Using libc++ instead

Interestingly, with std::deque<> and shrink_to_fit, libc++ shows a different - still bad - curve. Note also that it reports ever-growing write_msgs_ queue depths. Somehow it behaves really differently... o.O


¹ Even though the clients immediately start chattering as well, the queue depth doesn't go beyond 100 - so throughput is still fine.