std::async function running serially

Posted 2019-03-01 07:31

Question:

When using std::async with std::launch::async in a for loop, my code runs serially in the same thread, as if each async call waits for the previous one before launching. According to the notes in the std::async reference, this can happen if the returned std::future is not bound to a reference, but that isn't the case in my code. Can anyone figure out why it's running serially?

Here is my code snippet:

class __DownloadItem__ { // DownloadItem is just a "typedef shared_ptr<__DownloadItem__> DownloadItem"
    std::string buffer;
    time_t last_access;
    std::shared_future<std::string> future;
};

for (uint64_t start : chunksToDownload) {
    DownloadItem cache = std::make_shared<__DownloadItem__>();
    cache->last_access = time(NULL);
    cache->future =
            std::async(std::launch::async, &FileIO::download, this, api, cache,
                       cacheName, start, start + BLOCK_DOWNLOAD_SIZE - 1);
}

The future is being stored in a shared future because multiple threads might be waiting on the same future.

I'm also using GCC 6.2.1 to compile it.

Answer 1:

The std::future returned by std::async blocks in its destructor. That means when you reach the } of

for (uint64_t start : chunksToDownload) {
    DownloadItem cache = std::make_shared<__DownloadItem__>();
    cache->last_access = time(NULL);
    cache->future =
            std::async(std::launch::async, &FileIO::download, this, api, cache,
                       cacheName, start, start + BLOCK_DOWNLOAD_SIZE - 1);
}  // <-- When we get here

cache is destroyed, which in turn calls the destructor of future, which waits for the thread to finish.

What you need to do is store each future returned by async in a persistent object declared outside of the for loop, for example a container of futures, as sketched below.
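A minimal sketch of that idea, reusing the names from the question; the futures vector itself is just illustrative:

std::vector<std::shared_future<std::string>> futures;  // declared outside the loop
futures.reserve(chunksToDownload.size());

for (uint64_t start : chunksToDownload) {
    DownloadItem cache = std::make_shared<__DownloadItem__>();
    cache->last_access = time(NULL);
    cache->future =
            std::async(std::launch::async, &FileIO::download, this, api, cache,
                       cacheName, start, start + BLOCK_DOWNLOAD_SIZE - 1);
    futures.push_back(cache->future);  // extra owner keeps the shared state alive past this iteration
}
// Nothing has blocked yet; all downloads run concurrently.
// Any waiting now happens where the futures (or the caches) are actually used.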



Answer 2:

That's a misfeature of std::async as defined by C++11. Its futures' destructors are special and wait for the operation to finish. More detailed info can be found on Scott Meyers' blog.

cache is being destroyed at the end of each loop iteration, thereby calling the destructors of its subobjects.

Use packaged_task, or keep a container of copies of the shared pointers to your cache, so that the destructors have nothing to wait for. Personally, I'd go with packaged_task; see the sketch below.
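A rough sketch of the packaged_task route; the lambda standing in for the download and its signature are placeholders, not the question's FileIO::download:

#include <cstdint>
#include <future>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::vector<std::shared_future<std::string>> results;

    for (uint64_t start : {0ull, 1024ull, 2048ull}) {
        // packaged_task produces a future whose destructor does not block.
        std::packaged_task<std::string(uint64_t)> task([](uint64_t offset) {
            return "chunk starting at " + std::to_string(offset);  // stand-in for the real download
        });
        results.push_back(task.get_future().share());
        std::thread(std::move(task), start).detach();  // explicit thread, no std::async
    }

    for (auto& f : results)
        f.get();  // wait only where the results are actually needed
}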



Answer 3:

As you noticed yourself, the destructor of the future returned by std::async blocks and waits for the async operation to finish (i.e., for the future to become ready). In your case, the cache object goes out of scope at the end of each loop iteration and gets destroyed, together with the future it holds, so you see the effect you described.
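For reference, a tiny standalone program showing that blocking destructor at work; the 100 ms sleep is just an illustrative stand-in for the download:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    auto t0 = steady_clock::now();

    for (int i = 0; i < 3; ++i) {
        // The returned future is a temporary; its destructor runs at the end of
        // this statement and waits for the task, so the iterations run serially.
        std::async(std::launch::async, [] {
            std::this_thread::sleep_for(milliseconds(100));
        });
    }

    auto elapsed = duration_cast<milliseconds>(steady_clock::now() - t0).count();
    std::cout << "elapsed: " << elapsed << " ms\n";  // ~300 ms rather than ~100 ms
}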