function doesn't throw bad_alloc exception

Posted 2019-06-23 20:40

I'm trying to do an exercise from Stroustrup's C++PL4 book. The task is:

Allocate so much memory using new that bad_alloc is thrown. Report how much memory was allocated and how much time it took. Do this twice: once not writing to the allocated memory and once writing to each element.

The following code doesn't throw a std::bad_alloc exception. When I run the program, I just get the message "Killed" in the terminal.

Also, the code finishes in about 4 seconds, but when I uncomment the memory-usage lines

// ++i;
// std::cout << "Allocated " << i*80 << " MB so far\n";

the program runs for a few minutes instead. After some time it prints that terabytes of memory have been allocated, but I don't see much change in the System Monitor app. Why is that?

I use Linux, and I'm watching memory usage in the System Monitor app.

#include <iostream>
#include <vector>
#include <chrono>

void f()
{
    std::vector<int*> vpi {};
    int i {};
    try{
        for(;;){
            int* pi = new int[10000];
            vpi.push_back(pi);
            // ++i;
            // std::cout << "Allocated " << i*80 << " MB so far\n";
        }       
    }
    catch(const std::bad_alloc&){
        std::cerr << "Memory exhausted\n";
    }
}

int main() {
    auto t0 = std::chrono::high_resolution_clock::now();
    f();
    auto t1 = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(t1-t0).count() << " ms\n";
}

1 Answer
chillily · 2019-06-23 21:26

In the modern, cruel world, calling new (as well as malloc() or even brk()) doesn't necessarily allocate physical memory. It just sends a request (through a chain of layers) to the OS, and the OS assigns a virtual memory area (rounded up to whole memory pages). Only subsequently accessing that memory performs the actual allocation.
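You can see the distinction directly on Linux in /proc/self/status: VmSize (virtual) grows the moment new returns, while VmRSS (resident) only grows once the pages are actually written. A minimal, Linux-specific sketch (assuming a 4-byte int):

#include <fstream>
#include <iostream>
#include <string>

// Print this process's virtual (VmSize) and resident (VmRSS) sizes.
void print_vm(const char* label)
{
    std::ifstream status {"/proc/self/status"};
    std::string line;
    while (std::getline(status, line))
        if (line.rfind("VmSize:", 0) == 0 || line.rfind("VmRSS:", 0) == 0)
            std::cout << label << ' ' << line << '\n';
}

int main()
{
    print_vm("before:");
    int* p = new int[100000000];        // ~400 MB of *virtual* memory
    print_vm("after new:");             // VmSize jumps, VmRSS barely moves
    for (int i = 0; i < 100000000; ++i)
        p[i] = 1;                       // touch every page
    print_vm("after write:");           // now VmRSS jumps too
    delete[] p;
}

This is exactly why the question's loop can report terabytes "allocated" while System Monitor (which shows resident memory) barely changes.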

Moreover, modern OSes allow memory "overcommit". Depending on the OS and its settings, applications can demand, in total, more memory than the OS could assign even theoretically, counting all its swap areas etc., all without any visible problem. See the Linux kernel's overcommit-accounting documentation, for example.
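On Linux the policy lives in /proc/sys/vm/overcommit_memory (0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict accounting). A tiny sketch to check it:

#include <fstream>
#include <iostream>

int main()
{
    // Read the current Linux overcommit policy.
    std::ifstream f {"/proc/sys/vm/overcommit_memory"};
    int mode;
    if (f >> mode)
        std::cout << "vm.overcommit_memory = " << mode << '\n';
    else
        std::cerr << "could not read the overcommit setting\n";
}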

This is done because, in real life, a situation where all applications actually use all of their allocated memory at the same time is quite improbable. More often, 99.99...% of the time, applications use only parts of their memory, and not all at once, so the OS has a chance to serve their requests seamlessly.

To increase the chances of actually triggering a memory allocation error, you can write to each element right after allocating it, as in the sketch below; but even then I wouldn't call it a guarantee, just a way to improve the odds.
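Applied to the question's loop, that is also the second variant the exercise asks for. A sketch reusing the question's 10000-int block size:

#include <iostream>
#include <new>
#include <vector>

// Variant of f() that writes to every element right after allocating,
// so the kernel must back each page with physical memory.
void f_touch()
{
    std::vector<int*> vpi;
    try {
        for (;;) {
            int* pi = new int[10000];
            for (int j = 0; j < 10000; ++j)
                pi[j] = j;                        // touch every element
            vpi.push_back(pi);
        }
    }
    catch (const std::bad_alloc&) {
        std::cerr << "Memory exhausted after "
                  << vpi.size() * 10000 * sizeof(int) / (1024 * 1024)
                  << " MB\n";
    }
}

Even with the writes, a default Linux desktop is still more likely to end this program with the OOM killer ("Killed") than with bad_alloc.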

In the worst case, when the OS finds that it actually can't back enough (virtual) memory because too many apps are simultaneously accessing their seemingly allocated data, the OS memory manager starts a special procedure called the "OOM killer", which simply kills heuristically (= randomly :)) chosen applications.

So relying on bad_alloc is a bad idea nowadays. Sometimes you can reliably receive it (e.g. when artificially limiting your app with ulimit/setrlimit), but in general your application will run in an environment that guarantees nothing. Just don't be a memory hog, and pray for the rest :)
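For the exercise itself, capping the address space before allocating makes bad_alloc reproducible. A POSIX-specific sketch (the 1 GB cap is an arbitrary choice, assumed to be well below the machine's RAM):

#include <iostream>
#include <new>
#include <sys/resource.h>

int main()
{
    // Cap this process's virtual address space at 1 GB so that operator
    // new fails with std::bad_alloc instead of waking the OOM killer.
    rlimit rl {};
    rl.rlim_cur = 1024ull * 1024 * 1024;
    rl.rlim_max = 1024ull * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        std::cerr << "setrlimit failed\n";
        return 1;
    }
    try {
        for (;;)
            new int[10000];                   // leak on purpose until it throws
    }
    catch (const std::bad_alloc&) {
        std::cerr << "Memory exhausted\n";    // now reliably reached
    }
}

With the cap in place, the loop hits the limit long before the OOM killer gets interested.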
