Question:
I just started reading Effective C++ today and got to the point where the author talks about the operator new.
The book explains very well how you can catch (with various degrees of elegance) the std::bad_alloc exception that the operator new can raise if you run out of memory.
My question is: how often do you check for the case when there isn't enough memory to instantiate an object, if at all, and why? Is it worth the hassle?
Answer 1:
I catch exceptions when I can answer this question:
What will you do with the exception once you've caught it?
Most of the time, my answer is, "I have no idea. Maybe my caller knows." So I don't catch the exception. Let it bubble up to someone who knows better.
When you catch an exception and let your function proceed running, you've said to your program, "Never mind. Everything's fine here." When you say that, by golly, everything had better be fine. So, if you've run out of memory, then after you've handled std::bad_alloc, you should not be out of memory anymore. You shouldn't just return an error code from your function, because then the caller has to check explicitly for that error code, and you're still out of memory. Your handling of that exception should free some memory. Empty some caches, commit some things to disk, etc. But how many of the functions in your program do you really want to be responsible for reducing your program's memory usage?
If you cannot solve the problem that triggered the exception, then do not handle the exception.
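A minimal sketch of that idea, assuming a hypothetical shrink_caches() helper that empties whatever the application can spare; the point is to catch only where you can actually fix the problem:

    #include <cstddef>
    #include <new>
    #include <vector>

    // Hypothetical helper: empties caches, flushes buffers to disk, etc.
    // Returns true if it actually released some memory.
    bool shrink_caches() { /* application-specific */ return false; }

    std::vector<char> allocate_buffer(std::size_t n)
    {
        try {
            return std::vector<char>(n);
        } catch (const std::bad_alloc&) {
            if (!shrink_caches())
                throw;                    // nothing freed: let the caller decide
            return std::vector<char>(n);  // retry once; may still throw
        }
    }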
Answer 2:
The problem is that when you run out of memory there is generally not much you can do except write to a crash dump and exit the program. It's therefore useless to check every new in your program.
One exception to this is when you allocate memory for e.g. loading a file, in which case you just need to inform the user that not enough memory is available for the requested operation.
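For example, a file load is a natural boundary at which to catch the exception and report to the user; the code below is only a sketch with illustrative names:

    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <new>
    #include <string>
    #include <vector>

    // Sketch: load a whole file, reporting an out-of-memory condition to the
    // user instead of letting the exception escape.
    bool load_file(const std::string& path, std::vector<char>& out)
    {
        std::ifstream in(path, std::ios::binary | std::ios::ate);
        if (!in)
            return false;

        const auto size = static_cast<std::size_t>(in.tellg());
        in.seekg(0);

        try {
            out.resize(size);   // may throw std::bad_alloc
        } catch (const std::bad_alloc&) {
            std::cerr << "Not enough memory to load " << path << '\n';
            return false;
        }
        if (size > 0)
            in.read(out.data(), static_cast<std::streamsize>(size));
        return static_cast<bool>(in);
    }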
Answer 3:
I think the most important thing is to always be conscious of the possibility that you might run out of memory. Then decide whether you care or not. Consider trying and catching for each and every allocation -- that's a lot of hassle. Pick between increased productivity and simpler code versus an application that can gracefully handle the no-memory case. I think both gains are extremely valuable in the right contexts, so choose carefully.
Yes, you can make your life easier by defining a template base class that provides a custom operator new and operator delete and sets a new new-handler. Then you can use the Curiously Recurring Template Pattern to derive from this base class. Your derived classes will then gracefully handle bad allocations, but you still need to remember to derive from that base class for each new class you create. Often you may end up with multiple inheritance, which may bring complexities of its own. No matter what you do to handle bad allocations, your code will not be as simple as if you didn't bother.
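A rough sketch of such a mixin (the names are illustrative, not the book's code): each class derived from NewHandlerSupport<T> gets its own class-specific new-handler that is swapped in for the duration of the allocation.

    #include <cstddef>
    #include <new>

    template <typename T>
    class NewHandlerSupport {
    public:
        // Install a handler specific to class T; returns the previous one.
        static std::new_handler set_new_handler(std::new_handler h) noexcept
        {
            std::new_handler old = current_handler_;
            current_handler_ = h;
            return old;
        }

        static void* operator new(std::size_t size)
        {
            // Temporarily swap in the class-specific handler, restore afterwards.
            std::new_handler global = std::set_new_handler(current_handler_);
            try {
                void* p = ::operator new(size);
                std::set_new_handler(global);
                return p;
            } catch (...) {
                std::set_new_handler(global);
                throw;
            }
        }

        static void operator delete(void* p) noexcept { ::operator delete(p); }

    private:
        static std::new_handler current_handler_;
    };

    template <typename T>
    std::new_handler NewHandlerSupport<T>::current_handler_ = nullptr;

    // Usage (CRTP): class Widget : public NewHandlerSupport<Widget> { ... };
    // Widget::set_new_handler(&freeSomeWidgetMemory);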
There is never one answer to this. It is a choice you must make, depending on the context as always.
Answer 4:
Never. I have always considered the default behaviour (there is a std::bad_alloc exception, it is not handled, and the program terminates with an error message) to be good enough for the applications I've worked on.
Answer 5:
In a system using virtual memory with overcommit, malloc() won't return NULL and new won't throw std::bad_alloc; they will return a valid virtual address. When you write to the memory zone that address points to, the system will try to map the virtual address to a physical page. If there's no more memory available at that point, the page fault cannot be satisfied and the OS will typically kill a process instead of reporting the failure to you.
So you catch std::bad_alloc when you're on an embedded system without an MMU and hope you can do something to free some memory.
Answer 6:
Not handling the exception will crash your program, and the support requests you get will be somewhere between "does not work", "crashes randomly", and "I lost all my work of that day". If you think that's ok, then it's not worth the hassle indeed.
The least you can do is tell the user that the application actually ran out of memory, giving them (or support) a clue as to why it is crashing.
Additionally, you can try to preserve results, e.g. saving them to a recovery file. That might be easier to do before you run into the problem, though.
It would be fabulous if you could go as far as giving an error message like "You cannot insert this image because you ran out of memory". And you'd continue working as if nothing happened. However, this would mean all the code behind a user command must be transactional and give a strong exception safety guarantee.
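One way to sketch such a transactional command is the usual copy-then-swap approach for the strong guarantee; Document and insert_image are made-up names for illustration:

    #include <new>
    #include <string>
    #include <vector>

    struct Document {
        std::vector<std::string> images;
    };

    // Build the new state on the side and only commit (swap) if nothing threw;
    // on bad_alloc the document is untouched and the user just gets a message.
    bool insert_image(Document& doc, const std::string& image_path)
    {
        try {
            std::vector<std::string> updated = doc.images;  // work on a copy
            updated.push_back(image_path);                  // may throw std::bad_alloc
            doc.images.swap(updated);                       // commit: never throws
            return true;
        } catch (const std::bad_alloc&) {
            // report "You cannot insert this image because you ran out of memory"
            return false;
        }
    }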
So, identify the cost of randomly running out of memory. Based on that, evaluate which "level of protection" you need to give to your user.
Answer 7:
I think this largely depends on the type of applications you write. If I were writing something that doesn't affect the global stability of the system, let's say a game or a movie player, I would not check for that exception. My application would call std::terminate, and I could log it somewhere, or my kernel would kill my program first to make room for other programs.
If I were writing a program whose stability directly corresponds to that of the system it runs on, let's say a video driver or an init system, I would check for memory exceptions all the time (probably wrapping allocations in a function) and get some memory from a pre-allocated pool in case of an allocation failure.
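A minimal sketch of that pre-allocated-pool idea, using a new-handler that releases the reserve so the failed allocation can be retried (the size and names are arbitrary):

    #include <memory>
    #include <new>

    namespace {
        std::unique_ptr<char[]> reserve;   // pre-allocated "rainy day" block

        // operator new calls this when an allocation fails, then retries.
        void low_memory_handler()
        {
            if (reserve) {
                reserve.reset();               // give the heap room for the retry
            } else {
                std::set_new_handler(nullptr); // reserve already spent: next failure
            }                                  // falls back to throwing std::bad_alloc
        }
    }

    int main()
    {
        reserve.reset(new char[16 * 1024 * 1024]);  // 16 MiB reserve, size arbitrary
        std::set_new_handler(low_memory_handler);
        // ... rest of the program; failed allocations now get one second chance ...
    }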
I think this all depends on proportionality. What do you gain from an amazingly stable movie player if it plays movies more slowly because of your aggressive checking?
Btw, someone answered that malloc won't return 0 when you're out of memory on some systems. That's right, but as the malloc man page points out (Linux-specific):
In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like: $ echo 2 > /proc/sys/vm/overcommit_memory
See also the kernel Documentation directory, files vm/overcommit-accounting and sysctl/vm.txt.
Answer 8:
If you have to allocate memory for e.g. a path buffer where you know it will be only a few bytes, that may not be worth the hassle.
But when you have to allocate memory for bigger objects like images or files, you definitely should do the check.
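One way to make that check explicit for a single large allocation is the nothrow form of new; the names and the 4-bytes-per-pixel assumption below are only illustrative:

    #include <cstddef>
    #include <iostream>
    #include <new>

    // Sketch: allocate a big image buffer and tell the user if it fails,
    // instead of throwing.
    unsigned char* allocate_image(std::size_t width, std::size_t height)
    {
        unsigned char* pixels = new (std::nothrow) unsigned char[width * height * 4];
        if (!pixels)
            std::cerr << "Not enough memory for a " << width << "x" << height
                      << " image\n";
        return pixels;  // caller checks for nullptr and later delete[]s it
    }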
Answer 9:
It's generally not worthwhile unless you're using something like the RAII (Resource Acquisition Is Initialization) pattern. In that case you're likely allocating remote resources in the constructor, which may include using new to create a large buffer.
In this case, it's probably better to catch the exception, since you are in a constructor. Also, since this is RAII, it's likely just that the resource requires too much memory, which allows you to provide the user with a more descriptive error message.
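A small sketch of that idea, with hypothetical names: an RAII class whose constructor turns a bare std::bad_alloc into a message that names the resource.

    #include <cstddef>
    #include <new>
    #include <stdexcept>
    #include <string>
    #include <vector>

    class FrameBuffer {
    public:
        FrameBuffer(std::size_t width, std::size_t height)
        {
            try {
                pixels_.resize(width * height * 4);   // may throw std::bad_alloc
            } catch (const std::bad_alloc&) {
                throw std::runtime_error(
                    "Not enough memory for a " + std::to_string(width) + "x" +
                    std::to_string(height) + " frame buffer");
            }
        }
    private:
        std::vector<unsigned char> pixels_;
    };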
Answer 10:
It used to be that your program died from swap death, with the hard disk thrashing through a pagefile the size of your address space, well before you allocated your last possible byte. But with modern systems holding 4 GB+ of RAM yet running 32-bit processes, this behaviour is much less common. Even the biggest processes may get all the physical RAM they can handle, and in those cases they can run out of memory before the hard disk ever starts thrashing.
There is no general handling strategy, though. Any process that implements caches should flush those, but a good cache would already have been flushed when the OS signalled a low-memory condition. A process that responds to user requests can handle bad_alloc at the user-request granularity. There's generally little benefit in keeping any memory allocated if you run out of memory for the user action; instead, revert to the state before the user action. A non-interactive process, on the other hand, might switch from an O(N) to a slower O(N log N) algorithm.
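For that last point, a hedged sketch of degrading to a slower but less memory-hungry algorithm; both sort routines are hypothetical stand-ins:

    #include <new>
    #include <vector>

    // Hypothetical stand-ins: a fast sort needing O(N) extra memory, and a
    // slower in-place fallback.
    void sort_with_scratch_buffer(std::vector<int>& data);  // may throw std::bad_alloc
    void sort_in_place_slowly(std::vector<int>& data);

    void sort_records(std::vector<int>& data)
    {
        try {
            sort_with_scratch_buffer(data);
        } catch (const std::bad_alloc&) {
            // Out of memory for the fast path: degrade instead of dying.
            sort_in_place_slowly(data);
        }
    }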