I just started reading Effective C++ today and got to the point where the author talks about operator new.
The book explains very well how you can catch (with varying degrees of elegance) the std::bad_alloc exception that operator new can throw if you run out of memory.
My question is: how often do you check for the case where there isn't enough memory to instantiate an object, if at all? And why? Is it worth the hassle?
It's generally not worthwhile unless you're using something like the RAII (Resource Acquisition Is Initialization) pattern. In that case you're likely acquiring remote resources in the constructor, which may include using new to create a large buffer.
Here it's probably better to catch the exception, since you're in a constructor anyway. Also, because this is RAII, it's likely just that one resource requires too much memory, which lets you give the user a more descriptive error message; see the sketch below.
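A minimal sketch of what that might look like, assuming a hypothetical RemoteConnection class whose constructor grabs a large receive buffer (the class and all names are made up for illustration):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical RAII class: acquires a large receive buffer in its constructor.
class RemoteConnection {
public:
    explicit RemoteConnection(std::size_t bufferSize) {
        try {
            buffer_.resize(bufferSize);  // may throw std::bad_alloc
        } catch (const std::bad_alloc&) {
            // Translate the generic failure into a message the caller can act on.
            throw std::runtime_error(
                "RemoteConnection: could not allocate a " +
                std::to_string(bufferSize) + "-byte receive buffer");
        }
    }
private:
    std::vector<char> buffer_;
};
```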
The problem is that when you run out of memory there is generally not much you can do except write a crash dump and exit the program. It's therefore useless to check every new in your program.
One exception to this is when you allocate memory for a specific operation, e.g. loading a file; in that case you just need to inform the user that not enough memory is available for the requested operation.
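Something along these lines, where loadFile and its signature are just an illustration, not a fixed API:

```cpp
#include <fstream>
#include <iostream>
#include <new>
#include <string>
#include <vector>

// Hypothetical loader: the one place where catching std::bad_alloc pays off,
// because the user action can simply be refused and the program carries on.
bool loadFile(const std::string& path, std::vector<char>& out) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return false;
    const std::streamsize size = in.tellg();
    in.seekg(0);
    try {
        out.resize(static_cast<std::size_t>(size));  // may throw std::bad_alloc
    } catch (const std::bad_alloc&) {
        std::cerr << "Not enough memory to load \"" << path << "\" ("
                  << size << " bytes)\n";
        return false;
    }
    return static_cast<bool>(in.read(out.data(), size));
}
```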
I think the most important thing is to always be conscious of the possibility that you might run out of memory. Then decide whether you care or not. Trying and catching around each and every allocation is a lot of hassle, so weigh the increased productivity and simpler code against an application that can gracefully handle the out-of-memory case. Both gains are extremely valuable in the right contexts, so choose carefully.
Yes, you can make your life easier by defining a template base class that provides a custom operator new and operator delete and sets a new new-handler, then deriving from it via the Curiously Recurring Template Pattern (CRTP); a sketch follows. Your derived classes will then gracefully handle bad allocations, but you still need to remember to derive from that base class for each new class you create, and you often end up with multiple inheritance, which brings complexities of its own. No matter what you do to handle bad allocations, your code will not be as simple as if you didn't bother.
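To make that concrete, here is a rough sketch of such a base class, modeled on the new-handler mixin Meyers describes in Effective C++; the names NewHandlerSupport and Widget are illustrative:

```cpp
#include <new>

// Each class derived from this template gets its own operator new that
// installs a class-specific new-handler for the duration of the allocation.
template <typename T>
class NewHandlerSupport {
public:
    static std::new_handler set_new_handler(std::new_handler p) noexcept {
        std::new_handler old = currentHandler_;
        currentHandler_ = p;
        return old;
    }
    static void* operator new(std::size_t size) {
        // Install this class's handler, remembering the global one...
        std::new_handler global = std::set_new_handler(currentHandler_);
        try {
            void* p = ::operator new(size);  // may invoke our handler
            std::set_new_handler(global);    // ...then restore it
            return p;
        } catch (...) {
            std::set_new_handler(global);
            throw;
        }
    }
    static void operator delete(void* p) noexcept { ::operator delete(p); }
private:
    static std::new_handler currentHandler_;
};

template <typename T>
std::new_handler NewHandlerSupport<T>::currentHandler_ = nullptr;

// Curiously Recurring Template Pattern: Widget passes itself as the argument.
class Widget : public NewHandlerSupport<Widget> { /* ... */ };
```

After a call like `Widget::set_new_handler(myHandler);`, any `new Widget` that can't get memory will run `myHandler` instead of the global handler, without affecting allocations of other types.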
There is never one answer to this. It is a choice you must make, depending on the context as always.
It used to be that your program died a swap death long before it allocated its last possible byte, with the hard disk thrashing a pagefile the size of your address space. But with modern systems holding 4 GB+ of RAM yet running 32-bit processes, this behavior is much less common: even the biggest process may get all the physical RAM it can address. In those cases, it can run out of memory before the disk ever starts to thrash.
There is no general handling strategy, though. Any process that implements caches should flush them, but a good cache would already have been flushed when the OS signalled a low-memory condition. A process that responds to user requests can handle bad_alloc at user-request granularity: there is generally little benefit in keeping memory allocated once you've run out of it during a user action, so revert to the state before that action (see the sketch below). A non-interactive process, on the other hand, might switch from a fast O(N) algorithm to a slower but more memory-frugal O(N log N) one.
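A sketch of the user-request-granularity approach; the command loop and the parseRequestedSize stub are invented purely for illustration:

```cpp
#include <iostream>
#include <new>
#include <string>
#include <vector>

// Stub standing in for whatever the user's command demands; here a
// command is simply a byte count, which is an assumption of this sketch.
std::size_t parseRequestedSize(const std::string& cmd) {
    return std::stoul(cmd);
}

int main() {
    std::string command;
    while (std::getline(std::cin, command)) {
        try {
            // All memory for this action lives inside the try block...
            std::vector<char> workspace(parseRequestedSize(command));
            // ... perform the action using workspace ...
            std::cout << "done\n";
        } catch (const std::bad_alloc&) {
            // ...so on failure it is released automatically and the
            // program reverts to the state before the user action.
            std::cerr << "Not enough memory for that action; "
                         "nothing was changed.\n";
        }
    }
}
```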