So I use Qt a lot in my development and love it. The usual design pattern with Qt objects is to allocate them using new.
Pretty much all of the examples (especially code generated by Qt Designer) do absolutely no checking for the std::bad_alloc exception. Since the objects allocated (usually widgets and such) are small, this is hardly ever a problem. After all, if you fail to allocate something like 20 bytes, odds are there's not much you can do to remedy the problem.
Currently, I've adopted a policy of wrapping "large" allocations (anything above a page or two in size) in a try/catch. If that fails, I display a message to the user. For pretty much anything smaller, I just let the app crash with a std::bad_alloc exception.
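A minimal sketch of that policy (the helper name is illustrative, not a Qt API): the "large" allocation is wrapped in a try/catch and reports failure to the caller, while small allocations elsewhere just use plain new and let std::bad_alloc propagate.

```cpp
#include <cstddef>
#include <new>
#include <string>

// Illustrative helper for the policy above: wrap a "large" allocation
// in try/catch; on failure, report instead of crashing. The caller can
// then show the message to the user.
char* allocateLargeBuffer(std::size_t bytes, std::string& error) {
    try {
        return new char[bytes];
    } catch (const std::bad_alloc&) {
        error = "Not enough memory for a buffer of " +
                std::to_string(bytes) + " bytes";
        return nullptr;
    }
}
```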
So, I wonder what the schools of thought on this are?
Is it good policy to check each and every new operation? Or only the ones I expect to have the potential to fail?
Also, it is clearly a whole different story when dealing with an embedded environment where resources can be much more constrained. I am asking in the context of a desktop application, but would be interested in answers involving other scenarios as well.
Handle it in main() (or the equivalent top-level exception handler in Qt).

The reason is that std::bad_alloc happens either when you exhaust the memory space (2 or 3 GB on 32-bit systems; it doesn't happen on 64-bit systems) or when you exhaust swap space. Modern heap allocators aren't tuned to run from swap space, so that will be a slow, noisy death; chances are your users will kill your app well beforehand because its UI is no longer responding. And on Linux, the OS memory handling is so poor by default that your app is likely to be killed automatically.
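A sketch of that top-level handler, with Qt specifics left out (in Qt the event loop is not exception-safe, so in practice one catches around QApplication::exec() in main or overrides QCoreApplication::notify). The function name here is hypothetical:

```cpp
#include <iostream>
#include <new>

// Hypothetical top-level wrapper: runEventLoop stands in for whatever
// your application's entry point is (e.g. QApplication::exec()).
int guardedRun(int (*runEventLoop)()) {
    try {
        return runEventLoop();
    } catch (const std::bad_alloc&) {
        // Last-resort path: avoid allocating here. Attempt an emergency
        // save of data you still trust, then exit with an error code.
        std::cerr << "Fatal: out of memory\n";
        return 1;
    }
}
```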
So, there is little you can do. Confess you have a bug, and see if you can save any work the user may have done. To be able to do so, it's best to unwind as much of the stack as possible. Yes, this may in fact lose some of the last user input, but it's that very action that likely triggered the OOM situation. The goal is to save whatever data you can trust.
Handle the exception when you can. If an allocation fails and your application can't continue without that bit of memory, why bother checking for the error?
Handle the error when it can be handled, when there is a meaningful way to recover. If there's nothing you can do about the error, just let it propagate.
This is a relatively old thread, but it did come up when I was searching for "std::bad_alloc" considerations when doing new/delete overriding here in 2012.
I would not take the attitude of "oh well, there's nothing you can do anyhow" as a viable option. I personally use, in my own allocations, the "if (alloc()) { } else { /* error handling */ }" style mentioned above. This way I can properly handle and/or report each case in its own meaningful context.
Now, some other possible solutions are: 1) Override new/delete for the application, where you can add your own out-of-memory handling.
Although, as other posters state, and in particular without knowledge of the specific context, the main option is probably to just shut down the application. If this is the case, you will want your handler to either have preallocated the memory it needs and/or use static memory, so that the handler itself does not run out.
Here you could at least have a dialog pop up and say something along the lines of: "The application ran out of memory. This is a fatal error and it must now terminate. The application must be run with the minimum system memory requirements met. Send debug reports to xxxx". The handler could also save any work in progress, etc., as fits the application.
At any rate, you wouldn't want to use this for something critical like (warning, amateur humor ahead): the space shuttle, a heart rate monitor, a kidney dialysis machine, etc. These things require much more robust solutions, of course: fail-safes, emergency garbage-collection methods, 100% testing/debugging/fuzzing, etc.
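A sketch of option 1, replacing the global allocation functions. This is simplified: a fully conforming operator new would also loop, calling the installed new handler before giving up, and the reporting shown is just a placeholder for your own handling.

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Sketch: replace the global operator new/delete to centralize
// out-of-memory handling for the whole application.
void* operator new(std::size_t size) {
    if (void* p = std::malloc(size ? size : 1))
        return p;
    std::fputs("allocation failed\n", stderr);  // your handling/reporting here
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }
```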
2) Similar to the first, set the global new handler with set_new_handler() and a handler of your own to catch the out-of-memory condition at global scope. It can at least handle things as mentioned in #1.
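A sketch of option 2 (the names and reserve size are illustrative). The handler releases a preallocated reserve so its own reporting has memory to work with; a new handler must free memory, throw std::bad_alloc, or terminate, and here it throws so a top-level catch can still try to save work. Per the discussion above, the realistic action is often to report and terminate instead.

```cpp
#include <cstdio>
#include <new>

// Illustrative emergency reserve: released on OOM so the handler's own
// reporting/cleanup cannot itself run out of memory.
static char* g_reserve = nullptr;

void outOfMemoryHandler() {
    delete[] g_reserve;
    g_reserve = nullptr;
    std::fputs("Fatal: out of memory. Send debug reports to xxxx\n", stderr);
    // Rethrow so a top-level handler can attempt an emergency save;
    // std::abort() would be the terminate-immediately alternative.
    throw std::bad_alloc{};
}

void installOomHandler() {
    g_reserve = new char[64 * 1024];  // illustrative reserve size
    std::set_new_handler(outOfMemoryHandler);
}
```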
The real question is really: should you catch std::bad_alloc exceptions at all? In most cases, if you run out of memory you are doomed anyway and might as well consider ending your program.
I usually catch exceptions at the point where the user has initiated an action. For console applications this means in main; for GUI applications I put handlers in places like button on-click handlers and such.
I believe it makes little sense to catch exceptions in the middle of an action; the user usually expects the operation to either succeed or fail completely.
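A sketch of that granularity, using a hypothetical per-action wrapper (not a Qt API): the catch sits at the boundary of the user-initiated action, so the operation as a whole either succeeds or fails, and the failure is reported where the action started.

```cpp
#include <functional>
#include <new>
#include <string>

// Hypothetical wrapper around a user-initiated action (e.g. the body of
// a button's on-click handler). Catching at the action boundary means
// the whole operation succeeds or fails as one unit.
std::string runUserAction(const std::function<void()>& action) {
    try {
        action();
        return "ok";
    } catch (const std::bad_alloc&) {
        return "Operation failed: out of memory";
    }
}
```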
The problem is not "where to catch" but "what to do when an exception is caught".
If you want to check, instead of wrapping with try/catch you'd better use new (std::nothrow).
My usual practice is:
In a non-interactive program, catch at the main level and display an adequate error message there.
In a program with a user-interaction loop, I also catch at the loop, so that the user can close some things and try to continue.
Exceptionally, there are other places where a catch is meaningful, but that's rare.
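The check-without-try/catch alternative mentioned above can be sketched with nothrow new (a sketch, assuming that is the intended form; the function name is illustrative):

```cpp
#include <cstddef>
#include <new>

// Checking without try/catch: nothrow new returns a null pointer on
// failure instead of throwing std::bad_alloc.
bool allocateChecked(std::size_t bytes) {
    char* p = new (std::nothrow) char[bytes];
    if (p == nullptr)
        return false;   // handle/report the failure here
    // ... use the buffer ...
    delete[] p;
    return true;
}
```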