Should we check if memory allocations fail?

Posted 2019-02-06 08:20

I've seen a lot of code that checks for NULL pointers whenever an allocation is made. This makes the code verbose, and if it's done inconsistently, only when the programmer felt like it, it doesn't even guarantee that the program won't crash when the address space runs out. Besides, if the program can't make any more allocations, it couldn't do its job anyway, right?

So my question is, isn't it better for most programs not to check at all and just let the program crash if memory runs out? At least the code is more readable that way.
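(For context, one common middle ground between checking every call and not checking at all is a tiny wrapper that turns a failed allocation into an immediate, deliberate abort. This is just an illustrative sketch; the name `xmalloc` is a convention, not anything from the question:)

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: crash loudly and immediately on allocation failure
 * instead of scattering NULL checks through the code or dereferencing a
 * NULL pointer somewhere far from the failed malloc. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        abort();
    }
    return p;
}
```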

Note

I'm talking about desktop apps that run on modern computers (at least 2 GB of address space), and that most definitely don't operate space shuttles, life support systems, or BP's oil platforms. Most importantly, I'm talking about programs that use malloc but never really go above 5 MB of memory usage.

7 Answers
姐就是有狂的资本
#2 · 2019-02-06 09:17

In a hosted environment, checking the return value of malloc doesn't make much sense nowadays. Most machines have a 64-bit virtual address space, and you'd need a very long time to exhaust it. Your program will most likely fail at a completely different point, namely when physical memory plus swap is exhausted. Long before that it will show ridiculous performance, because it is doing nothing but swapping, and the user will have hit Ctrl-C well before malloc ever returns NULL.

Segfaulting "nicely" on a null pointer dereference would give you a clear point in the debugger to see where things failed. But in my experience a failed malloc has never been the cause.

When programming for embedded systems the picture changes completely. There you definitely should check for a failed malloc.
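(As a hedged sketch of what that looks like: on a constrained system a failed allocation is reported to the caller as a recoverable error rather than being allowed to crash the program. The buffer type and error convention below are illustrative, not from any particular project:)

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative buffer type for a memory-constrained target. */
typedef struct {
    unsigned char *data;
    size_t         len;
} buffer_t;

/* Returns 0 on success, -1 if the allocation failed; the caller
 * decides how to degrade (retry later, use a smaller buffer, ...). */
int buffer_init(buffer_t *buf, size_t len)
{
    buf->data = malloc(len);
    if (buf->data == NULL) {
        buf->len = 0;
        return -1;
    }
    memset(buf->data, 0, len);
    buf->len = len;
    return 0;
}
```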

Edit: To address the edit of the question: the kind of programs/systems described there are clearly not "embedded". I have never seen malloc fail under the circumstances described there.
