Let's say I have a function like this:
int main()
{
    char* str = new char[10];
    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str
    }
    delete[] str;
    return 0;
}
Why would I need to delete str if I am going to end the program anyways? I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right? Is it just good practice?
Does it have deeper consequences?
I cannot agree more with Eric Lippert's excellent advice:
Other answers here have provided arguments for and against both, but the real crux of the matter is what your program does. Consider a non-trivial example where the dynamically allocated instance is of a custom class whose destructor performs actions that produce side effects. In such a situation, the argument about memory leaks is trivial; the more important problem is that failing to call delete on such a class instance results in undefined behavior. [basic.life] 3.8 Object lifetime, para 4:
"...if there is no explicit call to the destructor or if a delete-expression is not used to release the storage, the destructor shall not be implicitly called and any program that depends on the side effects produced by the destructor has undefined behavior."
So the answer to your question is, as Eric says, "it depends on what your program does".
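For instance, here is a minimal sketch of that situation (the class name, file name, and logging scenario are made up for illustration): the destructor's side effect of writing the buffered log to disk is simply lost if delete is never called.

#include <fstream>
#include <string>

// Hypothetical logger that buffers messages in memory and only writes
// them to disk in its destructor.
class BufferedLogger
{
public:
    void log(const std::string& msg) { buffer_ += msg + '\n'; }
    ~BufferedLogger()
    {
        std::ofstream out("log.txt"); // flushed and closed by ~ofstream
        out << buffer_;
    }
private:
    std::string buffer_;
};

int main()
{
    BufferedLogger* logger = new BufferedLogger;
    logger->log("something happened");
    delete logger; // skip this and ~BufferedLogger never runs: log.txt is never written
    return 0;
}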
It's a fair question, and there are a few things to consider when answering:
Important note: delete's freeing of memory is almost just a side effect. The important thing it does is destruct the object. With RAII designs, this could mean anything from closing files, freeing OS handles, and terminating threads to deleting temporary files. Some of these actions would be handled by the OS automatically when your process exits, but not all.
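A sketch of that last point (the TempFile class name is made up for illustration): the OS closes the file handle at process exit, but it will never delete the file itself, so this destructor does real work the OS won't do for you.

#include <cstdio>

// Hypothetical RAII wrapper: creates a scratch file, removes it in the destructor.
class TempFile
{
public:
    explicit TempFile(const char* path) : path_(path), file_(std::fopen(path, "w")) {}
    ~TempFile()
    {
        if (file_) std::fclose(file_);
        std::remove(path_); // this cleanup is skipped if the destructor never runs
    }
private:
    const char* path_;
    std::FILE* file_;
};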
In your example, there's no reason NOT to call delete. But there's no reason to call new either, so you can sidestep the issue this way. Or, you can sidestep the delete (and the exception-safety issues involved) by using smart pointers...
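Applied to the code in the question, a smart-pointer sketch (using C++11's std::unique_ptr in its array form) looks like this; the array is released automatically when the pointer goes out of scope, even if an exception is thrown inside the loop.

#include <memory>

int main()
{
    // The array specialization of unique_ptr calls delete[] automatically
    // when str goes out of scope.
    std::unique_ptr<char[]> str(new char[10]);
    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str.get()
    }
    return 0; // no explicit delete[] needed
}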
So, generally, you should always make sure your object's lifetime is properly managed.
But it's not always easy: Workarounds for the static initialization order fiasco often mean that you have no choice but to rely on the OS cleaning up a handful of singleton-type objects for you.
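One such workaround is the construct-on-first-use idiom, sketched below (the globalName function is a made-up example): the object is allocated with new and deliberately never deleted, precisely so it cannot be destroyed in the wrong order during static destruction.

#include <string>

// Construct-on-first-use: the string is created on first call and
// intentionally leaked, so it remains valid while other static objects
// are being destroyed. The OS reclaims the memory at process exit.
std::string& globalName()
{
    static std::string* name = new std::string("default");
    return *name;
}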
new and delete are reserved-keyword brothers. They should cooperate with each other within a code block or through the parent object's lifecycle. Whenever the younger brother commits a fault (new), the older brother will want to clean it up (delete). Then the mother (your program) will be happy and proud of them.
Another reason that I haven't seen mentioned yet is to keep the output of static and dynamic analyzer tools (e.g. valgrind or Coverity) cleaner and quieter. Clean output with zero memory leaks or zero reported issues means that when a new issue pops up it is easier to detect and fix.
You never know how your simple example will be used or how it will evolve. It's better to start as clean and crisp as possible.
Yes, it is good practice. You should NEVER assume that your OS will take care of your memory deallocation; if you get into this habit, it will screw you later on.
To answer your question, however: upon exiting from main, the OS frees all memory held by the process, including memory allocated for any threads you may have spawned and any variables you allocated. The OS will take care of freeing up that memory for others to use.