Let's say I have a function like this:
int main()
{
    char* str = new char[10];
    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str
    }
    delete[] str;
    return 0;
}
Why would I need to delete str if I am going to end the program anyway? I wouldn't care if that memory goes to a land full of unicorns if I am just going to exit, right? Is it just good practice?
Does it have deeper consequences?
Contrary answer: No, it is a waste of time. A program with a vast amount of allocated data would have to touch nearly every page in order to return all of the allocations to the free list. This wastes CPU time, creates memory pressure for uninteresting data, and possibly even causes the process to swap pages back in from disk. Simply exiting releases all of the memory back to the OS without any further action.
(not that I disagree with the reasons in "Yes", I just think there are arguments both ways)
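A minimal sketch of that trade-off (the node size, count, and timing harness are illustrative assumptions, not measurements):

#include <chrono>
#include <cstdio>

// Each node carries roughly one page of payload.
struct Node { Node* next; char payload[4000]; };

int main()
{
    // Build a large list: on the order of 400 MB of allocations.
    Node* head = nullptr;
    for (int i = 0; i < 100000; i++)
        head = new Node{head};

    // Option A: walk the list and free every node. This touches every
    // page again (possibly swapping some back in from disk) just to
    // rebuild free lists that the OS is about to throw away anyway.
    auto start = std::chrono::steady_clock::now();
    while (head) { Node* next = head->next; delete head; head = next; }
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("teardown took %lld ms\n", static_cast<long long>(ms));

    // Option B: comment out the loop above and simply return; the OS
    // reclaims the whole address space at once, untouched.
    return 0;
}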
TECHNICALLY, a programmer shouldn't rely on the OS to do anything. The OS isn't required to reclaim lost memory in this fashion.
If you do write the code that deletes all your dynamically allocated memory, then you are future-proofing the code and letting others use it in a larger project.
Source: Allocation and GC Myths (PostScript alert!)
I think it's a very poor practice to use malloc/new without calling free/delete.
If the memory's going to get reclaimed anyway, what harm can there be from explicitly deallocating when you need to?
Maybe if the OS "reclaims" memory faster than free does, you'll see increased performance; but that technique won't help with any program that must keep running for a long period of time.
Having said that, I'd recommend you use free/delete.
If you get into this habit, who's to say that you won't one day accidentally apply this approach somewhere it matters?
One should always deallocate resources after one is done with them, be it file handles, memory, or mutexes. By having that habit, one will not make that sort of mistake when building servers. Some servers are expected to run 24x7. In those cases, any leak of any sort means that your server will eventually run out of that resource and hang or crash in some way. A short utility program? Yeah, a leak there isn't that bad. In any server, any leak is death. Do yourself a favor. Clean up after yourself. It's a good habit.
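A hedged sketch of how such a leak plays out in a server (handle_request and its traffic numbers are hypothetical):

#include <cstring>

// Hypothetical per-request handler in a long-running server.
void handle_request(const char* payload)
{
    char* scratch = new char[64 * 1024];            // per-request buffer
    std::strncpy(scratch, payload, 64 * 1024 - 1);
    scratch[64 * 1024 - 1] = '\0';
    // ... process the request ...
    // BUG: missing "delete[] scratch;" -- 64 KB leaks on every request.
    // At a few thousand requests per hour, a 24x7 server exhausts
    // memory within days and hangs or crashes.
}

int main()
{
    for (;;)                     // the server never exits on its own,
        handle_request("ping");  // so "the OS cleans up at exit" never
                                 // gets a chance to help
}

The fix is exactly the habit described above: release the buffer at the end of the handler, every time.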
If in fact your question really is "I have this trivial program, is it OK that I don't free a few bytes before it exits?" the answer is yes, that's fine. On any modern operating system that's going to be just fine. And the program is trivial; it's not like you're going to be putting it into a pacemaker or running the braking systems of a Toyota Camry with this thing. If the only customer is you then the only person you can possibly impact by being sloppy is you.
The problem then comes in when you start to generalize to non-trivial cases from the answer to this question asked about a trivial case.
So let's instead ask two questions about some non-trivial cases.
Question one: my program is a long-running service; should it carefully free everything on a normal shutdown? Yes, and I'll tell you why. One of the worst things that can happen to a long-running service is if it accidentally leaks memory. Even tiny leaks can add up to huge leaks over time. A standard technique for finding and fixing memory leaks is to instrument the allocation heaps so that at shutdown time they log all the resources that were ever allocated without being freed. Unless you like chasing down a lot of false positives and spending a lot of time in the debugger, always free your memory even if doing so is not strictly speaking necessary.
The user is already expecting that shutting the service down might take billions of nanoseconds so who cares if you cause a little extra pressure on the virtual allocator making sure that everything is cleaned up? This is just the price you pay for big complicated software. And it's not like you're shutting down the service all the time, so again, who cares if it's a few milliseconds slower than it could be?
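A minimal sketch of that instrumentation idea (a real tool records sizes and call stacks per block; the single counter here, and its lack of thread safety, are simplifying assumptions):

#include <cstdio>
#include <cstdlib>
#include <new>

// Count of blocks allocated but not yet freed.
static long g_live_blocks = 0;

void* operator new(std::size_t size)
{
    ++g_live_blocks;
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept
{
    if (p) { --g_live_blocks; std::free(p); }
}

// Logs whatever is still outstanding at shutdown. If the program frees
// everything it owns, every block reported here is a real leak rather
// than a false positive.
struct LeakReport
{
    ~LeakReport()
    {
        std::fprintf(stderr, "blocks still allocated at exit: %ld\n",
                     g_live_blocks);
    }
} g_report;

int main()
{
    int* leaked = new int(42);   // never deleted: shows up in the report
    int* freed  = new int(7);
    delete freed;
    (void)leaked;
    return 0;
}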
Question two: my program has detected that its heap might be corrupt, possibly because it is under attack, and needs to terminate right now; should it free everything first? Of course not. The operating system is going to take care of that for you. If your heap is corrupt, the attackers may be hoping that you free memory as part of their exploit. Every millisecond counts. And why would you bother polishing the doorknobs and mopping the kitchen before you drop a tactical nuke on the building?
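A sketch of that "tactical nuke" in code: std::_Exit (C++11) terminates immediately, skipping destructors, atexit handlers, and any walk over a possibly hostile heap (the corruption check is a hypothetical stand-in):

#include <cstdlib>

// Hypothetical stand-in for whatever integrity check the program uses
// (heap canaries, checksums, and so on).
static bool heap_looks_corrupt() { return true; }

int main()
{
    if (heap_looks_corrupt())
    {
        // Do NOT free anything: walking a corrupt heap may be exactly
        // what the attacker is counting on. Leave immediately and let
        // the OS reclaim the address space wholesale.
        std::_Exit(EXIT_FAILURE);
    }
    return 0;
}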
So the answer to the question "should I free memory before my program exits?" is "it depends on what your program does".
You've already got a lot of answers drawn from professional experience. Here is a more naive one, but one I consider to be the underlying truth.
Summary
Q: Why would I need to delete str if I am going to end the program anyway?
A: I will answer this in some detail below.

Q: Is it just good practice?
A: It is considered good practice. Release resources/memory you've acquired once you're sure they are no longer used.

Q: Does it have deeper consequences?
A: You may need to, or you may not; in fact, it is you who decides why. Some explanation follows.
I think it depends. Here are answers to some assumed questions; note that the term program may mean either an application or a function.
A: If destroying the universe were acceptable, then no. However, the program might not work correctly as expected, and might even fail to complete what it was supposed to do. You might want to seriously think about why you would build a program like that.
A: No. See Explanation.
A: Closely.
And I consider that it depends on:
How much does the program care about others, and about the universe where it lives?
(About the term universe, see the Explanation.)
In summary, it depends on what you care about.
Explanation
Important: if we define the term program as a function, then its universe is the application. Many details are omitted below; as a sketch for understanding, though, it goes far enough.
We may all have seen the kind of diagram that illustrates the relationship between application software and system software.
But to make clear the scope that each layer covers, I'd suggest a reversed layout. Since we are talking about software only, the hardware layer is omitted in the following diagram.
With this diagram, we realize that the OS covers the biggest scope, which is the current universe; sometimes we call it the environment. You may imagine the whole architecture as a stack of many disks like the one in the diagram, forming either a cylinder or a torus (a ball works too, but is harder to imagine). Here I should mention that the outermost layer, the OS, is in fact a single body, while the runtime may be either single or multiple depending on the implementation.
It's important that the runtime is responsible to both the OS and the applications, but the latter is more critical: the runtime is the universe of its applications, and if it is destroyed, all applications running under it are gone.
Unlike humans on the Earth: we live here, but we are not made of the Earth, so we could still live in some other suitable environment if the Earth were destroyed while we were elsewhere.
However, we can no longer exist once the universe is destroyed, because we not only live in the universe but are also made of it.
As mentioned above, the runtime is also responsible to the OS. The left circle in the following diagram is what that may look like.
This is mostly like a C program running on the OS. When the relationship between an application and the OS matches this, it is the same situation as the runtime within the OS above. In this diagram, the OS is the universe of the applications. The reason the applications here must be responsible to the OS is that the OS might not virtualize their code, or might allow itself to be crashed by them. If the OS always prevents them from doing so, then it is self-responsible no matter what the applications do. But think about drivers: that is one scenario in which the OS must allow itself to be crashed, since this kind of application is treated as part of the OS.
Finally, let's look at the right circle in the diagram above. In this case, the application itself is the universe. Sometimes we call this kind of application an operating system. If an OS never allows custom code to be loaded and run, then it does everything itself. Even if it does allow it, once the OS itself terminates, the memory goes nowhere but back to the hardware; all the deallocation that may be necessary has to happen before it terminates.
So, how much does your program care about others? How much does it care about its universe? And what is expected of the program once it has done its work? It depends on what you care about.
Your operating system should take care of the memory and clean it up when you exit your program, but it is in general good practice to free any memory you have reserved. Personally, I think it is best to get into the mindset of doing so, since while you are writing simple programs, you are most likely doing so to learn.
Either way, the only way to guarantee that the memory is freed is to do so yourself.
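One way to guarantee it without writing delete at all is to hand ownership to an object whose destructor frees for you; a sketch of the question's program using std::unique_ptr (std::vector<char> would do just as well):

#include <memory>

int main()
{
    // unique_ptr owns the array; its destructor calls delete[] on every
    // path out of main, even if the loop body throws.
    std::unique_ptr<char[]> str(new char[10]);
    for (int i = 0; i < 5; i++)
    {
        // Do stuff with str.get()
    }
    return 0;  // no explicit delete[] needed
}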
Instead of talking about this specific example, I will talk about the general case. It is important to explicitly call delete to deallocate memory because (in the case of C++) you may have code in a destructor that you want to execute, such as writing some data to a log file or sending a shutdown signal to some other process. If you let the OS free your memory for you, the code in your destructors will not be executed.
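A minimal sketch of that point (the log file name is made up): the destructor's side effect happens only if the object is actually destroyed.

#include <cstdio>

struct Logger
{
    ~Logger()
    {
        // The side effect we care about: append a final record to disk.
        if (std::FILE* f = std::fopen("shutdown.log", "a"))
        {
            std::fputs("clean shutdown\n", f);
            std::fclose(f);
        }
    }
};

int main()
{
    Logger* logger = new Logger;

    delete logger;  // destructor runs: the log line is written
    // Without the delete, the OS would still reclaim the bytes at exit,
    // but "clean shutdown" would never be written -- the OS frees
    // memory, it does not run destructors.
    return 0;
}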
On the other hand, most operating systems will deallocate the memory when your program ends. But it is good practice to deallocate it yourself, and as the destructor example above shows, the OS won't run your destructors, which can create undesirable behavior in certain cases!
I personally consider it bad practice to rely on the OS to free your memory (even though it will do so): if you later have to integrate your code with a larger program, you will spend hours tracking down and fixing memory leaks!
So clean your room before leaving!