Question:
I've run into memory leaks many times. Usually when I'm malloc-ing like there's no tomorrow, or dangling FILE *s like dirty laundry. I generally assume (read: hope desperately) that all memory is cleaned up at least when the program terminates. Are there any situations where leaked memory won't be collected when the program terminates, or crashes?
If the answer varies widely from language to language, then let's focus on C(++).
Please note the hyperbolic usage of the phrases 'like there's no tomorrow' and 'dangling ... like dirty laundry'. Unsafe malloc-ing can hurt the ones you love. Also, please use caution with dirty laundry.
Answer 1:
No. Operating systems free all resources held by processes when they exit.
This applies to all resources the operating system maintains: memory, open files, network connections, window handles...
That said, if the program is running on an embedded system without an operating system, or with a very simple or buggy operating system, the memory might be unusable until a reboot. But if you were in that situation you probably wouldn't be asking this question.
The operating system may take a long time to free certain resources. For example the TCP port that a network server uses to accept connections may take minutes to become free, even if properly closed by the program. A networked program may also hold remote resources such as database objects. The remote system should free those resources when the network connection is lost, but it may take even longer than the local operating system.
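For the TCP case in particular, servers commonly opt in to immediate reuse of a port that is stuck in TIME_WAIT via SO_REUSEADDR. A minimal C++ sketch of that workaround (port 8080 is just an example; error handling omitted):

    // Sketch: allow a listening socket to rebind a port still in TIME_WAIT.
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int make_listener(unsigned short port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        // Without this, bind() may fail with EADDRINUSE for minutes
        // after the previous server instance exited.
        int yes = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }

    int main() {
        int fd = make_listener(8080);  // example port only
        close(fd);
        return 0;
    }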
Answer 2:
The C Standard does not specify that memory allocated by malloc is released when the program terminates. This is done by the operating system, and not all OSes (usually these are in the embedded world) release the memory when the program terminates.
Answer 3:
All the answers have covered most aspects of your question w.r.t. modern OSes, but historically there is one case worth mentioning if you have ever programmed in the DOS world. Terminate and Stay Resident (TSR) programs would return control to the system but remain resident in memory, to be revived by a software or hardware interrupt. It was normal to see messages like "out of memory! try unloading some of your TSRs" when working on these OSes.
So technically the program terminates, but because it still resides in memory, any memory it leaked would not be released unless you unloaded the program.
So you can consider this another case, apart from OSes not reclaiming memory either because they are buggy or because the embedded OS is designed that way.
I remember one more example. Customer Information Control System (CICS), a transaction server that runs primarily on IBM mainframes, is pseudo-conversational. When executed, it processes the user-entered data, generates another set of data for the user, transfers it to the user's terminal node, and terminates. When the attention key is pressed, it revives to process another set of data. Because of the way it behaves, technically the OS won't reclaim memory from terminated CICS programs unless you recycle the CICS transaction server.
Answer 4:
Like the others have said, most operating systems will reclaim allocated memory upon process termination (and probably other resources like network sockets, file handles, etc).
Having said that, memory may not be the only thing you need to worry about when dealing with new/delete (instead of raw malloc/free). The memory allocated by new will get reclaimed, but the destructors of objects that were never deleted will not run. Perhaps the destructor of some class writes a sentinel value to a file upon destruction. If the process just terminates, the file handle gets closed and the memory reclaimed, but that sentinel value never gets written.
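A minimal sketch of that failure mode (the Sentinel class and file name are made up for illustration):

    #include <cstdio>

    struct Sentinel {
        ~Sentinel() {
            // Side effect that only happens if the destructor runs.
            if (std::FILE* f = std::fopen("sentinel.txt", "w")) {
                std::fputs("clean shutdown\n", f);
                std::fclose(f);
            }
        }
    };

    int main() {
        Sentinel* s = new Sentinel;  // never deleted
        (void)s;
        // The OS reclaims the heap block at exit, but ~Sentinel()
        // never runs, so sentinel.txt is never written.
        return 0;
    }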
Moral of the story: always clean up after yourself. Don't let things dangle. Don't rely on the OS cleaning up after you. Clean up after yourself.
Answer 5:
This is more likely to depend on the operating system than on the language. Ultimately, any program in any language gets its memory from the operating system.
I've never heard of an operating system that doesn't recycle memory when a program exits or crashes. So if your program has an upper bound on the memory it needs to allocate, then just allocating and never freeing is perfectly reasonable.
Answer 6:
If the program is ever turned into a dynamic component ("plugin") that is loaded into another program's address space, it will be troublesome even on an operating system with tidy memory management: the host process does not terminate when the plugin is unloaded, so every leak survives each load/unload cycle and accumulates. We don't even have to think about the code being ported to less capable systems.
On the other hand, releasing all memory can impact the performance of a program's cleanup.
In one program I was working on, a certain test case required 30 seconds or more for the program to exit, because it recursed through the graph of all dynamic memory, releasing it piece by piece.
A reasonable solution is to have the capability there and cover it with test cases, but turn it off in production code so the application quits fast.
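One common way to get both behaviours is to compile the exhaustive teardown only into test builds, e.g. keyed off NDEBUG. A sketch under that assumption (the Node type and shutdown() function are made up for illustration):

    #include <cstdlib>

    struct Node { Node* next; };

    // Exhaustive teardown: compiled into test builds only, so leak
    // checkers (Valgrind, ASan) see a clean exit; release builds skip
    // it and let the OS reclaim the whole address space at once.
    void shutdown(Node* head) {
    #ifndef NDEBUG
        while (head) {
            Node* next = head->next;
            std::free(head);
            head = next;
        }
    #else
        (void)head;  // fast exit: no per-node work
    #endif
    }

    int main() {
        Node* head = nullptr;
        for (int i = 0; i < 3; ++i) {
            Node* n = static_cast<Node*>(std::malloc(sizeof(Node)));
            n->next = head;
            head = n;
        }
        shutdown(head);
        return 0;
    }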
Answer 7:
All operating systems deserving the title will clean up the mess your process made after termination. But there are always unforeseen events: what if the cleanup was somehow denied access to some resource, and a poor programmer did not foresee the possibility, so it never retries a bit later?
It is always safer to just clean up yourself if memory leaks are mission-critical; otherwise it's not really worth the effort, IMO, if that effort is costly.
Edit:
You do need to clean up memory leaks if they are in a place where they will accumulate, like inside loops. The memory leaks I speak of are ones whose size stays constant throughout the course of the program; if you have a leak of any other sort, it will most likely be a serious problem sooner or later.
In technical terms, if your leaks are of memory 'complexity' O(1) they are fine in most cases; O(log n) is already unpleasant (and in some cases fatal); and O(n) or worse is intolerable.
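Roughly, in code (the sizes and counts here are arbitrary, for illustration only):

    #include <cstdlib>

    int main() {
        // O(1) leak: one fixed-size block for the program's lifetime.
        // Bounded, so usually harmless in practice.
        void* once = std::malloc(4096);
        (void)once;

        // O(n) leak: a fresh block per unit of work, never freed.
        // Grows with the workload and will eventually exhaust memory.
        for (int i = 0; i < 1000000; ++i) {
            void* per_item = std::malloc(64);
            (void)per_item;
        }
        return 0;
    }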
Answer 8:
Shared memory on POSIX-compliant systems persists until shm_unlink is called or the system is rebooted.
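A minimal sketch of that lifecycle, assuming a made-up segment name "/myshm" and omitting error handling:

    // POSIX shared memory outlives the process: it stays until
    // shm_unlink() or a reboot. Link with -lrt on some systems.
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int fd = shm_open("/myshm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        // ... use p ...

        munmap(p, 4096);
        close(fd);
        // Without this call, the segment (e.g. /dev/shm/myshm on Linux)
        // persists after the process exits, even though all its other
        // memory has been freed.
        shm_unlink("/myshm");
        return 0;
    }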
Answer 9:
If you have interprocess communication, this can lead to other processes never completing and consuming resources, depending on the protocol.
To give an example: I was once experimenting with printing to a PDF printer in Java. When I terminated the JVM in the middle of a print job, the PDF spooling process remained active, and I had to kill it in the task manager before I could retry printing.