Are memory leaks ever ok? [closed]

Posted 2019-01-08 02:35

Is it ever acceptable to have a memory leak in your C or C++ application?

What if you allocate some memory and use it until the very last line of code in your application (for example, in a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
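
For concreteness, here's a minimal sketch of the kind of thing I mean (the names are just illustrative): a one-time allocation, used for the whole life of the process and deliberately never freed, on the assumption that the OS reclaims it at exit.

```cpp
#include <string>

// Constructed on first use, deliberately never destroyed. The allocation
// stays reachable for the entire life of the process and is reclaimed by
// the OS at exit; never running the destructor also sidesteps
// static-destruction-order problems.
std::string& app_name() {
    static std::string* name = new std::string("my-app");
    return *name;
}

int main() {
    // ... use app_name() until the very last line of the program ...
    return app_name().empty() ? 1 : 0;  // no delete anywhere
}
```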

What if a third-party library forced this situation on you? Would you refuse to use that third-party library, no matter how great it otherwise might be?

I can see only one practical disadvantage: these benign leaks will show up as false positives in memory-leak detection tools.
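
For example (a rough sketch; terminology borrowed from Valgrind's memcheck), leak checkers typically distinguish memory that is still reachable at exit from memory whose last pointer has been dropped, and the "benign" pattern lands in a report alongside real bugs:

```cpp
#include <cstdlib>

char* g_config = nullptr;  // a global keeps this allocation reachable

int main() {
    // Typically reported as "still reachable" at exit:
    // a live pointer to the block remains in a global.
    g_config = static_cast<char*>(std::malloc(256));

    // Typically reported as "definitely lost":
    // the only pointer to the block is overwritten.
    char* tmp = static_cast<char*>(std::malloc(64));
    tmp = nullptr;
    (void)tmp;
    return 0;
}
```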

30 Answers
Ridiculous、
#2 · 2019-01-08 03:31

In this sort of question, context is everything. Personally I can't stand leaks, and in my code I go to great lengths to fix them when they crop up. But it is not always worth it to fix a leak, and when people are paying me by the hour I have on occasion told them it was not worth my fee to fix a leak in their code. Let me give you an example:

I was triaging a project, doing some perf work and fixing a lot of bugs. There was a leak during the application's initialization that I tracked down and fully understood. Fixing it properly would have required a day or so of refactoring a piece of otherwise functional code. I could have done something hacky (like stuffing the value into a global and freeing it at some point where I knew it was no longer in use), but that would have just caused more confusion for the next person who had to touch the code.

Personally I would not have written the code that way in the first place, but most of us don't always get to work on pristine, well-designed codebases, and sometimes you have to look at these things pragmatically. The time it would have taken me to fix that 150-byte leak could instead be spent making algorithmic improvements that shaved off megabytes of RAM.

Ultimately, I decided that leaking 150 bytes in an app that used around a gig of RAM and ran on a dedicated machine was not worth fixing, so I wrote a comment saying that it leaked, what would need to change in order to fix it, and why it was not worth doing at the time.

劳资没心，怎么记你
#3 · 2019-01-08 03:31

I think you've answered your own question. The biggest drawback is how these leaks interfere with memory-leak detection tools, and for certain types of applications that drawback is HUGE.

I work with legacy server applications that are supposed to be rock solid, but they have leaks, and the globals DO get in the way of the memory-detection tools. It's a big deal.

In the book "Collapse", Jared Diamond wonders what the man who cut down the last tree on Easter Island was thinking, since that was the tree he would have needed to build a canoe to get off the island. I wonder about the day, many years ago, when the first global was added to our codebase. THAT was the day it should have been caught.

走好不送
#4 · 2019-01-08 03:32

If you allocate a bunch of heap at the beginning of your program, and you don't free it when you exit, that is not a memory leak per se. A memory leak is when your program loops over a section of code, and that code allocates heap and then "loses track" of it without freeing it.

In fact, there is no need to make calls to free() or delete right before you exit. When the process exits, all of its memory is reclaimed by the OS (this is certainly the case on POSIX systems; on other OSes, particularly embedded ones, YMMV).

The only caution I'd have about not freeing memory at exit time is this: if you ever refactor your program so that it, for example, becomes a service that waits for input, does whatever your program does, then loops around to wait for another service call, then what you've coded can turn into a real memory leak.
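
A sketch of that failure mode (function names are hypothetical): the same allocate-and-never-free code that was harmless in a run-once program becomes a growing leak once it's wrapped in a service loop.

```cpp
#include <cstdlib>

// Hypothetical per-request state; stands in for whatever the program builds.
char* build_state() {
    return static_cast<char*>(std::malloc(1 << 20));  // 1 MiB
}

void serve_one_request() {
    char* state = build_state();
    // ... handle the request using `state` ...
    (void)state;
    // No free(state): fine back when the process exited right after this,
    // but now the pointer is simply dropped on every call.
}

int main() {
    for (;;) {                 // refactored into a long-running service
        serve_one_request();   // leaks ~1 MiB per request; usage grows forever
    }
}
```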

Animai°情兽
#5 · 2019-01-08 03:33

No.

As professionals, the question we should be asking ourselves is not "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.

I like to keep things simple. And the simple rule is that my program should have no memory leaks.

That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.
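
In C++, one reason the simple rule is cheap to follow (a sketch, assuming modern C++ and RAII ownership types; not part of any specific codebase) is that you can make freeing automatic rather than a decision:

```cpp
#include <memory>
#include <string>
#include <vector>

int main() {
    // Ownership types release their memory in their destructors,
    // so there is no "remember to free" step to get wrong.
    auto config = std::make_unique<std::string>("loaded-config");
    std::vector<int> samples(1024, 0);
    // ... use *config and samples ...
}   // both destroyed here automatically: nothing leaks, nothing to triage
```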

It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.

But ultimately it's a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks are bad habits that will eventually bite me in the rear.

To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?

It is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could even be circumstances where it was harmless. But if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.

If a third-party library forced this situation on me, it would lead me to seriously question the overall quality of the library. It would be as if I test-drove a car and found a couple of loose washers and nuts in one of the cupholders: not a big deal in itself, but it suggests a lack of commitment to quality, so I would consider alternatives.

对你真心纯属浪费
#6 · 2019-01-08 03:33

You have to first realize that there's a big difference between a perceived memory leak and an actual memory leak. Analysis tools very frequently report red herrings, labeling something as leaked (memory, or resources such as handles) when it actually isn't. Often this is due to the analysis tool's architecture. For example, certain analysis tools will report runtime objects as memory leaks because they never see those objects freed. But the deallocation occurs in the runtime's shutdown code, which the analysis tool might not be able to see.
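
Here's a sketch of one such red herring (the setup is hypothetical, but the shape is common): a tool that takes its leak snapshot when main() returns runs before static destructors, so it flags memory that the runtime's shutdown code is about to free.

```cpp
#include <cstdlib>

// A global whose destructor runs *after* main() returns.
struct Registry {
    char* data = nullptr;
    ~Registry() { std::free(data); }  // freed during static destruction
} g_registry;

int main() {
    g_registry.data = static_cast<char*>(std::malloc(512));
    // A tool that reports leaks at the end of main() sees 512 live bytes
    // here, even though the shutdown code frees them moments later.
    return 0;
}
```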

With that said, there will still be times when you have actual memory leaks that are either very difficult to find or very difficult to fix. So now the question becomes: is it ever OK to leave them in the code?

The ideal answer is "no, never." A more pragmatic answer may be "no, almost never." Very often in real life you have a limited amount of time and resources and an endless list of tasks. When one of those tasks is eliminating memory leaks, the law of diminishing returns very often comes into play. You could eliminate, say, 98% of all memory leaks in an application in a week, but the remaining 2% might take months. In some cases, because of the application's architecture, it might even be impossible to eliminate certain leaks without a major refactoring of the code. You have to weigh the costs and benefits of eliminating that remaining 2%.

霸刀☆藐视天下
#7 · 2019-01-08 03:39

Historically, it did matter on some operating systems in some edge cases, and similar edge cases could exist in the future.

Here's an example: on SunOS in the Sun 3 era, there was an issue where, if a process called exec (or more traditionally fork and then exec), the new process inherited the same memory footprint as the parent, and that footprint could not be shrunk. If a parent process allocated half a gig of memory and didn't free it before calling exec, the child process started out using that same half gig (even though it hadn't allocated it). This behavior was best exhibited by SunTools (Sun's default windowing system at the time), which was a memory hog. Every app it spawned was created via fork/exec and inherited the SunTools footprint, quickly filling up swap space.
