When can a memory leak occur?

Published 2020-06-03 02:37

Question:

I don't know what to think here...

We have a component that runs as a service. It runs perfectly well on my local machine, but on some other machine (both machines have 2 GB of RAM) it starts to generate bad_alloc exceptions on the second and subsequent days. The odd thing is that the memory usage of the process stays constant, at approximately the 50 MB level. Stranger still, by means of tracing messages we have localized the exception to be thrown from a stringstream object which does nothing but insert 1-2 KB of data into the stream. We're using STL-Port, if that matters.

Now, when you get a bad_alloc exception, you think it's a memory leak. But all our manual allocations are wrapped in smart pointers. Also, I can't understand how a stringstream object could run out of memory when the whole process uses only ~50 MB (the memory usage stays approximately constant, and certainly doesn't rise, from day to day).

I can't provide you with code, because the project is really big, and the part that throws the exception really does nothing but create a stringstream, << some data into it, and then log it.

So, my question is: how can a memory leak/bad_alloc occur when the process uses only 50 MB out of 2 GB? What other wild guesses do you have about what could possibly be wrong?

Thanks in advance. I know the question is vague, etc.; I'm just sort of desperate, and I tried my best to explain the problem.

Answer 1:

bad_alloc doesn't necessarily mean there is not enough memory. The allocation functions can also fail because the heap is corrupted; you might have a buffer overrun, code writing into deleted memory, etc.

You could also use Valgrind or one of its Windows replacements to find the leak/overrun.



Answer 2:

One likely cause, given your description, is that you try to allocate a block of some unreasonably big size because of an error in your code. Something like this:

 size_t numberOfElements;  // uninitialized
 if( .... ) {
    numberOfElements = obtain();
 }
 Element* elements = new Element[numberOfElements];

Now, if numberOfElements is left uninitialized, it can contain some unreasonably big number, so you effectively try to allocate a block of, say, 3 GB, which the memory manager refuses to do.

So it may be not that your program is short on memory, but that it tries to allocate more memory than it could possibly be allowed even under the best conditions.
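A minimal sketch of this failure mode (the helper name and sizes are illustrative, not from the question): an absurdly large request is refused with bad_alloc even though the process itself uses almost no memory.

```cpp
#include <cstddef>
#include <new>

// Returns true if requesting 'bytes' bytes fails with bad_alloc.
bool allocationFails(std::size_t bytes) {
    try {
        char* p = new char[bytes];
        delete[] p;
        return false;                    // the request was satisfied
    } catch (const std::bad_alloc&) {
        return true;                     // the allocator refused the request
    }
}
```

Here the garbage value an uninitialized size_t might hold can be simulated by passing something like `static_cast<std::size_t>(-1) / 2`, which no allocator will satisfy, while a small request goes through fine.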



Answer 3:

Just a hunch, but I have had trouble in the past when allocating arrays like so:

int array1[SIZE];  // SIZE limited by COMPILER to the size of the stack frame

when SIZE is a large number.

The solution was to allocate with the new operator:

int* array2 = new int[SIZE];  // SIZE limited only by OS/Hardware

I found this very confusing at the time; the reason turned out to be the limited stack frame size, as discussed in Martin York's answer to: Is there a max array length limit in C++?
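To illustrate the difference, a small sketch (the SIZE value and function name are assumptions for the example; SIZE is chosen to be far larger than a typical 1-8 MB stack but trivial for the heap):

```cpp
#include <cstddef>
#include <vector>

// A size far beyond a typical stack frame, but trivial for the heap.
const std::size_t SIZE = 16 * 1024 * 1024;   // 16M ints = 64 MB

// int array1[SIZE];     // as a local variable this would likely
//                       // overflow the stack and crash at startup

std::size_t allocateOnHeap() {
    std::vector<int> array2(SIZE);  // heap-backed equivalent of new int[SIZE],
    return array2.size();           // freed automatically on scope exit
}
```

Using std::vector instead of a raw `new int[SIZE]` gets the same heap allocation without the matching delete[] obligation.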

All the best,

Tom



Answer 4:

Check the profile of other processes on the machine using Process Explorer from Sysinternals: you will get bad_alloc if memory is short, even if it's not your process causing the memory pressure.

Check your own memory usage using UMDH to take snapshots and compare the usage profile over time. You'll have to start this early in the cycle to avoid blowing up the tool, but if your process's behaviour is not degrading over time (i.e. no sudden pathological behaviour) you should get accurate info on its memory usage at time T vs. time T+t.



Answer 5:

Another long shot: you don't say in which of the three operations the error occurs (construction, << or logging), but the problem may be memory fragmentation rather than memory consumption. Maybe stringstream can't find a contiguous memory block large enough to hold a couple of KB.

If this is the case, and if you exercise that function on the first day (without mishap), then you could make the stringstream a static variable and reuse it. As far as I know, stringstream does not deallocate its buffer space during its lifetime, so if it establishes a big buffer on the first day it will keep it from then on (for added safety you could run a 5 KB dummy string through it when it is first constructed).
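A sketch of that reuse idea (the function and message names are made up for the example): a single long-lived stream is cleared between uses, so any buffer it has grown stays with it.

```cpp
#include <sstream>
#include <string>

// Reuse one long-lived stream instead of constructing a fresh one each call.
std::string formatMessage(int id) {
    static std::ostringstream stream;   // buffer survives between calls
    stream.str("");                     // discard old contents...
    stream.clear();                     // ...and reset any error flags
    stream << "message #" << id;
    return stream.str();
}
```

Whether the capacity is actually retained after str("") is implementation-dependent, as the answer itself hedges; the str("")/clear() pair is the standard way to reset the stream either way.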



Answer 6:

I fail to see why a stream would throw. Do you have a dump of the failed process? Or perhaps attach a debugger to it to see what the allocator is trying to allocate?

But if you did overload operator <<, then perhaps your code does have a bug.

Just my 2 (euro) cts...

1. Fragmentation ?

The memory could be fragmented.

At some moment you try to allocate SIZE bytes, but the allocator finds no contiguous chunk of SIZE bytes in memory, and so it throws a bad_alloc.

Note: This answer was written before I read this possibility was ruled out.

2. signed vs. unsigned ?

Another possibility would be the use of a signed value for the size to be allocated:

char* p = new char[i];

If the value of i is negative (e.g. -1), the implicit conversion to the unsigned integral type size_t produces a huge value, far beyond what is available to the memory allocator.

As signed integral types are quite common in user code, if only to represent an invalid value (e.g. -1 for a failed search), this is a real possibility.
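A sketch of this signed-size trap (the helper name is illustrative). Since C++11 the failure can surface as std::bad_array_new_length, which derives from std::bad_alloc, so the same catch clause sees both:

```cpp
#include <cstddef>
#include <new>

// Returns true if allocating 'i' elements fails with bad_alloc.
bool negativeSizeThrows(int i) {
    try {
        char* p = new char[static_cast<std::size_t>(i)];  // -1 becomes SIZE_MAX
        delete[] p;
        return false;
    } catch (const std::bad_alloc&) {  // also catches bad_array_new_length
        return true;
    }
}
```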



Answer 7:

 ~className() {
     // delete stuff in here
 }
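Presumably the fragment means pairing every allocation made in the constructor with a release in the destructor. A minimal sketch of that pattern (the class and member names are made up):

```cpp
#include <cstddef>

class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}
    ~Buffer() { delete[] data_; }        // "delete stuff in here"
    std::size_t size() const { return size_; }
private:
    Buffer(const Buffer&);               // non-copyable: a shallow copy
    Buffer& operator=(const Buffer&);    // would lead to a double delete
    char* data_;
    std::size_t size_;
};
```

Making the class non-copyable (or writing proper copy operations) matters here: with the compiler-generated copy, two objects would delete the same pointer.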


Answer 8:

By way of example, memory leaks can occur when you use the new operator in C++ and forget to use the delete operator.

Or, in other words, when you allocate a block of memory and forget to deallocate it.
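That pattern, and its smart-pointer fix, in a short sketch (names are illustrative; std::make_unique requires C++14):

```cpp
#include <memory>

// The leak: the pointer goes out of scope, the allocation never does.
void leaky() {
    int* block = new int[100];
    block[0] = 7;
    // ... no delete[] block; the memory is unreachable from here on
}

// The fix: ownership held by a smart pointer, delete[] runs automatically.
int safe() {
    std::unique_ptr<int[]> block = std::make_unique<int[]>(100);
    block[0] = 7;
    return block[0];
}   // block's destructor calls delete[] here
```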