I have the following code:
const int bufferSize = 1024 * 1024;
var buffer = new byte[bufferSize];
for (int i = 0; i < 10; i++)
{
    const int writesCount = 400;
    using (var stream = new MemoryStream(writesCount * bufferSize))
    {
        for (int j = 0; j < writesCount; j++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
        stream.Close();
    }
}
which I run on a 32-bit machine.
The first iteration finishes just fine, but on the next iteration I get a System.OutOfMemoryException on the line that news up the MemoryStream.
Why isn't the previous MemoryStream's memory reclaimed, despite the using statement? How do I force the release of the memory used by the MemoryStream?
I don't think the problem is the garbage collector not doing its job. If the GC is under memory pressure, it should run and reclaim the 400 MB you've just allocated.
This is more likely down to the runtime failing to find a contiguous 400 MB block in the process's virtual address space.
Rather, an “out of memory” error happens because the process is unable
to find a large enough section of contiguous unused pages in its
virtual address space to do the requested mapping.
You should read Eric Lippert's blog entry "Out Of Memory" Does Not Refer to Physical Memory
You're far better off doing both of the following:
- Reusing the memory block you've allocated (why are you creating another one of exactly the same size?)
- Allocating much smaller chunks (less than 85 KB), as in the sketch below
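For instance, a minimal sketch of the second point (the ChunkedCopy name and the 80 KB size are illustrative, not from the original question):

using System.IO;

static class ChunkedCopy
{
    // 80 KB keeps the buffer below the 85,000-byte LOH threshold,
    // so it lives on the small object heap and can be compacted.
    private const int ChunkSize = 80 * 1024;

    public static void Copy(Stream source, Stream destination)
    {
        var chunk = new byte[ChunkSize]; // allocated once, reused for every read
        int bytesRead;
        while ((bytesRead = source.Read(chunk, 0, chunk.Length)) > 0)
        {
            destination.Write(chunk, 0, bytesRead);
        }
    }
}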
Prior to .NET 4.5, the CLR constructed two heaps, the Small Object Heap (SOH) and the Large Object Heap (LOH). See Large Object Heap Improvements in .NET 4.5 by Brandon Bray. Your MemoryStream is being allocated in the LOH, which is not compacted (defragmented) for the duration of the process, making it much more likely that repeated allocations of this size will throw an OutOfMemoryException.
The CLR manages two different heaps for allocation, the small object
heap (SOH) and the large object heap (LOH). Any allocation greater
than or equal to 85,000 bytes goes on the LOH. Copying large objects
has a performance penalty, so the LOH is not compacted unlike the SOH.
Another defining characteristic is that the LOH is only collected
during a generation 2 collection. Together, these have the built-in
assumption that large object allocations are infrequent.
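You can observe the threshold directly (a sketch: GC.GetGeneration reporting freshly allocated LOH objects as generation 2 is behavior of the desktop CLR and may differ on other runtimes):

using System;

class LohThresholdDemo
{
    static void Main()
    {
        var small = new byte[80 * 1024]; // under 85,000 bytes: small object heap
        var large = new byte[85 * 1024]; // over 85,000 bytes: large object heap

        Console.WriteLine(GC.GetGeneration(small)); // 0 (new SOH objects start in gen 0)
        Console.WriteLine(GC.GetGeneration(large)); // 2 (the LOH is reported as gen 2)
    }
}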
Looks like you're allocating more than your system can handle. Your code runs fine on my machine, but if I change it like this:
const int bufferSize = 1024 * 1024 * 2;
I get the same error as you.
But if I change the target processor to x64, the code runs, which seems logical as an x64 process can address a lot more memory.
Detailed explanation in this article: http://www.guylangston.net/blog/Article/MaxMemory
And some information in this question: Maximum Memory a .NET process can allocate
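If you're unsure which mode your process is actually running in, you can check at runtime (these are standard BCL members; Environment.Is64BitProcess requires .NET 4.0 or later):

using System;

class BitnessCheck
{
    static void Main()
    {
        Console.WriteLine(IntPtr.Size);                        // 4 in a 32-bit process, 8 in a 64-bit one
        Console.WriteLine(Environment.Is64BitProcess);         // false when running as x86
        Console.WriteLine(Environment.Is64BitOperatingSystem);
    }
}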
First of all, Dispose() does not guarantee that memory will be released (it does not mark objects for GC collection, and in the case of MemoryStream it releases nothing, since MemoryStream has no unmanaged resources). The only reliable way to free the memory used by a MemoryStream is to lose all references to it and wait for garbage collection to occur (and if you get an OutOfMemoryException, the garbage collector has already tried and failed to free enough memory). Also, allocating such large objects (anything > 85,000 bytes) has consequences: these objects go to the large object heap (LOH), which can become fragmented (and cannot be compacted). Since a .NET object must occupy a contiguous sequence of bytes, this can lead to a situation where you have enough memory overall but no room for a large object. The garbage collector won't help in this case.
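To see this in action, here is a rough sketch (GC.GetTotalMemory returns approximate numbers, and the exact figures vary by runtime and build configuration):

using System;
using System.IO;

class DisposeDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        var stream = new MemoryStream(100 * 1024 * 1024);
        stream.Dispose(); // releases nothing: the 100 MB buffer is still referenced

        Console.WriteLine(GC.GetTotalMemory(false) - before); // still roughly 100 MB

        stream = null; // drop the last reference
        Console.WriteLine(GC.GetTotalMemory(true) - before);  // now roughly 0
    }
}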
It seems like the main problem here is that the reference to the stream object is kept on the stack, preventing garbage collection of the stream object (even forcing garbage collection won't help, as the GC considers the object still alive; you can verify this by creating a WeakReference to it). Refactoring this sample fixes it:
static void Main(string[] args)
{
    const int bufferSize = 1024 * 1024 * 2;
    var buffer = new byte[bufferSize];
    for (int i = 0; i < 10; i++)
    {
        const int writesCount = 400;
        Write(buffer, writesCount, bufferSize);
    }
}

static void Write(byte[] buffer, int writesCount, int bufferSize)
{
    using (var stream = new MemoryStream(writesCount * bufferSize))
    {
        for (int j = 0; j < writesCount; j++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}
Here is a sample which proves that the object can't be garbage collected:
static void Main(string[] args)
{
    const int bufferSize = 1024 * 1024 * 2;
    var buffer = new byte[bufferSize];
    WeakReference wref = null;
    for (int i = 0; i < 10; i++)
    {
        if (wref != null)
        {
            // force garbage collection
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            // check if the object is still alive
            Console.WriteLine(wref.IsAlive); // true
        }

        const int writesCount = 400;
        using (var stream = new MemoryStream(writesCount * bufferSize))
        {
            for (int j = 0; j < writesCount; j++)
            {
                stream.Write(buffer, 0, buffer.Length);
            }

            // a weak reference won't prevent garbage collection
            wref = new WeakReference(stream);
        }
    }
}
Try forcing garbage collection when you are sure it is necessary to clean up unreferenced objects:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Another alternative is to use a Stream with external storage: a FileStream, for example.
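For example, a sketch of the same write loop backed by a temporary file instead of memory:

using System;
using System.IO;

class FileBackedWrites
{
    static void Main()
    {
        const int bufferSize = 1024 * 1024;
        const int writesCount = 400;
        var buffer = new byte[bufferSize];

        string path = Path.GetTempFileName();
        try
        {
            // The 400 MB of data lives on disk, not in the process's address space.
            using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write))
            {
                for (int j = 0; j < writesCount; j++)
                {
                    stream.Write(buffer, 0, buffer.Length);
                }
            }
        }
        finally
        {
            File.Delete(path);
        }
    }
}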
But in the general case it would be better to use one small enough buffer (an array, allocated once) and reuse it for read/write calls. Avoid having many large objects in .NET (see CLR Inside Out: Large Object Heap Uncovered).
Update
Assuming that writesCount is constant, why not allocate one buffer and reuse it?
const int bufferSize = 1024 * 1024;
const int writesCount = 400;
byte[] streamBuffer = new byte[writesCount * bufferSize];
byte[] buffer = new byte[bufferSize];

for (int i = 0; i < 10; i++)
{
    using (var stream = new MemoryStream(streamBuffer))
    {
        for (int j = 0; j < writesCount; j++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}