
Heap fragmentation when using byte arrays

Posted 2019-03-14 19:52

Question:

I have a C# 4.0 application (single producer/single consumer) which transfers huge amounts of data in chunks. Although there is no new memory allocation, I run out of memory after a while.

I profiled the memory using the Red Gate memory profiler, and it shows a lot of free memory. It says the free memory cannot be used because of fragmentation.

I use a BlockingCollection as the buffer, with byte arrays as its members:

BlockingCollection<byte[]> _segments = new BlockingCollection<byte[]>(8);
// producer:
_segments.Add(buffer);
// consumer:
byte[] buffer = _segments.Take();

How can I avoid managed memory fragmentation?

Answer 1:

You have probably run into the large object heap (LOH) problem: objects larger than 85,000 bytes are allocated on the large object heap, which is not compacted, and that can lead to strange out-of-memory situations. Although the performance has apparently improved in .NET 4, it is far from perfect. The solution is basically to use your own buffer pool containing a few statically allocated chunks of memory, and to reuse those; a sketch follows.
There are plenty of questions about this on Stack Overflow.
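
A minimal sketch of such a pool, assuming fixed-size chunks; the BufferPool, Rent and Return names here are illustrative, not an existing API:

using System.Collections.Concurrent;

class BufferPool
{
    private readonly BlockingCollection<byte[]> _free;

    public BufferPool(int count, int chunkSize)
    {
        _free = new BlockingCollection<byte[]>(count);
        for (int i = 0; i < count; i++)
            _free.Add(new byte[chunkSize]); // allocate every buffer once, up front
    }

    // Blocks until a buffer has been returned, so the pool never grows.
    public byte[] Rent() { return _free.Take(); }

    // Hands the same array back for reuse instead of allocating a new one.
    public void Return(byte[] buffer) { _free.Add(buffer); }
}

The producer then calls Rent() instead of new byte[...], and the consumer calls Return(buffer) once it has processed a chunk, so the same few arrays circulate for the lifetime of the application.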

Update: Microsoft provides a buffer manager (System.ServiceModel.Channels.BufferManager) as part of the WCF stack. There is also one on CodeProject.
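
A brief sketch of using that WCF BufferManager (requires a reference to the System.ServiceModel assembly); the pool and buffer sizes below are illustrative choices, not recommendations:

using System.ServiceModel.Channels;

// maxBufferPoolSize caps the total bytes the pool retains;
// maxBufferSize is the largest single buffer it will pool.
BufferManager manager = BufferManager.CreateBufferManager(16 * 1024 * 1024, 1024 * 1024);

byte[] buffer = manager.TakeBuffer(80 * 1024); // may return an array larger than requested
// ... fill and consume the buffer ...
manager.ReturnBuffer(buffer); // return it to the pool instead of abandoning it to the GC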



Answer 2:

How long are your byte[] arrays? Do they fall into the small object heap or the large object heap? If you are seeing memory fragmentation, I would say they fall into the LOH.

You should therefore reuse the same byte arrays (use a pool) or use smaller chunks, as sketched below. The LOH is never compacted, so it can become quite fragmented. Sadly, there is no way around this other than knowing about the limitation and avoiding it.
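
For the smaller-chunks option, the key number is the 85,000-byte LOH threshold: arrays below it land on the small object heap, which the GC does compact. A minimal sketch (the 80 KB figure is an illustrative choice, not a required value):

const int ChunkSize = 80 * 1024; // 81,920 bytes: safely below the 85,000-byte LOH threshold
byte[] chunk = new byte[ChunkSize]; // allocated on the small object heap, which is compacted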



Answer 3:

Although the GC does not compact the large object heap automatically, you can still trigger a compaction programmatically. Note that this API requires .NET Framework 4.5.1 or later, so it is not available on the .NET 4.0 runtime the question targets. The following snippet illustrates how this can be achieved:

// GCSettings lives in the System.Runtime namespace.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the next blocking collection compacts the LOH, then the setting resets to Default