Question:
I have a problem where a couple of 3-dimensional arrays allocate a huge amount of memory, and the program sometimes needs to replace them with bigger/smaller ones, which throws an OutOfMemoryException.
Example: there are five allocated 96 MB arrays (200x200x200, with 12 bytes of data in each entry), and the program needs to replace them with 210x210x210 arrays (111 MB each). It does so in a manner similar to this:
array1 = new Vector3[210,210,210];
Where array1 through array5 are the same fields used previously. This should make the old arrays candidates for garbage collection, but seemingly the GC does not act quickly enough and leaves the old arrays allocated while allocating the new ones - which causes the OOM - whereas if they were freed before the new allocations, the space would be enough.
What I'm looking for is a way to do something like this:
GC.Collect(array1) // this would set the reference to null and free the memory
array1 = new Vector3[210,210,210];
I'm not sure if a full garbage collection would be a good idea, since that code may (in some situations) need to be executed fairly often.
Is there a proper way of doing this?
Answer 1:
This is not an exact answer to the original question, "how to force GC", but I think it will help you to reexamine your issue.
After seeing your comment,
- Putting in the GC.Collect(); does seem to help, although it still does not solve the problem completely - for some reason the program still crashes when about 1.3GB are allocated (I'm using System.GC.GetTotalMemory( false ); to find the real amount allocated).
I suspect you may have memory fragmentation. If an object is large (85,000 bytes under the .NET 2.0 CLR if I remember correctly; I do not know whether that has changed), the object will be allocated in a special heap, the Large Object Heap (LOH). The GC does reclaim the memory used by unreachable objects in the LOH, yet it does not perform compaction in the LOH as it does for the other heaps (gen0, gen1, and gen2), due to performance.
If you frequently allocate and deallocate large objects, the LOH will become fragmented, and even though you have more free memory in total than what you need, you may not have a contiguous memory space anymore, and hence will get an OutOfMemoryException. For example, none of the five 96 MB holes left behind by the old arrays can hold a single new 111 MB array, even though 480 MB are free in total.
I can think of two workarounds at this moment.
- Move to a 64-bit machine/OS and take advantage of it :) (Easiest, but possibly hardest as well, depending on your resource constraints)
- If you cannot do #1, then try allocating a huge chunk of memory up front and reusing it (this may require writing a helper class to manipulate a smaller array that in fact resides in a larger one; see the sketch below) to avoid fragmentation. This may help a little bit, yet it may not completely solve the issue, and you may have to deal with the complexity.
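Here is a minimal sketch of what such a helper class could look like, assuming Vector3 is the 12-byte struct from the question. The class name Vector3Grid, the flat indexing scheme, and the Resize method are illustrative assumptions, not part of the original answer:

using System;

public sealed class Vector3Grid
{
    private readonly Vector3[] _buffer;  // one large allocation, made once and reused
    private int _sizeY, _sizeZ;          // current logical Y and Z extents

    public Vector3Grid(int maxElements)
    {
        _buffer = new Vector3[maxElements];  // pays the LOH allocation cost a single time
    }

    // Reinterpret the existing buffer as a sizeX x sizeY x sizeZ grid without reallocating.
    public void Resize(int sizeX, int sizeY, int sizeZ)
    {
        if ((long)sizeX * sizeY * sizeZ > _buffer.Length)
            throw new ArgumentException("Requested grid exceeds the preallocated buffer.");
        _sizeY = sizeY;
        _sizeZ = sizeZ;
    }

    // Flat indexing: consecutive z values are adjacent in memory.
    public Vector3 this[int x, int y, int z]
    {
        get { return _buffer[(x * _sizeY + y) * _sizeZ + z]; }
        set { _buffer[(x * _sizeY + y) * _sizeZ + z] = value; }
    }
}

Since the buffer is allocated once and merely reinterpreted on resize, the LOH never sees the repeated allocate/free pattern that causes the fragmentation.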
Answer 2:
It seems you've run into a LOH (Large Object Heap) fragmentation issue.
- Large Object Heap
- CLR Inside Out: Large Object Heap Uncovered
You can check whether you're having LOH fragmentation issues using SOS.
Check this question for an example of how to use SOS to inspect the LOH.
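For instance, a typical inspection session under WinDbg with the .NET 2.0 runtime looks roughly like this (these are standard SOS commands, but treat the sequence as a sketch rather than a recipe):

.loadby sos mscorwks    $$ load SOS for the .NET 2.0 workstation runtime
!eeheap -gc             $$ list the GC heap segments, including the Large Object Heap
!dumpheap -stat         $$ summarize heap objects by type and size to spot large survivors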
Answer 3:
Forcing a garbage collection is not always a good idea (it can actually promote the lifetimes of objects in some situations, since whatever survives the forced collection gets moved to an older generation). If you have to, you would use:
array1 = null;                      // drop the reference so the old array becomes unreachable
GC.Collect();                       // collect it before attempting the new allocation
array1 = new Vector3[210,210,210];
Answer 4:
Isn't this just large object heap fragmentation? Objects > 85,000 bytes are allocated on the large object heap. The GC frees up space in this heap but never compacts the remaining objects. This can result in insufficient contiguous memory to successfully allocate a large object.
Alan.
Answer 5:
If I had to speculate, your problem is not really that you are going from Vector3[200,200,200] to Vector3[210,210,210]; most likely you have taken similar steps before this one:
i.e.
// first you have
Vector3[10,10,10];
// then
Vector3[20,20,20];
// then maybe
Vector3[30,30,30];
// .. and so on ..
// ...
// then
Vector3[200,200,200];
// and eventually you try
Vector3[210,210,210] // and you get an OutOfMemoryException..
If that is true, I would suggest a better allocation strategy. Try over-allocating - maybe doubling the size every time - as opposed to always allocating just the space that you need, especially if these arrays are ever used by objects that need to pin the buffers (i.e. if they have ties to native code).
So, instead of the above, have something like this:
// first start with an arbitrary size
Vector3[64,64,64];
// then double that
Vector3[128,128,128];
// and then.. so in three steps you go to where otherwise
// it would have taken you 20..
Vector3[256,256,256];
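As a sketch, that policy can be factored into a small helper. The name NextDimension and the power-of-two rounding below are illustrative assumptions, not something from the answer itself:

using System;

// Double the current dimension until the requested one fits, so the number of
// large reallocations grows logarithmically instead of once per small increment.
static int NextDimension(int current, int needed)
{
    int size = Math.Max(current, 1);
    while (size < needed)
        size *= 2;
    return size;
}

Starting from the 64-cube above, NextDimension(64, 210) returns 256, matching the three-step progression instead of twenty ten-unit increments.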
Answer 6:
They might not be getting collected because they're being referenced somewhere you're not expecting.
As a test, try changing your references to WeakReferences instead and see if that resolves your OOM problem. If it doesn't, then you're referencing them somewhere else.
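A minimal sketch of that test, assuming the array1 field from the question (WeakReference here is the standard System.WeakReference class):

using System;

// Hold the array only through a WeakReference so this code keeps nothing alive.
WeakReference array1Ref = new WeakReference(new Vector3[200, 200, 200]);

// ... later, when the array is needed again:
Vector3[,,] array1 = (Vector3[,,])array1Ref.Target;  // becomes null once collected
if (array1 == null)
{
    // The GC reclaimed the array, so no hidden reference existed. If the OOM
    // still occurs with WeakReferences, something else is keeping the arrays alive.
}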
Answer 7:
I understand what you're trying to do, and pushing for immediate garbage collection is probably not the right approach (since the GC is subtle in its ways and quick to anger).
That said, if you want that functionality, why not create it?
public static void Collect<T>(ref T o) where T : class
{
    // Generic so a Vector3[210,210,210] field can be passed by ref directly.
    o = null;        // clear the caller's reference, making the object unreachable
    GC.Collect();    // then force a collection
}
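Which you would call right before each reallocation, e.g.:

Collect(ref array1);                // array1 is now null and the old array is collectable
array1 = new Vector3[210,210,210];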
Answer 8:
A failed allocation internally triggers a GC cycle automatically and attempts the allocation again before an OutOfMemoryException is actually thrown to your code. The only way you could be getting OutOfMemoryExceptions is if you're holding references to too much memory. Clear the references as soon as you can by assigning them null.
Answer 9:
Part of the problem may be that you're allocating a multidimensional array, which is represented as a single contiguous block of memory on the large object heap (more details here). This can block other allocations when there isn't a free contiguous block to use, even if there is still some free space somewhere; hence the OOM.
Try allocating it as a jagged array - Vector3[210][210][210] - which spreads the arrays around memory rather than keeping them in a single block, and see if that improves matters.
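A minimal sketch of that allocation, using the dimensions from the question; each innermost row is only 210 * 12 = 2,520 bytes, small enough to stay off the LOH entirely:

Vector3[][][] array1 = new Vector3[210][][];
for (int x = 0; x < 210; x++)
{
    array1[x] = new Vector3[210][];
    for (int y = 0; y < 210; y++)
        array1[x][y] = new Vector3[210];  // many small allocations instead of one 111 MB block
}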
Answer 10:
John, creating objects > 85,000 bytes will make the object end up in the large object heap. The large object heap is never compacted; instead, the free space is reused.
This means that if you are allocating larger arrays every time, you can end up in situations where the LOH is fragmented; hence the OOM.
You can verify this is the case by breaking in with the debugger at the point of the OOM and getting a dump; submitting this dump to MS through a Connect bug (http://connect.microsoft.com) would be a great start.
What I can assure you is that the GC will do the right thing trying to satisfy your allocation request; this includes kicking off a GC to clean up old garbage to satisfy new allocation requests.
I don't know what the policy is on sharing memory dumps on Stack Overflow, but I would be happy to take a look to understand your problem better.