Garbage Collection every 100 seconds

Published 2019-03-11 13:14

Question:

Has anyone encountered a scenario where an application under high memory-allocation load performed a generation 2 collection every 100 seconds?

We are using a 64-bit server with 8-16 GB of physical memory.

The application has several GB of data stored in a cache that can't be cleared, because it is actually used by the application. In addition, it receives a lot of requests that allocate Gen 0 objects during processing.

What seems odd to me is that the Gen 2 collection happens every 100 seconds like clockwork. I was expecting it to be less predictable.

Answer 1:

If you are under high memory load, and using a lot of objects, then yes: GC will get busy... if it is hitting gen-2, then it sounds like you've got a lot of mid/long-life objects hanging around...

I'm assuming that memory usage is fairly stable? The above could indicate some kind of pseudo-leak (perhaps holding onto too many objects via static events, etc), or could just mean that you have a high memory usage!

How much memory are you using? Could you consider x64 and a ton of memory? Alternatively, would the 3gb switch (x86) buy you a few more bytes?



Answer 2:

If you are running a dual-core (or better) CPU, try enabling server GC by setting <gcServer enabled="true"/> in the app/web.config.

In my case, it performs about 10% of the original GCs and the application feels a lot snappier.
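For reference, server GC is enabled via the runtime section of the configuration file (the element is `gcServer`, lowercase, under `<runtime>`):

```xml
<configuration>
  <runtime>
    <!-- Server GC: one GC heap and one GC thread per logical core -->
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```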



Answer 3:

A generation 2 collection occurring like clockwork would suggest that either the GC.Collect method is being called like clockwork, or the allocation pattern is like clockwork.

The randomness you expect to see in garbage collection is not likely to happen unless the allocations, or the GC.Collect calls, are truly random.

Given that your server application is under such high load, and you create new objects during processing, I would seriously consider refactoring the code to see if fewer objects could be newly created during processing, by using object pools for example.

An object pool differs from most garbage collectors in that pooled objects can be reused as soon as they are returned to the pool, whereas the garbage collector must perform a collection before the memory previously holding an object can be reused for a different object.
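A minimal sketch of such a pool in C# (the `ConcurrentBag` backing store is one simple choice; the type names and usage pattern here are illustrative, not from the original question):

```csharp
using System.Collections.Concurrent;

// Minimal object pool: reuses instances instead of allocating new ones,
// so fewer objects are created per request and less garbage is produced.
public class ObjectPool<T> where T : new()
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();

    public T Rent()
    {
        // Reuse a pooled instance if one is available; otherwise allocate.
        return _items.TryTake(out T item) ? item : new T();
    }

    public void Return(T item)
    {
        // The instance is immediately reusable -- no collection required.
        _items.Add(item);
    }
}

// Typical per-request usage:
//   var buffer = pool.Rent();
//   ... process request ...
//   pool.Return(buffer);
```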



Answer 4:

You are probably generating more objects than can fit in the young heap all at once, or you have a memory leak.

If you are intensively creating a lot of objects that need to be alive all at the same time, then you are probably overflowing the young section and some of the objects have to be copied to an older generation. This forces full collections more often as the older heap fills up. You probably need to find a way to allocate fewer objects in this case unless there is a way to request a larger young heap like there is with the Sun JVM.

If you actually store the objects somewhere (say, in a list owned by an old object), then you have a logical leak. In general, you don't want old objects referring to young ones; this tends to get the young objects promoted, and GC algorithms are generally optimized for that not happening. Also, you may want to consider clearing references if doing so significantly shortens the scope in which an object can stay alive (although it is usually superfluous).

Barring that, if you simply have unusually high memory usage, there probably isn't a whole lot you can do. Remember that any long-running program will eventually have to do some GC, and the more memory you need at a time, the more often it happens.



Answer 5:

For that to happen, memory use would have to be very consistent, for both the process and the system as a whole. Garbage collection is triggered by any of these events:

  • Generation 0's budget is full
  • GC.Collect() is called
  • CLR wants to free memory
  • AppDomain shutdown
  • CLR shutdown

The likely candidates in your case are probably regular collection (i.e. due to allocation) or a timed Collect().

EDIT: Just to clarify about allocations. Allocation of regular objects always happen in generation 0 (exception is large objects of 85000 bytes or more, which are allocated on the large object heap). Instances are only moved to generation 1 and 2, when they survive a collection. There are no direct allocations in generation 1 and 2.

Also, generation 2 collection (also known as a full collect) is performed when generation 0 / 1 collections do not free sufficient memory (or when a full collect is explicitly requested).
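The promotion behavior described above can be observed from code with the standard GC.GetGeneration and GC.CollectionCount APIs; a small diagnostic sketch:

```csharp
using System;

class GcObservation
{
    static void Main()
    {
        var obj = new object();
        // Freshly allocated regular objects start in generation 0.
        Console.WriteLine(GC.GetGeneration(obj));

        // Force a full collection (diagnostics only -- avoid in production).
        GC.Collect();
        // Having survived a collection, the object is promoted.
        Console.WriteLine(GC.GetGeneration(obj));

        // Per-generation collection counts since process start; watching
        // these over time reveals a clockwork Gen 2 cycle.
        Console.WriteLine(GC.CollectionCount(0));
        Console.WriteLine(GC.CollectionCount(1));
        Console.WriteLine(GC.CollectionCount(2));
    }
}
```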



Answer 6:

I'm assuming you are using .NET as well. I'm not sure what tools you are using, but I'm a huge fan of Red Gate's Ants profiler. I use it at work. It can identify which objects are hogging resources. Once you narrow it down, hopefully, you can find the offending code and free up resources properly.

Check your code and make sure you're calling Dispose() whenever possible.

Let us know how it goes.



Answer 7:

Since you are running on a Server, I am assuming it is a multicore machine as well. If this is the case, then you get Server GC flavor by default, so you don't need to set anything in your config file.

The fact that you are getting a Gen 2 collection every 100 seconds is a function of your allocation pattern and object-lifetime pattern. If your allocation pattern is consistent, you will get consistent GCs. You can verify this behavior by looking at the .NET CLR Memory performance counters in perfmon.

You will need to track the following metrics:

  1. Gen 0 Collections
  2. Gen 1 Collections
  3. Gen 2 Collections
  4. Allocated Bytes/sec
  5. Bytes in all Heaps

You should see the last metric move like a sawtooth: increasing until a Gen 2 collection kicks in, decreasing again, and then the cycle repeats.
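These counters can also be watched from the command line with typeperf (the instance name, e.g. w3wp for an IIS worker process, depends on your application):

```shell
typeperf "\.NET CLR Memory(w3wp)\# Gen 2 Collections" ^
         "\.NET CLR Memory(w3wp)\# Bytes in all Heaps" ^
         "\.NET CLR Memory(w3wp)\Allocated Bytes/sec" -si 5
```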

To avoid this, you will need to check:

  • Can you cache objects between requests, in pools? This would avoid GC altogether.
  • If not, can you decrease the number of objects you allocate per request?

Hope this helps. Thanks



Answer 8:

I assume this is for .NET.

The GC collects when it decides to, based on its algorithm. You can suggest that the garbage collector collect, but it may not actually do anything.

You can use GC.Collect() to ask the GC to check whether garbage can be collected. However, it may not actually remove items from memory.

NOTE: Also, make sure you are clearing references correctly, meaning unhooking events and clearing references between objects. This will help the GC in collecting objects that are no longer in scope.
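Unhooking events matters because a long-lived publisher keeps every subscriber reachable through its delegate list. A sketch of the pattern (the `Cache`/`RequestHandler` names are illustrative):

```csharp
using System;

class Cache // long-lived
{
    public event EventHandler ItemEvicted;
}

class RequestHandler // short-lived, created per request
{
    private readonly Cache _cache;

    public RequestHandler(Cache cache)
    {
        _cache = cache;
        _cache.ItemEvicted += OnItemEvicted; // cache now references this handler
    }

    private void OnItemEvicted(object sender, EventArgs e) { /* ... */ }

    public void Close()
    {
        // Without this, the long-lived Cache keeps every RequestHandler
        // reachable, so they pile up in Gen 2 instead of being collected.
        _cache.ItemEvicted -= OnItemEvicted;
    }
}
```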



Answer 9:

That the garbage collector is being invoked frequently is not in itself necessarily a huge issue; it could, however, be a flag that you are not handling your memory optimally (for instance, not passing massive strings by reference into methods).

Garbage collection should be non-deterministic. That noted, if you are running a number of critical tasks, but your thread(s) are sleeping at some juncture (like every 100 seconds) it is reasonable that the garbage collector may take the opportunity to collect at that point. More likely is that the consumption due to allocation peaks at more-or-less regular intervals and the garbage collector is invoked to retrieve unused memory.

I highly suggest profiling the memory consumption of your application.



Answer 10:

Could it just be your application creating a huge object every 100 seconds, forcing the GC to do its work?



Answer 11:

I've seen this 100-second frequency too; it doesn't happen on all production setups, but I've seen it locally and on other setups.



Answer 12:

I do not understand what is causing the "second generation collection every 100 seconds"; it is very rare to see a real-life system do anything on such a clockwork cycle.

If you are under high memory load and using a lot of objects, then yes: the GC will get busy. If it is hitting Gen 2, it sounds like you've got a lot of mid/long-life objects hanging around. You are probably generating more objects than can fit in the young heap all at once, or you have a memory leak.

Assuming you don't have a leak, have you checked with a memory profiler? I am also assuming you are not creating a lot of unnecessary garbage (e.g. string1 += string2 inside a loop).
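To illustrate that last point: repeated concatenation allocates a new, progressively larger string on every iteration, whereas StringBuilder grows one internal buffer:

```csharp
using System.Text;

// Allocates a new string object on every iteration -- lots of garbage:
string s = "";
for (int i = 0; i < 1000; i++)
    s += i.ToString();

// Reuses one internal buffer, producing far less garbage:
var sb = new StringBuilder();
for (int i = 0; i < 1000; i++)
    sb.Append(i);
string result = sb.ToString();
```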

I can think of two things that may help.

By limiting the number of requests (threads) ASP.NET processes at the same time, you may limit the number of live objects and also speed up the processing of a single request, thus not keeping objects alive for as long. (Are you getting a lot of thread context switches?)

If you are storing objects in the ASP.NET cache and/or the ASP.NET session, you could try using an out-of-process store for this cached information, e.g. the ASP.NET session state server, a 3rd-party session server, memcached, or the recently released cache server from Microsoft (Velocity). Just rereading the data from the database when you need it may be better than storing it in long-lived objects.

Failing the above, how much memory are you using? Could you consider x64 and a ton of memory? Or a web farm...