When is it OK to catch an OutOfMemoryException and how to handle it?

Posted 2020-01-25 07:19

Yesterday I took part in a discussion on SO devoted to OutOfMemoryException and the pros and cons of handling it (C# try {} catch {}).

My pros for handling it were:

  • The fact that OutOfMemoryException was thrown doesn't generally mean that the state of a program was corrupted;
  • According to the documentation, "the following Microsoft intermediate language (MSIL) instructions throw OutOfMemoryException: box, newarr, newobj", which usually just means that the CLR attempted to find a block of memory of a given size and was unable to do so; it does not mean that there is not a single byte left at our disposal;

But not everyone agreed with that; they argued that the program is in an unknown state after this exception and that it is impossible to do anything useful anyway, since doing so would require even more memory.

Therefore my question is: what are the serious reasons not to handle OutOfMemoryException and immediately give up when it occurs?
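To make the scenario concrete, this is the kind of handler under discussion; the buffer size and names are purely illustrative:

```csharp
using System;

class LargeAllocationDemo
{
    static void Main()
    {
        int[] buffer = null;
        try
        {
            // Ask the CLR for one contiguous ~1 GB block (a newarr instruction under the hood).
            buffer = new int[256 * 1024 * 1024];
        }
        catch (OutOfMemoryException)
        {
            // The allocation failed, but plenty of memory may still be available
            // for smaller objects -- this is exactly the situation in question.
            Console.Error.WriteLine("Could not allocate the large buffer; falling back.");
        }

        Console.WriteLine(buffer == null
            ? "Running with the smaller fallback path."
            : "Large buffer allocated.");
    }
}
```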

Edited: Do you think that OOME is as fatal as ExecutionEngineException?

Tags: c# .net
10 Answers
太酷不给撩
Reply #2 · 2020-01-25 07:33

One practical reason for catching this exception is to attempt a graceful shutdown, with a friendly error message instead of an exception trace.
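A minimal sketch of what that could look like in a console application; the message is pre-allocated up front so that reporting the failure does not itself require a new allocation:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // Pre-allocated so the handler does not need fresh memory to report the failure.
    private static readonly string OomMessage =
        "The application ran out of memory and has to close. Please report this to support.";

    static void Main()
    {
        try
        {
            RunApplication();
        }
        catch (OutOfMemoryException)
        {
            Console.Error.WriteLine(OomMessage);
            Environment.Exit(1); // known exit code instead of a crash dialog or raw stack trace
        }
    }

    static void RunApplication()
    {
        // Placeholder for the real work; this deliberately exhausts memory for the demo.
        var blocks = new List<byte[]>();
        while (true) blocks.Add(new byte[64 * 1024 * 1024]);
    }
}
```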

爱情/是我丢掉的垃圾
Reply #3 · 2020-01-25 07:35

IMO, since you can't predict what you can/can't do after an OOM (so you can't reliably process the error), or what else did/didn't happen when unrolling the stack to where you are (so the BCL hasn't reliably processed the error), your app must now be assumed to be in a corrupt state. If you "fix" your code by handling this exception you are burying your head in the sand.

I could be wrong here, but to me this message says BIG TROUBLE. The correct fix is to figure out why you have chomped through memory, and address that (for example, have you got a leak? could you switch to a streaming API?). Even switching to x64 isn't a magic bullet here; arrays (and hence lists) are still size-limited, and the increased reference size means you can fit numerically fewer references within the 2GB object cap.

If you need to chance processing some data, and are happy for it to fail: launch a second process (an AppDomain isn't good enough). If it blows up, tear down the process. Problem solved, and your original process/AppDomain is safe.
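A rough sketch of that isolation pattern, assuming a hypothetical worker executable called RiskyWorker.exe that does the memory-hungry processing:

```csharp
using System;
using System.Diagnostics;

class Launcher
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "RiskyWorker.exe",       // hypothetical child process doing the heavy lifting
            Arguments = "input.dat output.dat", // illustrative arguments
            UseShellExecute = false
        };

        using (var worker = Process.Start(psi))
        {
            worker.WaitForExit();

            // If the child ran out of memory and died, only the child is gone;
            // this process (and its state) is untouched.
            if (worker.ExitCode != 0)
                Console.Error.WriteLine("Worker failed (exit code {0}); skipping that data set.",
                                        worker.ExitCode);
        }
    }
}
```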

虎瘦雄心在
Reply #4 · 2020-01-25 07:44

It all depends on the situation.

Quite a few years ago now, I was working on a real-time 3D rendering engine. At the time we loaded all the geometry for the model into memory on start-up, but only loaded the texture images when we needed to display them. This meant that when the day came that our customers were loading huge (2GB) models, we were able to cope: the geometry occupied less than 2GB, but once all the textures were added the total would exceed 2GB. By trapping the out-of-memory error raised when we tried to load a texture, we were able to carry on displaying the model, just as plain geometry.

We still had a problem if the geometry was > 2GB, but that was a different story.

Obviously, if you get an out of memory error with something fundamental to your application then you've got no choice but to shut down - but do that as gracefully as you can.
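A much-simplified sketch of the idea; the names here are invented for illustration, not the engine's actual code:

```csharp
using System;
using System.IO;

static class TextureLoader
{
    // Returns the raw texture bytes, or null if there was not enough memory.
    // The caller renders plain (untextured) geometry when null comes back.
    public static byte[] TryLoadTexture(string path)
    {
        try
        {
            return File.ReadAllBytes(path);
        }
        catch (OutOfMemoryException)
        {
            // Not enough memory for this texture: degrade to untextured rendering
            // instead of tearing the whole viewer down.
            return null;
        }
    }
}
```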

Bombasti
Reply #5 · 2020-01-25 07:45

The problem is larger than .NET. Almost any application written from the fifties to now has big problems if no memory is available.

Virtual address spaces have sort of mitigated the problem but NOT solved it, because even an address space of 2GB or 4GB may become too small. There are no commonly available patterns for handling out-of-memory. What we would need is something like an out-of-memory warning method, or a panic method that is guaranteed to still have some memory available.

If you receive an OutOfMemoryException from .NET, almost anything may be the case: 2 MB might still be available, or just 100 bytes. I wouldn't want to catch this exception (except to shut down without a failure dialog). We need better concepts; then you might get a MemoryLowException where you CAN react to all sorts of situations.
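The closest thing .NET currently offers to such a concept is System.Runtime.MemoryFailPoint, which lets you ask up front whether an operation of a given size is likely to succeed; a minimal sketch, with an arbitrary 256 MB figure:

```csharp
using System;
using System.Runtime;

class MemoryCheckDemo
{
    static void Main()
    {
        try
        {
            // Ask whether roughly 256 MB is likely to be available before starting the work.
            using (new MemoryFailPoint(256))
            {
                RunMemoryHungryOperation();
            }
        }
        catch (InsufficientMemoryException)
        {
            // We found out *before* touching any state that memory is too tight.
            Console.Error.WriteLine("Not enough memory for this operation; try again later.");
        }
    }

    static void RunMemoryHungryOperation()
    {
        var buffer = new byte[200 * 1024 * 1024]; // illustrative large allocation
        Console.WriteLine("Processed a {0} MB buffer.", buffer.Length / (1024 * 1024));
    }
}
```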

闹够了就滚
Reply #6 · 2020-01-25 07:49

I suggest reading Christopher Brumme's comment in "Framework Design Guidelines", p. 238 (section 7.3.7, OutOfMemoryException):

At one end of the spectrum, an OutOfMemoryException could be the result of a failure to obtain 12 bytes for implicitly autoboxing, or a failure to JIT some code that is required for critical backout. These cases are catastrophic failures and ideally would result in termination of the process. At the other end of the spectrum, an OutOfMemoryException could be the result of a thread asking for a 1 GB byte array. The fact that we failed this allocation attempt has no impact on the consistency and viability of the rest of the process.

The sad fact is that CLR 2.0 cannot distinguish among any points on this spectrum. In most managed processes, all OutOfMemoryExceptions are considered equivalent and they all result in a managed exception being propagated up the thread. However, you cannot depend on your backout code being executed, because we might fail to JIT some of your backout methods, or we might fail to execute static constructors required for backout.

Also, keep in mind that all other exceptions can get folded into an OutOfMemoryException if there isn't enough memory to instantiate those other exception objects. Also, we will give you a unique OutOfMemoryException with its own stack trace if we can. But if we are tight enough on memory, you will share an uninteresting global instance with everyone else in the process.

My best recommendation is that you treat OutOfMemoryException like any other application exception. You make your best attempts to handle it and remain consistent. In the future, I hope the CLR can do a better job of distinguishing a catastrophic OOM from the 1 GB byte array case. If so, we might provoke termination of the process for the catastrophic cases, leaving the application to deal with the less risky ones. By treating all OOM cases as the less risky ones, you are preparing for that day.
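As an illustration of the "less risky" end of that spectrum, here is a sketch of treating a single large allocation failure like any other application exception and falling back to a smaller request; the helper is invented for this example:

```csharp
using System;

static class BufferAllocator
{
    // Tries to get the requested buffer; on failure, retries with progressively smaller sizes.
    // This only makes sense for the "1 GB byte array" end of Brumme's spectrum --
    // the catastrophic cases he describes may never even reach this catch block.
    public static byte[] AllocateWithFallback(int preferredBytes, int minimumBytes)
    {
        for (int size = preferredBytes; size >= minimumBytes && size > 0; size /= 2)
        {
            try
            {
                return new byte[size];
            }
            catch (OutOfMemoryException)
            {
                // This particular allocation failed; a smaller one may well succeed.
            }
        }
        throw new OutOfMemoryException(); // even the minimum size could not be satisfied
    }
}
```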

来,给爷笑一个
Reply #7 · 2020-01-25 07:50

Marc Gravell has already provided an excellent answer; seeing as how I partly "inspired" this question, I would like to add one thing:

One of the core principles of exception handling is never to throw an exception inside an exception handler. (Note - re-throwing a domain-specific and/or wrapped exception is OK; I am talking about an unexpected exception here.)

There are all sorts of reasons why you need to prevent this from happening:

  • At best, you mask the original exception; it becomes impossible to know for sure where the program originally failed.

  • In some cases, the runtime may simply be unable to handle an unhandled exception in an exception handler (say that 5 times fast). In ASP.NET, for example, installing an exception handler at certain stages of the pipeline and failing in that handler will simply kill the request - or crash the worker process, I forget which.

  • In other cases, you may open yourself up to the possibility of an infinite loop in the exception handler. This may sound like a silly thing to do, but I have seen cases where somebody tries to handle an exception by logging it, and when the logging fails... they try to log the failure. Most of us probably wouldn't deliberately write code like this, but depending on how you structure your program's exception handling, you can end up doing it by accident (a simple re-entrancy guard, sketched below, avoids that loop).
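For completeness, this is the sort of guard I mean; SafeLogger is an invented name, not a standard class:

```csharp
using System;

static class SafeLogger
{
    [ThreadStatic] private static bool _isLogging; // guards against re-entrant logging on this thread

    public static void Log(string message)
    {
        if (_isLogging) return; // a Log call on this thread is already in progress; bail out

        _isLogging = true;
        try
        {
            System.IO.File.AppendAllText("app.log", message + Environment.NewLine);
        }
        catch (Exception)
        {
            // Swallow: trying to "log the logging failure" is exactly the loop to avoid.
        }
        finally
        {
            _isLogging = false;
        }
    }
}
```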

So what does this have to do with OutOfMemoryException specifically?

An OutOfMemoryException doesn't tell you anything about why the memory allocation failed. You might assume that it was because you tried to allocate a huge buffer, but maybe it wasn't. Maybe some other rogue process on the system has literally consumed all of the available address space and you don't have a single byte left. Maybe some other thread in your own program went awry and went into an infinite loop, allocating new memory on each iteration, and that thread has long since failed by the time the OutOfMemoryException ends up on your current stack frame. The point is that you don't actually know just how bad the memory situation is, even if you think you do.

So start thinking about this situation now. Some operation just failed at an unspecified point deep in the bowels of the .NET framework and propagated up an OutOfMemoryException. What meaningful work can you perform in your exception handler that does not involve allocating more memory? Write to a log file? That takes memory. Display an error message? That takes even more memory. Send an alert e-mail? Don't even think about it.

If you try to do these things - and fail - then you'll end up with non-deterministic behaviour. You'll possibly mask the out-of-memory error and get mysterious bug reports with mysterious error messages bubbling up from all kinds of low-level components you wrote that aren't supposed to be able to fail. Fundamentally, you've violated your own program's invariants, and this is going to be a nightmare to debug if your program ever does end up running under low-memory conditions.

One of the arguments presented to me before was that you might catch an OutOfMemoryException and then switch to lower-memory code, such as a smaller buffer or a streaming model. However, handling an exception that you fully expect to occur ("expection handling") is a well-known anti-pattern. If you know you're about to chew up a huge amount of memory and aren't sure whether the system can handle it, then check the available memory, or better yet, just refactor your code so that it doesn't need so much memory all at once. Don't rely on the OutOfMemoryException to do it for you, because - who knows - maybe the allocation will just barely succeed and trigger a bunch of out-of-memory errors immediately after your exception handler (possibly in some completely different component).
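For example, instead of catching OutOfMemoryException around a whole-file read, a streaming version processes a bounded chunk at a time and never needs the giant buffer in the first place (a sketch; ProcessChunk is a hypothetical placeholder):

```csharp
using System.IO;

static class StreamingProcessor
{
    // Processes the file in fixed-size chunks instead of loading it whole,
    // so peak memory stays bounded regardless of the file's size.
    public static void Process(string path)
    {
        var buffer = new byte[64 * 1024]; // 64 KB working buffer
        using (var stream = File.OpenRead(path))
        {
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                ProcessChunk(buffer, read); // hypothetical per-chunk work
            }
        }
    }

    static void ProcessChunk(byte[] data, int count)
    {
        // Placeholder for the real per-chunk processing.
    }
}
```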

So my simple answer to this question is: Never.

My weasel-answer to this question is: It's OK in a global exception handler, if you're really really careful. Not in a try-catch block.
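To be explicit about what I mean by a global handler, it is something along these lines, and it only ever reports and exits, never tries to recover (a sketch):

```csharp
using System;

static class GlobalHandlerSetup
{
    // Call once at startup, e.g. at the top of Main().
    public static void Install()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            if (args.ExceptionObject is OutOfMemoryException)
            {
                // Report and get out; do NOT try to resume normal work from here.
                Console.Error.WriteLine("Fatal: out of memory. Shutting down.");
                Environment.Exit(2);
            }
            // Other exception types fall through to the default fail-fast behaviour.
        };
    }
}
```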
