Question:
First of all, a disclaimer:
I have experience in other languages, but am still learning the subtleties of C#
On to the problem... I am looking at some code which uses try/catch blocks in a way that concerns me. When a parsing routine is called, rather than return an error code, the programmer used the following logic:
catch (TclException e) {
    throw new TclRuntimeError("unexpected TclException: " + e.Message, e);
}
This is caught by the caller, which throws the same error ...
... which is caught by the caller, which throws the same error ...
..... which is caught by the caller, which throws the same error ...
back up about 6 levels.
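To make the shape concrete, here is a rough sketch of what that chain looks like (the method names are invented for illustration and are not from the actual code; it also assumes, as the description implies, that the wrapped error is itself caught again at the next level up):
object ParseCommand(string script)
{
    try
    {
        return ParseWord(script);   // which wraps the next level in the same way, and so on
    }
    catch (TclException e)
    {
        // identical wrap-and-rethrow at every level, about six deep
        throw new TclRuntimeError("unexpected TclException: " + e.Message, e);
    }
}

object ParseWord(string script)
{
    try
    {
        return ParseToken(script);
    }
    catch (TclException e)
    {
        throw new TclRuntimeError("unexpected TclException: " + e.Message, e);
    }
}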
Am I right in thinking all these catch/throw blocks are causing a performance problem, or is this a reasonable implementation under C#?
Answer 1:
Throwing (rather than catching) is expensive.
Don't put a catch block in unless you are going to do something useful with it (e.g. convert it to a more useful exception, or handle the error).
Just rethrowing the exception (a throw statement with no argument) or, even worse, throwing the same object you just caught, is definitely the wrong thing.
EDIT: To avoid ambiguity:
Rethrow:
catch (SomeException) {
    throw;
}
Throwing the caught exception object again, which overwrites the runtime-provided state (notably the stack trace):
catch (SomeException e) {
    throw e;
}
The latter case is a pointless way to throw away information about the exception, and doing it with nothing else preceding the throw in the catch block is doubly pointless. It can be even worse:
catch (SomeException e) {
    throw new SomeException(e.Message);
}
which loses almost all the useful state information that e contained (including what was originally thrown).
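By contrast, when you genuinely need to translate an exception at an abstraction boundary, the usual pattern (sketched here with a hypothetical MoreSpecificException type) is to pass the original as the inner exception so none of its state is lost:
catch (SomeException e)
{
    // passing e as the inner exception keeps the original type, message and
    // stack trace reachable through the InnerException property
    throw new MoreSpecificException("what this layer was doing when it failed", e);
}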
Answer 2:
It's a bad design under any language.
Exceptions are designed to be caught at the level at which you can deal with them. Catching an exception just to throw it again is a pointless waste of time (it also causes you to lose valuable information about the location of the original error).
Apparently, the guy who wrote that code used to use error codes and then switched to exceptions without actually understanding how they work. Exceptions automatically "bubble up" the stack if there is no catch at a given level.
Also, note that exceptions are for exceptional occurrences: things that should never happen. They should not be used in place of normal validity checking (i.e., don't catch a divide-by-zero exception; check whether the divisor is zero beforehand).
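For example, a minimal sketch of checking the condition up front instead of catching DivideByZeroException:
// Check the predictable failure condition rather than catching the exception.
static int SafeDivide(int numerator, int divisor)
{
    if (divisor == 0)
    {
        // the "expected" case is handled as ordinary control flow
        return 0;
    }
    return numerator / divisor;
}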
Answer 3:
According to MSDN's Performance Tips and Tricks, you can use try and catch without any performance penalty until an actual throw occurs.
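A crude way to see this for yourself (a rough sketch, not a rigorous benchmark; timings will vary by machine and build settings):
using System;
using System.Diagnostics;

class ExceptionCostDemo
{
    static void Main()
    {
        // Loop with a try/catch that never throws.
        var sw = Stopwatch.StartNew();
        long sum = 0;
        for (int i = 0; i < 10000000; i++)
        {
            try { sum += i; } catch (Exception) { /* never reached */ }
        }
        sw.Stop();
        Console.WriteLine("try/catch with no throw: " + sw.ElapsedMilliseconds + " ms (sum=" + sum + ")");

        // Loop that actually throws and catches each time.
        sw.Restart();
        for (int i = 0; i < 10000; i++)   // far fewer iterations, because throwing is slow
        {
            try { throw new InvalidOperationException(); }
            catch (InvalidOperationException) { /* swallowed just for the measurement */ }
        }
        sw.Stop();
        Console.WriteLine("throw + catch, 10000 times: " + sw.ElapsedMilliseconds + " ms");
    }
}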
Answer 4:
In general, throwing an exception is costly in .NET. Simply having a try/catch/finally block is not. So, yes, the existing code is bad from a performance perspective because when it does throw, it throws 5-6 bloated exceptions without adding any value over simply letting the original exception naturally bubble up 5-6 stack frames.
Worse yet, the existing code is really bad from a design perspective. One of the main benefits of exception handling (as compared to returning error codes) is that you don't need to check for exceptions/return codes everywhere in the call stack. You only need to catch them at the small number of places where you actually want to handle them. Ignoring an exception (unlike ignoring a return code) doesn't ignore or hide the problem; it just means it will be handled higher up the call stack.
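As a sketch of what that looks like in practice (the method names here are hypothetical), the intermediate layers contain no try/catch at all, and a single handler sits at the one place that can actually deal with the failure:
void RunScript(string script)
{
    try
    {
        Evaluate(script);   // may be many frames deep
    }
    catch (TclException e)
    {
        Console.Error.WriteLine("Script failed: " + e.Message);
    }
}

void Evaluate(string script)
{
    // no try/catch here: a TclException thrown by Parse simply bubbles up
    Parse(script);
}

void Parse(string script)
{
    // no try/catch here either; parsing work that may throw a TclException
}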
Answer 5:
That's not terrible in and of itself, but one should really watch the nesting.
The acceptable use case is as follows:
I am a low-ish level component that may encounter a number of different errors, but my consumer is only interested in a specific type of exception. Therefore I may do this:
catch (IOException ex)
{
    throw new PlatformException("some additional context", ex);
}
Now this allows the consumer to do:
try
{
    component.TryThing();
}
catch (PlatformException ex)
{
    // handle error
}
Yes, I know some people will say the consumer should catch the IOException, but this depends on how abstract the consuming code actually is. What if the implementation is saving something to disk and the consumer has no valid reason to think their operation will touch the disk? In such a scenario it would make no sense to put this exception handling in the consuming code.
What we are generally trying to avoid by using this pattern is placing a "catch-all" exception handler in business logic code, because we want to find out about all the possible types of exceptions: they might point to a more fundamental problem that needs to be investigated.
If we don't catch, the exception bubbles up, hits the "top top" level handler and should halt the app from going any further. This means the customer will report that exception and you will get a chance to look into it. This is important when you're trying to build robust software: you need to find all these error cases and write specific code to handle them.
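A minimal sketch of such a "top top" level handler in a console application (here it just writes a report before the process stops; a real app would send it to logging or error reporting):
using System;

static class Program
{
    static void Main()
    {
        // Last-chance handler: anything that reaches the top gets reported,
        // and the runtime then terminates the process.
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            Console.Error.WriteLine("Unhandled exception: " + args.ExceptionObject);
        };

        RunApplication();   // hypothetical entry point for the real work
    }

    static void RunApplication()
    {
        // business logic with no catch-all handlers of its own
    }
}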
What isn't very pretty is nesting these to an excessive degree, and that is the problem you should solve in this code.
As another poster stated, exceptions are for reasonably exceptional behaviour, but don't take this too far. Basically, the code should express the "normal" operation, and exceptions should handle potential problems you may come across.
In terms of performance, exceptions are fine. You will get horrible results if you run a performance test with a debugger attached to an embedded device, but in a release build without a debugger they are actually pretty quick.
The main thing people forget when discussing the performance of exceptions is that in the error cases everything slows down anyway, because the user has encountered a problem. Do we really care about speed when the network is down and the user can't save their work? I very much doubt that getting the error report back to the user a few milliseconds sooner is going to make a difference.
The main guideline to remember when discussing exceptions is that exceptions should not occur within the normal application flow (normal meaning no errors). Everything else stems from that statement.
In the exact example you give, however, I'm unsure. It seems to me that no real benefit is gained from wrapping what appears to be a generic Tcl exception in another generic-sounding Tcl exception. If anything, I would suggest tracking down the original author of the code and learning whether there is any particular logic behind his thinking. Chances are, though, that you could just kill the catch.
Answer 6:
Try/catch/throw is slow. A better implementation is to check the value up front rather than throwing and catching, but if you absolutely can't continue, you are better off throwing, and catching only where it counts. Otherwise, checking and logging is more efficient.
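For example, the framework's own Try pattern lets you check rather than throw and catch (a minimal sketch):
// Prefer a check (int.TryParse) over catching the FormatException from int.Parse.
string input = "123a";
if (int.TryParse(input, out int value))
{
    Console.WriteLine("Parsed: " + value);
}
else
{
    // handle (or log) the bad input without any exception being thrown
    Console.WriteLine("Not a valid number: " + input);
}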
Answer 7:
If every layer of the stack just rethrows the same type with the same information, adding nothing new, then it's completely absurd.
If it was happening at the boundary between independently developed libraries, it's understandable. Sometimes a library author wants to control what exceptions escape from their library, so that they can change their implementation later without having to figure out how to simulate the previous version's exception-throwing behaviour.
It's generally a bad idea to catch and re-throw under any circumstances without a very good reason. This is because as soon as a catch block is found, all the finally blocks between the throw and the catch are executed. This should only happen if the exception can be recovered from. It's okay in this case because a specific type is being caught, so the author of the code (hopefully) knows that they can safely undo any of their own internal state changes in response to that specific exception type.
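A small sketch of that unwinding behaviour: every finally block between the throw and the matching catch runs before the handler's body does, so the output below is 1, 2, 3 in that order:
static void Outer()
{
    try
    {
        Middle();
    }
    catch (InvalidOperationException)
    {
        Console.WriteLine("3: handled in Outer");
    }
}

static void Middle()
{
    try
    {
        Console.WriteLine("1: about to throw");
        throw new InvalidOperationException();
    }
    finally
    {
        // runs while the exception is propagating, before Outer's catch body
        Console.WriteLine("2: finally in Middle");
    }
}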
So those try/catch blocks potentially have a cost at design time - they make the program messier. But at runtime they will only impose a significant cost when the exception is thrown, because the transmission of the exception up the stack has been made more complicated.
Answer 8:
Exceptions are slow; try not to use them.
See the answer I gave here.
Basically, Chris Brumme (who was on the CLR team) said they are implemented as SEH exceptions, so you take a big hit when they are thrown and have to suffer the penalties as they bubble up through the OS stack. It's a truly excellent article that goes in depth into what happens when you throw an exception, e.g.:
Of course, the biggest cost of exceptions is when you actually throw one. I’ll return to this near the end of the blog.

Performance. Exceptions have a direct cost when you actually throw and catch an exception. They may also have an indirect cost associated with pushing handlers on method entry. And they can often have an insidious cost by restricting codegen opportunities. However, there is a serious long term performance problem with exceptions and this must be factored into your decision.

Consider some of the things that happen when you throw an exception:

- Grab a stack trace by interpreting metadata emitted by the compiler to guide our stack unwind.
- Run through a chain of handlers up the stack, calling each handler twice.
- Compensate for mismatches between SEH, C++ and managed exceptions.
- Allocate a managed Exception instance and run its constructor. Most likely, this involves looking up resources for the various error messages.
- Probably take a trip through the OS kernel. Often take a hardware exception.
- Notify any attached debuggers, profilers, vectored exception handlers and other interested parties.

This is light years away from returning a -1 from your function call. Exceptions are inherently non-local, and if there’s an obvious and enduring trend for today’s architectures, it’s that you must remain local for good performance.
Some people will claim that exceptions are not a problem, have no performance issues and are generally a good thing. These people get lots of upvotes, but they are just plain wrong. I've seen Microsoft staff make the same claims (usually with a level of technical knowledge better suited to the marketing department), but the bottom line from the horse's mouth is to use them sparingly.
The old adage, exceptions should only be used for exceptional circumstances, is as true for C# as it is for any other language.