Is there any overhead in Java for using a try/catch block, as opposed to an if block (assuming the enclosed code does not otherwise require a try/catch)?
For example, take the following two simple implementations of a "safe trim" method for strings:
public String tryTrim(String raw) {
    try {
        return raw.trim();
    } catch (Exception e) {
    }
    return null;
}

public String ifTrim(String raw) {
    if (raw == null) {
        return null;
    }
    return raw.trim();
}
If the raw input is only rarely null, is there any performance difference between the two methods?
Furthermore, is it a good programming pattern to use the tryTrim() approach to simplify the layout of the code, especially when many if blocks checking rare error conditions can be avoided by enclosing the code in a single try/catch block?

For example, it is a common case to have a method with N parameters, which uses M <= N of them near its start, failing quickly and deterministically if any such parameter is "invalid" (e.g., a null or empty string), without affecting the rest of the code.

In such cases, instead of having to write k * M if blocks (where k is the average number of checks per parameter, e.g. k = 2 for null or empty strings), a try/catch block would significantly shorten the code, and a 1-2 line comment could be used to explicitly note the "unconventional" logic.

Such a pattern would also speed up the method, especially if the error conditions occur rarely, and it would do so without compromising program safety (assuming that the error conditions are "normal", e.g. as in a string-processing method where null or empty values are acceptable, albeit rarely present).
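For illustration, such a method might look like one of the following two sketches (the method name, parameters, and checks here are invented for the example):

// Explicit checks: one if block per parameter that is used up front
public String buildKey(String user, String domain, int id) {
    if (user == null) {
        return null;
    }
    if (domain == null) {
        return null;
    }
    return user.trim() + "@" + domain.trim() + "#" + id;
}

// Single try/catch covering the same rare failure conditions
public String buildKeyTry(String user, String domain, int id) {
    // user or domain may (rarely) be null; treat that as "no key"
    try {
        return user.trim() + "@" + domain.trim() + "#" + id;
    } catch (NullPointerException e) {
        return null;
    }
}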
I know you're asking about performance overhead, but you really should not use try/catch and if interchangeably. try/catch is for things that go wrong that are outside of your control and not in the normal program flow. For example, trying to write to a file and the file system is full? That situation should typically be handled with try/catch.

if statements should handle normal flow and ordinary error checking. So, for example, the user fails to populate a required input field? Use if for that, not try/catch.

It seems to me that your example code strongly suggests that the correct approach there is an if statement and not a try/catch.

To answer your question, I would surmise that there is generally more overhead in a try/catch than in an if. To know for sure, get a Java profiler and find out for the specific code you care about. It's possible that the answer may vary depending on the situation.

Use the second version. Never use exceptions for control flow when other alternatives are available, as that is not what they are there for. Exceptions are for exceptional circumstances.
While on the topic, do not catch Exception here, and especially do not swallow it. In your case, you would expect a NullPointerException. If you were to catch something, that is what you would catch (but go back to paragraph one: do not do this). When you catch (and swallow!) Exception, you are saying "no matter what goes wrong, I can handle it. I don't care what it is." Your program might be in an unrecoverable state! Only catch what you are prepared to deal with, and let everything else propagate to a layer that can deal with it, even if that layer is the top layer and all it does is log the exception and then hit the eject switch.
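A rough sketch of that last point (the class and the trivial run() body are assumptions made up for the example):

public class TopLevel {
    public static void main(String[] args) {
        try {
            run(args);               // normal program flow lives in run()
        } catch (Exception e) {
            // Top layer: log it and hit the eject switch.
            e.printStackTrace();
            System.exit(1);
        }
    }

    static void run(String[] args) {
        // Lower layers catch only what they are prepared to deal with;
        // anything unexpected (e.g. a NullPointerException from a bug)
        // propagates up to main() instead of being swallowed here.
        String first = (args.length > 0) ? args[0] : null;
        System.out.println(first.trim());
    }
}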
Otherwise, exceptions are fast to throw and catch (though an if is probably still faster); the slow part is creating the exception's stack trace, because that needs to walk through all of the current stack. (In general it's bad to use exceptions for control flow, but when that really is needed and the exceptions must be fast, it's possible to skip building the stack trace by overriding the Throwable.fillInStackTrace() method, or to save one exception instance and throw it repeatedly instead of always creating a new exception instance.)
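A minimal sketch of that technique (the exception class is invented for the example):

// A lightweight exception that skips the expensive stack-trace capture.
public class FastException extends RuntimeException {
    // One shared instance that can be thrown over and over.
    public static final FastException INSTANCE = new FastException();

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;   // do not walk the stack
    }
}

It would then be used as throw FastException.INSTANCE; at the point where the control-flow condition occurs, at the cost of a stack trace that is useless for debugging.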
This question has almost been "answered to death", but I think there are a few more points that could usefully be made:
Using try / catch for non-exceptional control flow is bad style (in Java). (There is often debate about what "non-exceptional" means ... but that's a different topic.)

Part of the reason it is bad style is that try / catch is orders of magnitude more expensive than a regular control flow statement1. The actual difference depends on the program and the platform, but I'd expect it to be 1000 or more times more expensive. Among other things, creating the exception object captures a stack trace, which involves looking up and copying information about each frame on the stack. The deeper the stack is, the more that needs to be copied.

Another part of the reason it is bad style is that the code is harder to read.
1 - The JIT in recent versions of Java 7 can optimize exception handling to drastically reduce the overheads, in some cases. However, these optimizations are not enabled by default.
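A throwaway sketch (not a rigorous benchmark) that makes the stack-depth effect visible:

public class DepthCost {
    static Object sink;   // keep the JIT from discarding the allocations

    public static void main(String[] args) {
        for (int depth : new int[] {10, 100, 1000}) {
            long t0 = System.nanoTime();
            atDepth(depth, 10000);
            long t1 = System.nanoTime();
            System.out.println("depth " + depth + ": " + (t1 - t0) / 1000000 + " ms");
        }
    }

    // Recurse down to the requested depth, then create (and discard) exceptions,
    // so that fillInStackTrace() has that many frames to record.
    static void atDepth(int depth, int count) {
        if (depth > 0) {
            atDepth(depth - 1, count);
            return;
        }
        for (int i = 0; i < count; i++) {
            sink = new Exception();
        }
    }
}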
There are also issues with the way that you've written the example:

- Catching Exception is very bad practice, because there is a chance that you will catch other unchecked exceptions by accident. For instance, if you did that around a call to raw.substring(1) you would also catch potential StringIndexOutOfBoundsExceptions ... and hide bugs. (See the sketch after this list.)

- What your example is trying to do is (probably) a result of poor practice in dealing with null strings. As a general principle, you should try to minimize the use of null strings, and attempt to limit their (intentional) spread. Where possible, use an empty string instead of null to mean "no value". And when you do have a case where you need to pass or return a null string, document it clearly in your method javadocs. If your methods get called with a null when they shouldn't ... it is a bug. Let it throw an exception. Don't try to compensate for the bug by (in this example) returning null.
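For instance, a sketch of how a broad catch can mask an unrelated bug (the method and the substring call are hypothetical):

public String tryTail(String raw) {
    try {
        // Intended only to guard against raw == null ...
        return raw.substring(1).trim();
    } catch (Exception e) {
        // ... but this also silently swallows the StringIndexOutOfBoundsException
        // thrown when raw is empty, hiding a real bug instead of reporting it.
        return null;
    }
}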
FOLLOWUP
... and most of the points in my answer are not about null values!
Yes, there are situations where null values are expected, and you need to deal with them.

But I would argue that what tryTrim() is doing is (typically) the wrong way to deal with null. Compare these three bits of code:
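A hypothetical illustration of that contrast (the request-parameter scenario, names, and exception type below are assumptions, not code from the question):

// 1. Deal with the possible null at its point of origin.
String param = request.getParameter("name");
if (param == null) {
    throw new MissingParameterException("name");   // or substitute a sensible default here
}
String name = param.trim();

// 2. Mask the null with tryTrim() and let it propagate.
String name = tryTrim(request.getParameter("name"));
// ... much later, every use of 'name' must still remember that it may be null ...

// 3. Forget about the null entirely: the classic bug.
String name = request.getParameter("name").trim();   // throws NPE when the parameter is missing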
Ultimately you have to deal with the null differently from a regular string, and it is usually a good idea to do this as soon as possible. The further the null is allowed to propagate from its point of origin, the more likely it is that the programmer will forget that a null value is a possibility, and write buggy code that assumes a non-null value. And forgetting that an HTTP request parameter could be missing (i.e. param == null) is a classic case where this happens.

I'm not saying that tryTrim() is inherently bad. But the fact that a programmer feels the need to write methods like this is probably indicative of less-than-ideal null handling.

As far as overhead goes, you can test for yourself:
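A minimal harness along those lines (the loop counts, warm-up, and plain System.nanoTime() timing are assumptions; a serious measurement should use a proper benchmarking tool):

public class TrimBenchmark {
    static final int WARMUP = 200000;
    static final int RUNS = 10000000;

    public static void main(String[] args) {
        String s = "  hello  ";
        long sink = 0;                       // consume results so the JIT cannot discard the calls

        for (int i = 0; i < WARMUP; i++) {   // let the JIT compile both methods first
            sink += tryTrim(s).length() + ifTrim(s).length();
        }

        long t0 = System.nanoTime();
        for (int i = 0; i < RUNS; i++) sink += tryTrim(s).length();
        long t1 = System.nanoTime();
        for (int i = 0; i < RUNS; i++) sink += ifTrim(s).length();
        long t2 = System.nanoTime();

        System.out.println("tryTrim: " + (t1 - t0) / 1000000 + " ms");
        System.out.println("ifTrim : " + (t2 - t1) / 1000000 + " ms");
        System.out.println("(ignore) " + sink);
    }

    static String tryTrim(String raw) {
        try {
            return raw.trim();
        } catch (Exception e) {
        }
        return null;
    }

    static String ifTrim(String raw) {
        if (raw == null) {
            return null;
        }
        return raw.trim();
    }
}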
The numbers I got are:
As far as what style to use - that is a whole separate question. The if statement looks pretty natural, but the try looks really strange, for multiple reasons:
- you caught Exception even though you are checking for a null value; are you expecting something "exceptional" to happen (otherwise, catch NullPointerException)?
- you caught that Exception; are you going to report it or swallow it? etc. etc. etc.
Edit: See my comment for why this is an invalid test, but I really didn't want to leave this standing here. Just by swapping tryTrim and ifTrim, we suddenly get the following results (on my machine):
Instead of starting to explain all of this, just read this as a starting point - Cliff also has some great slides about the whole topic, but I can't find the link at the moment.
Knowing how exception handling works in HotSpot, I'm fairly certain that in a correct test try/catch (without an exception being thrown) would be the baseline performance (because there's no overhead whatsoever), but the JIT can play some tricks with null-pointer checks followed by method invocations (no explicit check; instead, catch the hardware exception if the object is null), in which case we'd get the same result. Also don't forget: we're talking about the difference of one easily predictable if, which would be ONE CPU cycle! The trim call will cost a million times that.