Always throwing the same exception instance in Java

I have always been told that exception handling in Java is quite expensive.
I'm asking whether it is good practice to create an exception instance of a specific type at the beginning of the program and then, instead of creating a new one each time, always throw that same exception object.
I just want to give an example. Common code:
if (!checkSomething(myObject))
throw new CustomException("your object is invalid");
alternative:
static CustomException MYEXP = new CustomException("your object is invalid");
//somewhere else
if (!checkSomething(myObject))
throw MYEXP;
Of course, I'm making some assumptions here:
MyCustomException has no parameters
client code, whether this is good practice or not, is heavily based on exception handling, and refactoring is not an option.
So the questions are:
Is this a good practice?
Does this damage some JVM mechanism?
If 1 is yes, is there the possibility of a performance gain? (I think not, but I'm not sure)
If 1 and 3 are yes, why is it not promoted as a practice?
If 1 is no, why did Martin Odersky say in his introduction to Scala that this is how Scala works in some cases? (At minute 28:30 he says that break is implemented as throwing an exception; the audience objects that this is time consuming, and he replies that the exception is not created every time.) Fosdem 2009
I hope this is not an idle/stupid question; I'm just curious about this. I think the real cost in exception handling is the handling, not the creation.
edit
Added a reference to the precise discussion in the FOSDEM presentation
disclaimer: none of my code works like this and I have no intention of managing exceptions this way; this is just a "what-if" question, and the curiosity comes from the statement in that video. I thought: if it's done in Scala, why not in Java?

No, don't do that. The expensive part is not handling the exception, it is generating the stacktrace. Unfortunately the stacktrace is also the useful part. If you throw a saved exception you will be passing on a misleading stacktrace.
It could be that within the implementation of Scala there are situations where it makes sense to do this. (Maybe they are doing something recursive and want to generate an exception object upfront so in case they run out of memory they can still produce an exception.) They also have a lot of information about what they're doing so they have a better chance of getting it right. But optimizations made by JVM language implementors are a very special case.
So you wouldn't be breaking anything, unless you think providing misleading information constitutes breakage. It seems like a big risk to me.
Trying out Thomas Eding's suggestion for how to create an exception with no stacktrace seems to work:
groovy:000> class MyException extends Exception {
groovy:001> public Throwable fillInStackTrace() {}}
===> true
groovy:000> e = new MyException()
===> MyException
groovy:000> Arrays.asList(e.stackTrace)
===> []
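For reference, the same idea in plain Java (a minimal sketch; the class name is mine):
class MyStacklessException extends Exception {
    MyStacklessException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // skip the expensive stack walk; getStackTrace() stays empty
    }
}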
Also check out the JLS:
The NullPointerException (which is a kind of RuntimeException) that is thrown by method blowUp is not caught by the try statement in main, because a NullPointerException is not assignable to a variable of type BlewIt. This causes the finally clause to execute, after which the thread executing main, which is the only thread of the test program, terminates because of an uncaught exception, which typically results in printing the exception name and a simple backtrace. However, a backtrace is not required by this specification.
The problem with mandating a backtrace is that an exception can be created at one point in the program and thrown at a later one. It is prohibitively expensive to store a stack trace in an exception unless it is actually thrown (in which case the trace may be generated while unwinding the stack). Hence we do not mandate a back trace in every exception.

Q1. Is this a good practice?
Not in my book. It adds complexity and hinders diagnostics (see my answer to Q2).
Q2. Does this damage some JVM mechanism?
You won't get a meaningful stack trace from such an exception object.
Q3. If 1 is yes, is there a performance gain? (I think not, but I'm not sure)
Q4. If 1 and 3 are yes, why is it not promoted as a practice?
Due to the problems outlined above.
Q5. If 1 is no, why did Martin Odersky say in his introduction to Scala that this is how Scala works in some cases? (sorry, but I can't remember the context of this affirmation at the moment) Fosdem 2009
Hard to answer without the context.

You can do that, but the exception
must have no stacktrace, since the initial stacktrace will only serve to confuse in subsequent uses.
must not accept suppressed exceptions: if multiple threads try to add suppressed exceptions to it, its state will be corrupted.
So your exception constructor must do
super(msg, cause, /*enableSuppression*/false, /*writableStackTrace*/false);
see http://docs.oracle.com/javase/7/docs/api/java/lang/Throwable.html#Throwable%28java.lang.String,%20java.lang.Throwable,%20boolean,%20boolean%29
Now, is it useful? Yes, otherwise why would these two boolean flags exist in the first place? :)
In some complicated cases, an exception can be used as a flow-control device; it may produce simpler, faster code. Such exceptions are known as "control exceptions".
If an exception really indicates something exceptionally wrong with the program, then use a traditional exception.
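For illustration, a minimal sketch of such a shared control exception (the class name is hypothetical), using the four-argument constructor above:
final class BreakException extends RuntimeException {
    // A single shared instance is safe to reuse because suppression and
    // stack-trace capture are both disabled in the constructor below.
    static final BreakException INSTANCE = new BreakException();

    private BreakException() {
        // enableSuppression=false: addSuppressed() becomes a no-op, so
        // concurrent reuse cannot corrupt the shared instance;
        // writableStackTrace=false: no stack trace is ever captured.
        super("control-flow break", null, false, false);
    }
}
Code using it would throw BreakException.INSTANCE at the point it wants to escape from and catch it at the enclosing construct.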

Even though exceptions are relatively expensive and should be kept to a minimum, they don't cost so much that you should do obtuse things "for performance purposes". This is so often a bad excuse that some even consider that premature optimisation should be avoided at all costs. While that is not entirely true, you can measure how slow exceptions are.
public class ExceptionCost {
    public static void main(String[] args) {
        long start = System.nanoTime();
        int exceptionCount = 0;
        for (int i = 0; i < 20000; i++)
            try {
                int j = i / (i & 1); // divides by zero whenever i is even
            } catch (ArithmeticException ae) {
                exceptionCount++;
            }
        long time = System.nanoTime() - start;
        System.out.printf("Each exception took average of %,d ns%n", time / exceptionCount);
    }
}
prints what I believe is a reasonable estimate.
Each exception took average of 3,064 ns
Note: as the number of loops increases, the Exception is optimised away. i.e. for 10x the iterations
Each exception took average of 327 ns
and for 10x more
Each exception took average of 35 ns
and for 10x more
Each exception took average of 5 ns
If the exception is thrown enough, it appears the JIT is smart enough to optimise the Exception away.
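This matches HotSpot's "fast throw" optimisation: an implicit exception thrown very frequently from the same spot gets replaced by a preallocated instance with no stack trace, which is why the per-exception cost collapses. To verify this against the benchmark above, HotSpot lets you disable the optimisation (a HotSpot-specific flag):
java -XX:-OmitStackTraceInFastThrow ExceptionCost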

Related

Can this code cause an infinite loop while searching for the lowest level cause of an exception?

public static Throwable getCause(@Nonnull Throwable t) {
    while ((t instanceof ExecutionException || t instanceof CompletionException) && t.getCause() != null) {
        t = t.getCause();
    }
    return t;
}
Is this code dangerous in a sense that the while loop may never end? Just wondering if someone can cause this to go on forever.
If so, what might be a better way to handle this? I'm thinking maybe adding an upper bound limit.
Is this code dangerous in a sense that the while loop may never end? Just wondering if someone can cause this to go on forever.
In short: Theoretically? Yes. But practically? No. Your code is fine as is.
In long:
Theoretically, yes
Sure, one can create a loop in the causal chain just fine. getCause() is just a method, and not a final one at that; exceptions are just classes, so one can make their own exception and write public Throwable getCause() { return this; }.
Practically, no
... but just because someone could do that, doesn't mean you should deal with it. Because why do you want to deal with that? Perhaps you're thinking: Well, if some programmer is intentionally trying to blow up the system, I'd want to be robust and not blow up even when they try.
But therein lies a problem: If someone wants to blow up a system, they can. It's nearly trivial to do so. Imagine this:
public class HahaHackingIsSoEasy extends RuntimeException {
    @Override public Throwable getCause() {
        while (true) ;
    }
}
And I throw that. Your code will hang just the same, and if you attempt to detect loops, that's not going to solve the problem. And if you try to stop me from doing this, too, by firing up a separate thread with a timeout and then using Thread.stop() (deprecated, and dangerous) to stop me, I'll just write a loop without a safepoint, in which case neither stop() nor using JVMTI to hook in as a debugger and stopping it that way is going to work.
The conclusion is: There are only 2 reliable ways to stop intentionally malicious code:
The best, by far: Don't run the malicious code in the first place.
The distant second best option: Run it in a highly controlled sandbox environment.
The JVM is un-sandboxable from inside itself (no, the SecurityManager isn't good enough; it has absolutely no way to stop (safepoint-less) infinite loops, for example), so this at the very least involves firing up an entirely separate JVM just to do the job you want to do, so that you can set timeouts and memory limits on it, and possibly an entire new virtual machine. It'll take thousands of times the resources, and is extremely complicated; I rather doubt that's what you intended to do here.
But what about unintentional loops?
The one question that remains is, given that we already wrote off malicious code (not 'we can deal with it', but rather 'if it's intentionally malicious you cannot stop it with a loop detector'), what if it's an accident?
Generally, the best way to deal with accidents is to not deal with them at all, not in code: Let them happen; that's why you have operations teams and server maintainers and the like (you're going to have to have those, no matter what happens. Might as well use them). Once it happens, you figure it out, and you fix it.
That leaves just one final corner case which is: What if loops in causal chains have a plausible, desired usecase?
And that's a fair question. Fortunately, the answer is simple: no, there is no plausible/desired usecase. Loops in causal chains do not happen unless there is a bug (in which case, the answer is: find it, fix it) or there is malicious code (in which case, the answer is: do not run it and call your security team).
The loop follows the cause chain down to the root cause.
If one of the causes pointed back to an already visited exception, there would be a bigger failure in the causality. Therefore I'd say it will never go into an infinite loop.
Of course it is possible; you can't prevent someone from writing something like:
public class ExceptionWithCauseAsItself extends ExecutionException {
    @Override
    public Throwable getCause() {
        return this;
    }
}
Following the principle of Defensive Programming, the method should not fall into an infinite loop even when someone throws something like ExceptionWithCauseAsItself.
Since your case is not only about getting the root cause, there is probably no library that fits your exact use. I suggest referring to Apache Commons Lang's ExceptionUtils.getRootCause to get some ideas on how to tackle recursive cause structures.
But as suggested by rzwitserloot, it is simply impossible to defend yourself when someone just wants to mess you up.
So why does ExceptionUtils.getRootCause's documentation mention the following?
this method handles recursive cause structures
that might otherwise cause infinite loops
Browsing the history, the getThrowableList implementation used ExceptionUtils.getCause, which tried to obtain the cause by introspecting various methods, and hence could produce a cyclic cause chain.
This behaviour was rectified in this commit by calling Throwable#getCause instead, so a cyclic cause chain should not happen in general.
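If you nevertheless want a cycle guard, here is a minimal sketch (my own helper, not the Commons Lang code) that tracks visited throwables by identity and stops as soon as one repeats:
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutionException;

public final class Causes {
    public static Throwable getCause(Throwable t) {
        // seen.add(t) returns false once t has already been visited,
        // which breaks the loop on any cyclic cause chain.
        Set<Throwable> seen = Collections.newSetFromMap(new IdentityHashMap<>());
        while ((t instanceof ExecutionException || t instanceof CompletionException)
                && t.getCause() != null
                && seen.add(t)) {
            t = t.getCause();
        }
        return t;
    }
}
Note this still cannot defend against a getCause() override that blocks forever, as discussed above.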
More references related to this topic:
Why is exception.getCause() == exception?
How can I loop through Exception getCause() to find root cause with detail message
Cycles in chained exceptions

Why explicitly throw a NullPointerException rather than letting it happen naturally?

When reading JDK source code, I find it common that the author will check whether the parameters are null and then throw a new NullPointerException() manually.
Why do they do it? I think there's no need to do so, since a NullPointerException will be thrown anyway as soon as any method is called on the null value. (Here is some source code from HashMap, for instance:)
public V computeIfPresent(K key,
                          BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
        throw new NullPointerException();
    Node<K,V> e; V oldValue;
    int hash = hash(key);
    if ((e = getNode(hash, key)) != null &&
        (oldValue = e.value) != null) {
        V v = remappingFunction.apply(key, oldValue);
        if (v != null) {
            e.value = v;
            afterNodeAccess(e);
            return v;
        }
        else
            removeNode(hash, key, null, false, true);
    }
    return null;
}
There are a number of reasons that come to mind, several being closely related:
Fail-fast: If it's going to fail, best to fail sooner rather than later. This allows problems to be caught closer to their source, making them easier to identify and recover from. It also avoids wasting CPU cycles on code that's bound to fail.
Intent: Throwing the exception explicitly makes it clear to maintainers that the error is there purposely and the author was aware of the consequences.
Consistency: If the error were allowed to happen naturally, it might not occur in every scenario. If no mapping is found, for example, remappingFunction would never be used and the exception wouldn't be thrown. Validating input in advance allows for more deterministic behavior and clearer documentation.
Stability: Code evolves over time. Code that encounters an exception naturally might, after a bit of refactoring, cease to do so, or do so under different circumstances. Throwing it explicitly makes it less likely for behavior to change inadvertently.
It is for clarity, consistency, and to prevent extra, unnecessary work from being performed.
Consider what would happen if there wasn't a guard clause at the top of the method. It would always call hash(key) and getNode(hash, key) even when null had been passed in for the remappingFunction before the NPE was thrown.
Even worse, if the if condition is false then we take the else branch, which doesn't use the remappingFunction at all, which means the method doesn't always throw NPE when a null is passed; whether it does depends on the state of the map.
Both scenarios are bad. If null is not a valid value for remappingFunction the method should consistently throw an exception regardless of the internal state of the object at the time of the call, and it should do so without doing unnecessary work that is pointless given that it is just going to throw. Finally, it is a good principle of clean, clear code to have the guard right up front so that anyone reviewing the source code can readily see that it will do so.
Even if the exception were currently thrown by every branch of code, it is possible that a future revision of the code would change that. Performing the check at the beginning ensures it will definitely be performed.
In addition to the reasons listed by @shmosel's excellent answer ...
Performance: There may be / have been performance benefits (on some JVMs) to throwing the NPE explicitly rather than letting the JVM do it.
It depends on the strategy that the Java interpreter and JIT compiler take to detecting the dereferencing of null pointers. One strategy is to not test for null, but instead trap the SIGSEGV that happens when an instruction tries to access address 0. This is the fastest approach in the case where the reference is always valid, but it is expensive in the NPE case.
An explicit test for null in the code would avoid the SIGSEGV performance hit in a scenario where NPEs were frequent.
(I doubt that this would be a worthwhile micro-optimization in a modern JVM, but it could have been in the past.)
Compatibility: The likely reason that there is no message in the exception is for compatibility with NPEs that are thrown by the JVM itself. In a compliant Java implementation, an NPE thrown by the JVM has a null message. (Android Java is different.)
Apart from what other people have pointed out, it's worth noting the role of convention here. In C#, for example, you also have the same convention of explicitly raising an exception in cases like this, but it's specifically an ArgumentNullException, which is somewhat more specific. (The C# convention is that NullReferenceException always represents a bug of some kind - quite simply, it shouldn't ever happen in production code; granted, ArgumentNullException usually does, too, but it could be a bug more along the line of "you don't understand how to use the library correctly" kind of bug).
So, basically, in C# NullReferenceException means that your program actually tried to use the null, whereas ArgumentNullException means that it recognized that the value was wrong and didn't even bother to try to use it. The implications can actually be different (depending on the circumstances) because ArgumentNullException means that the method in question didn't have side effects yet (since it failed the method preconditions).
Incidentally, if you're raising something like ArgumentNullException or IllegalArgumentException, that's part of the point of doing the check: you want a different exception than you'd "normally" get.
Either way, explicitly raising the exception reinforces the good practice of being explicit about your method's pre-conditions and expected arguments, which makes the code easier to read, use, and maintain. If you didn't explicitly check for null, I don't know if it's because you thought that no one would ever pass a null argument, you're counting it to throw the exception anyway, or you just forgot to check for that.
It is so you will get the exception as soon as you perpetrate the error, rather than later on when you're using the map and won't understand why it happened.
It turns a seemingly erratic error condition into a clear contract violation: The function has some preconditions for working correctly, so it checks them beforehand, enforcing them to be met.
The effect is, that you won't have to debug computeIfPresent() when you get the exception out of it. Once you see that the exception comes from the precondition check, you know that you called the function with an illegal argument. If the check were not there, you would need to exclude the possibility that there is some bug within computeIfPresent() itself that leads to the exception being thrown.
Obviously, throwing the generic NullPointerException is a really bad choice, as it does not signal a contract violation in and of itself. IllegalArgumentException would be a better choice.
Sidenote:
I don't know whether Java allows this (I doubt it), but C/C++ programmers use an assert() in this case, which is significantly better for debugging: It tells the program to crash immediately and as hard as possible should the provided condition evaluate to false. So, if you ran
void MyClass_foo(MyClass* me, int (*someFunction)(int)) {
assert(me);
assert(someFunction);
...
}
under a debugger, and something passed NULL into either argument, the program would stop right at the line telling which argument was NULL, and you would be able to examine all local variables of the entire call stack at leisure.
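(For what it's worth, Java does have an assert statement, since 1.4; assertions are disabled by default and enabled with the -ea JVM flag. A rough Java equivalent of the snippet above, with placeholder parameter types:)
static void foo(Object me, Runnable someFunction) {
    // With assertions enabled (java -ea), each assert throws
    // AssertionError as soon as its condition is false.
    assert me != null : "me is null";
    assert someFunction != null : "someFunction is null";
    // ...
}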
It's because it's possible for it not to happen naturally. Let's look at a piece of code like this:
boolean isUserAMoron(User user) {
    Connection c = UnstableDatabase.getConnection();
    if ("Moron".equals(user.name)) {
        // In this case we don't need to connect to the DB
        return true;
    } else {
        return c.makeMoronishCheck(user.id);
    }
}
(of course there are numerous code-quality problems in this sample; sorry, too lazy to imagine a perfect one)
A situation where c is never actually used, so no NullPointerException is thrown even when c == null, is entirely possible.
In more complicated situations it becomes very hard to hunt down such cases. This is why a general check like if (c == null) throw new NullPointerException() is better.
It is intentional, to protect against further damage or against getting into an inconsistent state.
Apart from all other excellent answers here, I'd also like to add a few cases.
You can add a message if you create your own exception
If you throw your own NullPointerException you can add a message (which you definitely should!)
The default message from new NullPointerException(), and from all methods that use it without a message such as Objects.requireNonNull, is null. If you print that null it can even translate to an empty string...
A bit short and uninformative...
The stack trace will give a lot of information, but for the user to know what was null they have to dig up the code and look at the exact row.
Now imagine that NPE being wrapped and sent over the net, e.g. as a message in a web service error, perhaps between different departments or even organizations. Worst case scenario, no one may figure out what null stands for...
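A minimal sketch of attaching such a message (Person is a hypothetical class):
import java.util.Objects;

public class Person {
    private final String name;

    public Person(String name) {
        // Fails fast, and the NPE message names the offending parameter
        // instead of leaving readers to dig through code and stack trace.
        this.name = Objects.requireNonNull(name, "name must not be null");
    }
}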
Chained method calls will keep you guessing
An exception will only tell you on what row the exception occurred. Consider the following row:
repository.getService(someObject.someMethod());
If you get an NPE and it points at this row, which one of repository and someObject was null?
Instead, checking these variables when you get them will at least point to a row where they are hopefully the only variable being handled. And, as mentioned before, even better if your error message contains the name of the variable or similar.
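A sketch of that advice applied to the row above (repository and someObject as in the example, using java.util.Objects):
// Each reference is checked on its own line, so any NPE is unambiguous:
Objects.requireNonNull(repository, "repository must not be null");
Objects.requireNonNull(someObject, "someObject must not be null");
repository.getService(someObject.someMethod());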
Errors when processing lots of input should give identifying information
Imagine that your program is processing an input file with thousands of rows and suddenly there's a NullPointerException. You look at the place and realize some input was incorrect... what input? You'll need more information about the row number, perhaps the column or even the whole row text to understand what row in that file needs fixing.

Why is it possible to recover from a StackOverflowError?

I'm surprised at how it is possible to continue execution even after a StackOverflowError has occurred in Java.
I know that StackOverflowError is a subclass of the class Error.
The class Error is documented as "a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch."
This sounds more like a recommendation than a rule, implying that catching an Error like a StackOverflowError is in fact permitted and it's up to the programmer's good judgment not to do so. And see, I tested this code and it terminates normally.
public class Test
{
    public static void main(String[] args)
    {
        try {
            foo();
        } catch (StackOverflowError e) {
            bar();
        }
        System.out.println("normal termination");
    }
    private static void foo() {
        System.out.println("foo");
        foo();
    }
    private static void bar() {
        System.out.println("bar");
    }
}
How can this be? I think by the time the StackOverflowError is thrown, the stack should be so full that there is no room for calling another function. Is the error handling block running in a different stack, or what is going on here?
When the stack overflows and StackOverflowError is thrown, the usual exception handling unwinds the stack. Unwinding the stack means:
abort the execution of the currently active function
delete its stack frame, proceed with the calling function
abort the execution of the caller
delete its stack frame, proceed with the calling function
and so on...
... until the exception is caught. This is normal (in fact, necessary) and independent of which exception is thrown and why. Since you catch the exception outside of the first call to foo(), the thousands of foo stack frames that filled the stack have all been unwound and most of the stack is free to be used again.
When the StackOverflowError is thrown, the stack is full. However, when it's caught, all those foo calls have been popped from the stack. bar can run normally because the stack is no longer overflowing with foos. (Note that I don't think the JLS guarantees you can recover from a stack overflow like this.)
When the StackOverflowError occurs, the JVM will pop down to the catch, freeing the stack.
In your example, it gets rid of all the stacked foo calls.
Because the stack doesn't actually overflow. A better name might be AttemptToOverflowStack. Basically what it means is that the last attempt to adjust the stack frame failed because there isn't enough free space left on the stack. The stack could actually have lots of space left, just not enough space. So, whatever operation would have depended upon the call succeeding (typically a method invocation) never gets executed, and all that is left is for the program to deal with that fact. Which means that it is really no different from any other exception. In fact, you could catch the exception in the function that is making the call.
As has already been answered, it is possible to execute code, and in particular to call functions, after catching a StackOverflowError because the normal exception handling procedure of the JVM unwinds the stack between the throw and the catch points, freeing stack-space for you to use. And your experiment confirms that is the case.
However, that is not quite the same as saying that it is, in general, possible to recover from a StackOverflowError.
A StackOverflowError IS-A VirtualMachineError, which IS-AN Error. As you point out, Java provides some vague advice for an Error:
indicates serious problems that a reasonable application should not try to catch
and you, reasonably, conclude that "should" sounds like catching an Error might be OK in some circumstances. Note that performing one experiment does not demonstrate that something is, in general, safe to do. Only the rules of the Java language and the specifications of the classes you use can do that. A VirtualMachineError is a special class of exception, because the Java Language Specification and the Java Virtual Machine Specification provide information about the semantics of this exception. In particular, the latter says:
A Java Virtual Machine implementation throws an object that is an instance of a subclass of the class VirtualMachineError when an internal error or resource limitation prevents it from implementing the semantics described in this chapter. This specification cannot predict where internal errors or resource limitations may be encountered and does not mandate precisely when they can be reported. Thus, any of the VirtualMachineError subclasses defined below may be thrown at any time during the operation of the Java Virtual Machine:
...
StackOverflowError: The Java Virtual Machine implementation has run out of stack space for a thread, typically because the thread is doing an unbounded number of recursive invocations as a result of a fault in the executing program.
The crucial problem is that you "cannot predict" where or when a StackOverflowError will be thrown. There are no guarantees about where it will not be thrown. You can not rely on it being thrown on entry to a method, for example. It could be thrown at a point within a method.
This unpredictability is potentially disastrous. As it can be thrown within a method, it could be thrown part way through a sequence of operations that the class considers to be one "atomic" operation, leaving the object in a partially modified, inconsistent, state. With the object in an inconsistent state, any attempt to use that object could result in erroneous behaviour. In all practical cases you can not know which object is in an inconsistent state, so you have to assume that no objects are trustworthy. Any recovery operation or attempt to continue after the exception is caught could therefore have erroneous behaviour. The only safe thing to do is therefore to not catch a StackOverflowError, but rather to allow the program to terminate. (In practice you might attempt to do some error logging to assist troubleshooting, but you can not rely on that logging operating correctly). That is, you can not reliably recover from a StackOverflowError.

Throwing a new exception while throwing an old exception

If a destructor throws in C++ during stack unwinding caused by an exception, the program terminates. (That's why destructors should never throw in C++.) Example:
struct Foo
{
~Foo()
{
throw 2; // whoops, already throwing 1 at this point, let's terminate!
}
};
int main()
{
Foo foo;
throw 1;
}
terminate called after throwing an instance of 'int'
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
If a finally block is entered in Java because of an exception in the corresponding try block and that finally block throws a second exception, the first exception is silently swallowed. Example:
public static void foo() throws Exception
{
    try
    {
        throw new Exception("first");
    }
    finally
    {
        throw new Exception("second");
    }
}
public static void main(String[] args)
{
    try
    {
        foo();
    }
    catch (Exception e)
    {
        System.out.println(e.getMessage()); // prints "second"
    }
}
This question crossed my mind: Could a programming language handle multiple exceptions being thrown at the same time? Would that be useful? Have you ever missed that ability? Is there a language that already supports this? Is there any experience with such an approach?
Any thoughts?
Think in terms of flow control. Exceptions are fundamentally just fancy setjmp/longjmp or setcc/callcc anyway. The exception object is used to select a particular place to jump to, like an address. The exception handler simply recurses on the current exception, longjmping until it is handled.
Handling two exceptions at a time is simply a matter of bundling them together into one, such that the result produces coherent flow control. I can think of two alternatives:
Combine them into an uncatchable exception. It would amount to unwinding the entire stack and ignoring all handlers. This creates the risk of an exception cascade causing totally random behavior.
Somehow construct their Cartesian product. Yeah, right.
The C++ methodology serves the interest of predictability well.
You can chain exceptions. http://java.sun.com/docs/books/tutorial/essential/exceptions/chained.html
try {
} catch (IOException e) {
throw new SampleException("Other IOException", e);
}
You can also have a try/catch inside your finally, too.
try {
} catch (Exception e) {
} finally {
    try {
        throw new SampleException("foo");
    } catch (Exception e) {
    }
}
Edit:
Also you can have multiple catches.
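It is also worth noting that since Java 7 the language handles one version of this itself: in a try-with-resources statement, an exception thrown by close() while another exception is already propagating is attached to the first one as a suppressed exception instead of replacing it. A minimal sketch:
public class SuppressedDemo {
    static class Resource implements AutoCloseable {
        @Override
        public void close() throws Exception {
            throw new Exception("second");
        }
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            throw new Exception("first");
        } catch (Exception e) {
            System.out.println(e.getMessage());                    // prints "first"
            System.out.println(e.getSuppressed()[0].getMessage()); // prints "second"
        }
    }
}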
I don't think multiple exceptions would be a good idea, because an exception is already something you need to recover from. The only reason to have more than one exception I can think of is if you use it as part of your logic (like multiple returns), which would deviate from the original purpose of the idea of an exception.
Besides, how can you produce two exceptions at the same time?
Could a programming language handle multiple exceptions? Sure, I don't see why not. Would this be useful? No, I would say it would not be. Error handling and resumption is very hard as it is - I don't see how adding combinatorial explosion to the problem would help things.
Yes, it is possible for a language to support throwing multiple exceptions at a time; however, that also means that programmers need to handle multiple exceptions at a time as well, so there is definitely a tradeoff. I have heard of languages that have this, although I am having trouble coming up with the list off the top of my head; I believe LINQ or PLINQ may be one of them, but I don't quite remember. Anyway, there are different ways that multiple exceptions can be thrown... one way is to use exception chaining, either by forcing one exception to become the "cause" or "previouslyPropagatingException" of the other, or by bottling all of the exceptions up into a single exception representing the fact that multiple exceptions have been thrown. I suppose a language could also introduce a catch clause that lets you specify multiple exception types at once, although that would be a poor design choice, IMHO, as the number of handlers is large enough as is, and it would result in an explosion of catch clauses just to handle every single possible combination.
C++ std::exception_ptr allows you to store exceptions. So it should be possible to embed exceptions in other exceptions and give you the impression that you have the stack of thrown exceptions. This could be useful if you want to know the root cause of the actual exception.
One situation where multiple thrown exceptions in parallel might be useful, is unit testing with JUnit:
If a test fails, an exception is thrown (either produced by code under test or an assertion).
Each @After method is invoked after the test, whether the test fails or succeeds.
If an @After method fails, another exception is thrown.
Only the exception thrown in the @After method is displayed in my IDE (Eclipse) for the test result.
I know that JUnit notifies its test listeners about both exceptions, and when debugging a test in Eclipse I can see the first exception appearing in the JUnit view, only to be replaced by the second exception shortly after.
This problem should probably be resolved by making Eclipse remember all notifications for a given test, not only the last one. Having "parallel exceptions", where the exception from the finally does not swallow the one from the try, would solve this issue too.
If you think about it, the situation you've described has Exception("first") as the root cause of Exception("second"), conceptually. The most useful thing for the user would probably be to get a stack dump showing a chain in that order...
In managed platforms, I can think of situations where it might be useful to have a disposer "elevate" an exception to something which is stronger, but not totally fatal to an application. For example, a "command" object's disposer might attempt to unwind the state of its associated connection to cancel any partially-performed commands. If that works, the underlying code may attempt to do other things with the connection. If the attempted "cancel" doesn't work, the exception should probably propagate out to the level where the connection would have been destroyed. In such a case, it may be useful for the exception to contain an "inner exception", though the only way I know to achieve that would be to have the attempted unwinding in a catch block rather than a "finally" block.

Which is better/more efficient: check for bad values or catch Exceptions in Java

Which is more efficient in Java: to check for bad values to prevent exceptions or let the exceptions happen and catch them?
Here are two blocks of sample code to illustrate this difference:
void doSomething(type value1) {
    ResultType result = genericError;
    if (value1 == badvalue || value1 == badvalue2 || ...) {
        result = specificError;
    } else {
        DoSomeActionThatFailsIfValue1IsBad(value1);
        // ...
        result = success;
    }
    callback(result);
}
versus
void doSomething(type value1) {
    ResultType result = genericError;
    try {
        DoSomeActionThatFailsIfValue1IsBad(value1);
        // ...
        result = success;
    } catch (ExceptionType e) {
        result = specificError;
    } finally {
        callback(result);
    }
}
On the one hand, you're always doing a comparison. On the other hand, I honestly don't know what the internals of the system do to generate an exception, throw it, and then trigger the catch clause. It has the sound of being less efficient, but if it doesn't add overhead in the non-error case, then it's more efficient, on average. Which is it? Does it add similar checking anyway? Is that checking there in the implicit code added for exception handling, even with the additional layer of explicit checking? Perhaps it always depends on the type of exception? What am I not considering?
Let's also assume that all "bad values" are known -- that's an obvious issue. If you don't know all the bad values -- or the list is too long and not regular -- then exception handling may be the only way, anyway.
So, what are the pros and cons of each, and why?
Side questions to consider:
How does your answer change if the value is "bad" (would throw an exception) most of the time?
How much of this would depend on the specifics of the VM in use?
If this same question was asked for language-X, would the answer be different? (Which, more generally, is asking if it can be assumed checking values is always more efficient than relying on exception handling simply because it adds more overhead by current compilers/interpreters.)
(New) The act of throwing an exception is slow. Does entering a try block have overhead, even if an exception is not thrown?
Similarities on SO:
This is similar to the code sample in this answer, but states they are similar only in concept, not compiled reality.
The premise is similar to this question but, in my case, the requester of the task (e.g. "Something") isn't the caller of the method (e.g. "doSomething") (thus no returns).
And this one is very similar, but I didn't find an answer to my question.
And similar to far too many other questions to list, except:
I'm not asking about theoretical best practice. I'm asking more about runtime performance and efficiency (which should mean, for specific cases, there are non-opinion answers), especially on resource limited platforms. For instance, if the only bad value was simply a null object, would it be better/more efficient to check for that or just attempt to use it and catch the exception?
"How does your answer change if the value is "bad" (would throw an exception) most of the time?" I think that's the key right there. Exceptions are expensive as compared to comparisons, so you really want to use exceptions for exceptional conditions.
Similarly, your question about how this answer might change depending on the language/environment ties into that: The expense of exceptions is different in different environments. .Net 1.1 and 2.0 are incredibly slow the first time an exception is thrown, for instance.
Purely from an efficiency standpoint, and given your code examples, I think it depends on how often you expect to see bad values. If bad values are not too uncommon, it's faster to do the comparison because exceptions are expensive. If bad values are very rare, however, it may be faster to use the exception.
The bottom line, though, is that if you're looking for performance, profile your code. This block of code may not even be a concern. If it is, then try it both ways and see which is faster. Again, it depends on how often you expect to see bad values.
I could find surprisingly little current information about the cost of throwing exceptions. Pretty obviously there must be some: you are creating an object, and probably collecting stack trace information.
In the specific example you talk about:
if (value1 == badvalue || value1 == badvalue2 || ...) {
    result = specificError;
} else {
    DoSomeActionThatFailsIfValue1IsBad(value1);
    // ...
    result = success;
}
The problem for me here is that you are in danger of (probably incompletely) replicating logic in the caller that should be owned by the method you are calling.
Hence I would not perform those checks. Your code is not performing an experiment; it presumably "knows" the data it's supposed to be sending down. Hence the likelihood of the exception being thrown should be low. So keep it simple and let the callee do the checks.
In my opinion you should have try/catch blocks around anything that could potentially throw exceptions, if only to have a safely running system. You have finer control over error responses if you check for possible data errors first. So I suggest doing both.
Well, exceptions are more expensive, yes, but for me it's about weighing the cost of efficiency vs. bad design. Unless your use case demands it, always stick to the best design.
The question really is: when do you throw an exception? In exceptional situations.
If your arguments are not in the range that you're looking for, I'd suggest returning an error code or a boolean.
For instance, a method:
// (validated, canConnectToDb, validatedAgainstDb and DBException are placeholders)
public int isAuthenticated(String username, String password)
{
    if (!validated(username, password))
    {
        // just an error
        // log it
        return -2;
    }
    // contacting the database here
    if (!canConnectToDb())
    {
        // woww this is HUUGE
        throw new DBException("cannot connect"); // or something like that
    }
    // validate against the db here
    if (validatedAgainstDb(username, password))
        return 0;
    // etc etc
    return -1;
}
That's my 2 cents.
My personal opinion is that exceptions indicate that something is broken - this might well be an API called with illegal arguments, or division by zero, or file not found, etc. This means that exceptions may well be thrown as the result of checking values.
For the reader of your code - again my personal opinion - it is much easier to follow the flow if you can be certain that it is not put aside by all kinds of strange throws (which are essentially gotos in disguise if used as part of the program flow). You simply have less to think about.
This is in my opinion a good thing. "Smart" code is hard to wrap your head around.
On a side note - JVMs keep getting much smarter - coding for efficiency usually doesn't pay off.
Normally, one would assume that try-catch is more expensive because it looks heavier in the code, but that entirely depends on the JIT. My guess is that it's impossible to tell without having a real case and some performance measurements. The comparisons could be more expensive, especially when you have many values, for example, or because you have to call equals() since == won't work in many cases.
As for which one you should choose (as in "code style"), my answer is: make sure that the user gets a useful error message when it fails. Anything else is a matter of taste and I can't give you rules for that.
To be safe, assume exceptions are expensive. They often are, and if they aren't it will at least push you towards using exceptions wisely. (Entering a try block is usually trivially cheap, since implementors do their best to make it so, even at the cost of making exceptions more expensive. After all, if exceptions are used properly, the code will enter the try block many times more often than it will throw.)
More importantly, exceptions are a style issue. Exceptions for exceptional conditions make code simpler because there's less error-checking code, so the actual functionality is clearer and more compact.
However, if exceptions might be thrown in more normal circumstances, there's invisible flows of control that the reader has to keep in mind, comparable to Intercal's COME FROM...UNLESS... statement. (Intercal was one of the very early joke languages.) This is very confusing, and can easily lead to misreading and misunderstanding the code.
My advice, which applies to every language and environment I know about:
Don't worry about efficiency here. There are strong reasons besides efficiency for using exceptions in a way that will prove efficient.
Use try blocks freely.
Use exceptions for exceptional conditions. If an exception is likely, test for it and handle it in another way.
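A small sketch of that last point (a hypothetical helper; assumes empty input is the common case and malformed input is rare):
static int parseOrDefault(String s, int defaultValue) {
    // The likely bad case is tested for up front; no exception involved.
    if (s == null || s.isEmpty()) return defaultValue;
    try {
        return Integer.parseInt(s); // truly malformed input stays exceptional
    } catch (NumberFormatException e) {
        return defaultValue;
    }
}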
A question like this is like asking, "is it more efficient to write an interface or a base class with all abstract functions?"
Does it matter which is more efficient? Only one of them is the right way for a given situation.
Note that if your code doesn't throw exceptions, it doesn't always imply that the input is within bounds. Relying on exceptions thrown by the standard Java API and JVM, such as NullPointerException or ArrayIndexOutOfBoundsException, is a very unhealthy way to validate input. Garbage in sometimes produces garbage-but-no-exception out.
And yes, exceptions are quite expensive. They should not be thrown during a normal processing flow.
From an optimization standpoint, I think you're going to find it's probably a wash. They'll both perform all right; I don't think exception throwing is ever going to be your bottleneck. You should probably be more concerned with what Java is designed to do (and what other Java programmers will expect), and that is thrown exceptions. Java is very much designed around throwing/catching exceptions, and you can bet the designers made that process as efficient as possible.
I think it's mostly a philosophy and language culture sort of thing. In Java, the general accepted practice is that the method signature is a contract between your method and the code calling it. So if you receive an improper value, you generally throw an unchecked exception and let it be dealt with at a higher level:
public void setAge(int age)
{
    if (age < 0)
    {
        throw new IllegalArgumentException("Age can't be negative");
    }
    this.age = age;
}
In this case, the caller broke their end of the contract, so you spit their input back at them with an exception. The "throws" clause is for use when you can't fulfill your end of the contract for some reason.
public void readFile(String filename) throws IOException
{
    File myfile = new File(filename);
    FileInputStream fis = new FileInputStream(myfile);
    //do stuff
    fis.read();
    //do more stuff
}
In this case, as the method writer, you've broken your end of the contract because the user gave you valid input, but you couldn't complete their request due to an IOException.
Hope that kinda puts you on the right track. Good luck!
