What is an alternative to exceptions for flow control? - java

I inherited a java application that processes requests and throws an exception if it determines a request should be cancelled. Exceptions have been convenient for the previous developer because they are an easy way to exit out of a tree of logic that no longer applies (yes, a goto) and they print a stack trace to the log, which is good info to have. The consensus on forums and blogs seems to be that exceptions should not be used for flow control, and over half of the requests processed are cancelled, so they're definitely not exceptional situations. One argument is performance, which doesn't apply because our exception-ful code has run fast enough for years. Another argument is that the goto-ness of it is not clear, which I agree with. My question is: what is the alternative? The only thing I can think of is to have every method return true if processing should continue or false if it shouldn't. This seems like it would hugely bloat my code, changing this:
checkSomething();
checkSomethingElse();
into this:
boolean continueProcessing = false;
continueProcessing = checkSomething();
if (!continueProcessing) {
return false;
}
continueProcessing = checkSomethingElse();
if (!continueProcessing) {
return false;
}
And then what if the method is supposed to return something? Any guidance would be great. I'd really like to observe whatever "best practices" are available.
Update:
Another bit I probably should have mentioned in the first place is that a request is cancelled more than 50% of the time and does not mean something didn't go right, it means the request wasn't needed after all.
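For the "what if the method is supposed to return something?" part, one common shape is Optional (Java 8+), where an empty result stands for "request not needed" with no exception involved. A minimal sketch; all names here are invented, not from the actual application:

```java
import java.util.Optional;

// Sketch: Optional for checks that must also return a value. An empty
// Optional means "cancelled, not needed", not an error.
class CustomerLookup {
    static Optional<String> lookupCustomer(int requestId) {
        if (requestId < 0) {
            return Optional.empty(); // request not needed after all
        }
        return Optional.of("customer-" + requestId);
    }

    static Optional<String> process(int requestId) {
        // map() only runs if the request was not cancelled, so the
        // tree of logic that no longer applies is skipped naturally.
        return lookupCustomer(requestId).map(String::toUpperCase);
    }
}
```

The caller decides what a cancelled request means at the very end, instead of each intermediate method returning a success flag.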

In your scenario, the alternatives are throwing an exception and returning a result.
Which is best (and which is "best practice") depends on whether the failure of the "check..." methods should be classed as normal or exceptional. To some degree this is a judgement call that you have to make. In making that call there are a couple of things to bear in mind:
Creating + throwing + catching an exception is roughly 3 orders of magnitude SLOWER than testing a boolean result.
The big problem with returning status codes is that it is easy to forget to test them.
In summary, best practice is to not use exceptions to implement normal flow control, but it is up to you to decide where the borderline between normal and exceptional is.
Without seeing the real code / context, my gut feeling is that this is likely to be an appropriate place to use exceptions.
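As a concrete sketch of the "return a result" side of that borderline, an enum return value makes the cancelled/continue outcome explicit at every call site. All names below are invented, not from the asker's code:

```java
// Sketch: an enum return value instead of an exception for the
// "request cancelled" outcome.
enum CheckResult { CONTINUE, CANCELLED }

class RequestProcessor {
    // Hypothetical rule: negative ids mean the request is not needed.
    CheckResult checkSomething(int requestId) {
        return requestId < 0 ? CheckResult.CANCELLED : CheckResult.CONTINUE;
    }

    CheckResult checkSomethingElse(int requestId) {
        return requestId == 0 ? CheckResult.CANCELLED : CheckResult.CONTINUE;
    }

    CheckResult process(int requestId) {
        // Early returns keep the call sites flat, without the
        // boolean flag the asker was worried about.
        if (checkSomething(requestId) == CheckResult.CANCELLED) {
            return CheckResult.CANCELLED;
        }
        if (checkSomethingElse(requestId) == CheckResult.CANCELLED) {
            return CheckResult.CANCELLED;
        }
        // ... real processing would go here ...
        return CheckResult.CONTINUE;
    }
}
```

An enum is also easier to extend later (e.g. a third outcome) than a boolean.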

See How slow are Java exceptions? for a great discussion on this topic.

tl;dr
Separation of Concerns, and IMO you should do this:
boolean continueProcessing = checkSomething() && checkSomethingElse();
or perhaps there are other design problems in the code.
What's the concern of the function -- as you want to define it (this can be a subjective question)? If the error fits into the function's concern, then don't throw an exception. If you are confused about whether or not the error fits into the concern, then perhaps the function is trying to accomplish too many concerns on its own (bad design).
Error control options:
- don't report the error; either it is handled directly by the function or it doesn't matter enough
- return value is:
  - null instead of an object
  - the error information (perhaps even a different data type than the object returned on success)
- an argument passed in will be used to store error data
- trigger an event
- call a closure passed to the function if an error occurs
- throw an exception (I'm arguing this should usually only be done if it's not part of the arbitrarily defined purpose of the function)
If the purpose of the code is to check some state, then knowing that the state is false is directly the point of the function. Don't throw an exception, but return false.
That's what it looks like you are wanting. You have process X which is running checkers Y and Z. Control flow for process X (or any calling process) is not the same concern as checking states of a system.

How about
if (!checkSomething()) return false;
if (!checkSomethingElse()) return false;
No need for the flag if you exit early.

int error = 0;
do { // try-alt
    error = operation_1(); if (error > 0) { break; }
    error = operation_2(); if (error > 0) { break; }
    error = operation_3(); if (error > 0) { break; }
} while (false);

switch (error) { // catch-alt
    case 1: handle(1); break;
    case 2: handle(2); break;
    case 3: handle(3); break;
}
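The same try/catch-shaped control flow can be written in Java with a labeled block instead of the do/while(false) wrapper. A sketch with invented operation names and error codes:

```java
// Sketch: a labeled block plays the role of the "try", and the switch
// afterwards plays the role of the "catch".
class BreakStyle {
    static int operation1(boolean ok) { return ok ? 0 : 1; }
    static int operation2(boolean ok) { return ok ? 0 : 2; }

    static int run(boolean firstOk, boolean secondOk) {
        int error = 0;
        attempt: {                       // try-alt
            error = operation1(firstOk);
            if (error > 0) break attempt;
            error = operation2(secondOk);
            if (error > 0) break attempt;
        }
        switch (error) {                 // catch-alt
            case 1: /* handle(1) */ break;
            case 2: /* handle(2) */ break;
        }
        return error;
    }
}
```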

Related

Can this code cause an infinite loop while searching for the lowest level cause of an exception?

public static Throwable getCause(@Nonnull Throwable t) {
    while ((t instanceof ExecutionException || t instanceof CompletionException) && t.getCause() != null) {
        t = t.getCause();
    }
    return t;
}
Is this code dangerous in a sense that the while loop may never end? Just wondering if someone can cause this to go on forever.
If so, what might be a better way to handle this? I'm thinking maybe adding an upper bound limit.
Is this code dangerous in a sense that the while loop may never end? Just wondering if someone can cause this to go on forever.
In short: Theoretically? Yes. But practically? No. Your code is fine as is.
In long:
Theoretically, yes
Sure, one can create a loop in the causal chain just fine. getCause() is just a method, and not a final one at that; exceptions are just classes, so one can make their own exception and write public Throwable getCause() { return this; }.
Practically, no
... but just because someone could do that, doesn't mean you should deal with it. Because why do you want to deal with that? Perhaps you're thinking: Well, if some programmer is intentionally trying to blow up the system, I'd want to be robust and not blow up even when they try.
But therein lies a problem: If someone wants to blow up a system, they can. It's nearly trivial to do so. Imagine this:
public class HahaHackingIsSoEasy extends RuntimeException {
    @Override public Throwable getCause() {
        while (true) ;
    }
}
And I throw that. Your code will hang just the same, and if you attempt to detect loops, that's not going to solve the problem. And if you try to stop me from doing this, too, by firing up a separate thread with a timeout and then using Thread.stop() (deprecated, and dangerous) to stop me, I'll just write a loop without a safepoint, in which case neither stop() nor using JVMTI to hook in as a debugger and stop that way is going to work.
The conclusion is: There are only 2 reliable ways to stop intentionally malicious code:
The best, by far: Don't run the malicious code in the first place.
The distant second best option: Run it in a highly controlled sandbox environment.
The JVM is un-sandboxable from inside itself (no, the SecurityManager isn't good enough; it has absolutely no way to stop (safepoint-less) infinite loops, for example), so this at the very least involves firing up an entirely separate JVM just to do the job you want to do, so that you can set timeouts and memory limits on it, and possibly an entire new virtual machine. It'll take thousands of times the resources, and is extremely complicated; I rather doubt that's what you intended to do here.
But what about unintentional loops?
The one question that remains is, given that we already wrote off malicious code (not 'we can deal with it', but rather 'if it's intentionally malicious, you cannot stop them with a loop detector'), what if it's an accident?
Generally, the best way to deal with accidents is to not deal with them at all, not in code: Let them happen; that's why you have operations teams and server maintainers and the like (you're going to have to have those, no matter what happens. Might as well use them). Once it happens, you figure it out, and you fix it.
That leaves just one final corner case which is: What if loops in causal chains have a plausible, desired usecase?
And that's a fair question. Fortunately, the answer is simple: no, there is no plausible/desired usecase. Loops in causal chains do not happen unless there is a bug (in which case, the answer is: find it, fix it), or there is malicious code (in which case, the answer is: do not run it and call your security team).
The loop is following the exception hierarchy down to the root cause.
If that one points back to one of the already visited exceptions there is a bigger fail in the causality. Therefore I'd say it will never go into an infinite loop.
Of course it is possible, you can't prevent someone write something like:
public class ExceptionWithCauseAsItself extends ExecutionException {
    @Override
    public Throwable getCause() {
        return this;
    }
}
Following the principle of Defensive Programming, the method should not fall into an infinite loop even when someone throws something like ExceptionWithCauseAsItself.
Since your case is not only about getting the root cause, there is probably no library method that does exactly what you need. I suggest referring to Apache Commons Lang's ExceptionUtils.getRootCause to get some ideas on how to tackle recursive cause structures.
But as suggested by rzwitserloot, it is simply impossible to defend yourself when someone is determined to mess you up.
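For the accidental-cycle case, a visited set keyed on object identity terminates on any loop. This is only a sketch in the spirit of ExceptionUtils.getRootCause, not its actual implementation, and unlike the original snippet it follows every cause rather than only ExecutionException/CompletionException:

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Sketch of a cycle-proof root-cause walk.
class RootCause {
    static Throwable getRootCause(Throwable t) {
        // Identity-based visited set: equals()/hashCode() on a Throwable
        // could themselves be overridden by a hostile subclass.
        Set<Throwable> seen = Collections.newSetFromMap(new IdentityHashMap<>());
        while (t.getCause() != null && seen.add(t)) {
            t = t.getCause();
        }
        return t;
    }
}
```

On a cycle this returns the first throwable seen twice instead of looping forever; on a normal chain it behaves like the plain loop.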
So why does ExceptionUtils.getRootCause's documentation mention the following?
this method handles recursive cause structures
that might otherwise cause infinite loops
Browsing the history: the getThrowableList implementation used to rely on ExceptionUtils.getCause, which tried to find a cause by introspecting several differently-named methods, and that could produce a cyclic cause chain.
That behaviour was rectified in this commit by calling Throwable#getCause instead, so cyclic cause chains should not happen in general.
More reference related to this topic:
Why is exception.getCause() == exception?
How can I loop through Exception getCause() to find root cause with detail message
Cycles in chained exceptions

Why explicitly throw a NullPointerException rather than letting it happen naturally?

When reading JDK source code, I find it common that the author will check whether parameters are null and then throw new NullPointerException() manually.
Why do they do it? I think there's no need, since a NullPointerException will be thrown anyway as soon as any method is called on the null value. (Here is some source code from HashMap, for instance:)
public V computeIfPresent(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (remappingFunction == null)
throw new NullPointerException();
Node<K,V> e; V oldValue;
int hash = hash(key);
if ((e = getNode(hash, key)) != null &&
(oldValue = e.value) != null) {
V v = remappingFunction.apply(key, oldValue);
if (v != null) {
e.value = v;
afterNodeAccess(e);
return v;
}
else
removeNode(hash, key, null, false, true);
}
return null;
}
There are a number of reasons that come to mind, several being closely related:
Fail-fast: If it's going to fail, best to fail sooner rather than later. This allows problems to be caught closer to their source, making them easier to identify and recover from. It also avoids wasting CPU cycles on code that's bound to fail.
Intent: Throwing the exception explicitly makes it clear to maintainers that the error is there purposely and the author was aware of the consequences.
Consistency: If the error were allowed to happen naturally, it might not occur in every scenario. If no mapping is found, for example, remappingFunction would never be used and the exception wouldn't be thrown. Validating input in advance allows for more deterministic behavior and clearer documentation.
Stability: Code evolves over time. Code that encounters an exception naturally might, after a bit of refactoring, cease to do so, or do so under different circumstances. Throwing it explicitly makes it less likely for behavior to change inadvertently.
It is for clarity, consistency, and to prevent extra, unnecessary work from being performed.
Consider what would happen if there wasn't a guard clause at the top of the method. It would always call hash(key) and getNode(hash, key) even when null had been passed in for the remappingFunction before the NPE was thrown.
Even worse, if the if condition is false then we take the else branch, which doesn't use the remappingFunction at all, which means the method doesn't always throw NPE when a null is passed; whether it does depends on the state of the map.
Both scenarios are bad. If null is not a valid value for remappingFunction the method should consistently throw an exception regardless of the internal state of the object at the time of the call, and it should do so without doing unnecessary work that is pointless given that it is just going to throw. Finally, it is a good principle of clean, clear code to have the guard right up front so that anyone reviewing the source code can readily see that it will do so.
Even if the exception were currently thrown by every branch of code, it is possible that a future revision of the code would change that. Performing the check at the beginning ensures it will definitely be performed.
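Since Java 7 the guard itself is usually one line via java.util.Objects. A sketch loosely modeled on the computeIfPresent example above; the class, method, and message are invented:

```java
import java.util.Objects;
import java.util.function.BiFunction;

// Sketch of the guard-clause style: fail fast, before any other work,
// and name the offending parameter in the message.
class GuardDemo {
    static String remap(String key, BiFunction<String, String, String> remappingFunction) {
        Objects.requireNonNull(remappingFunction, "remappingFunction must not be null");
        // Only after the guard do we spend work on the happy path.
        return remappingFunction.apply(key, "oldValue");
    }
}
```

requireNonNull throws the same NullPointerException the JDK throws, but consistently and with a message that saves a trip to the debugger.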
In addition to the reasons listed by @shmosel's excellent answer ...
Performance: There may be / have been performance benefits (on some JVMs) to throwing the NPE explicitly rather than letting the JVM do it.
It depends on the strategy that the Java interpreter and JIT compiler take to detecting the dereferencing of null pointers. One strategy is to not test for null, but instead trap the SIGSEGV that happens when an instruction tries to access address 0. This is the fastest approach in the case where the reference is always valid, but it is expensive in the NPE case.
An explicit test for null in the code would avoid the SIGSEGV performance hit in a scenario where NPEs were frequent.
(I doubt that this would be a worthwhile micro-optimization in a modern JVM, but it could have been in the past.)
Compatibility: The likely reason that there is no message in the exception is for compatibility with NPEs that are thrown by the JVM itself. In a compliant Java implementation, an NPE thrown by the JVM has a null message. (Android Java is different.)
Apart from what other people have pointed out, it's worth noting the role of convention here. In C#, for example, you have the same convention of explicitly raising an exception in cases like this, but it's specifically an ArgumentNullException, which is somewhat more specific. (The C# convention is that NullReferenceException always represents a bug of some kind; quite simply, it shouldn't ever happen in production code. Granted, ArgumentNullException usually represents a bug too, but it could be more of a "you don't understand how to use the library correctly" kind of bug.)
So, basically, in C# NullReferenceException means that your program actually tried to use a null reference, whereas ArgumentNullException means that it recognized the value was wrong and didn't even bother to try to use it. The implications can actually be different (depending on the circumstances) because ArgumentNullException means that the method in question didn't have side effects yet (since it failed the method's preconditions).
Incidentally, if you're raising something like ArgumentNullException or IllegalArgumentException, that's part of the point of doing the check: you want a different exception than you'd "normally" get.
Either way, explicitly raising the exception reinforces the good practice of being explicit about your method's pre-conditions and expected arguments, which makes the code easier to read, use, and maintain. If you didn't explicitly check for null, I don't know if it's because you thought that no one would ever pass a null argument, you're counting it to throw the exception anyway, or you just forgot to check for that.
It is so you will get the exception as soon as you perpetrate the error, rather than later on when you're using the map and won't understand why it happened.
It turns a seemingly erratic error condition into a clear contract violation: The function has some preconditions for working correctly, so it checks them beforehand, enforcing them to be met.
The effect is, that you won't have to debug computeIfPresent() when you get the exception out of it. Once you see that the exception comes from the precondition check, you know that you called the function with an illegal argument. If the check were not there, you would need to exclude the possibility that there is some bug within computeIfPresent() itself that leads to the exception being thrown.
Obviously, throwing the generic NullPointerException is a really bad choice, as it does not signal a contract violation in and of itself. IllegalArgumentException would be a better choice.
Sidenote:
Java does have an assert keyword too (though it is disabled at runtime unless the JVM is started with -ea), and C/C++ programmers use assert() in this case, which is significantly better for debugging: it tells the program to crash immediately and as hard as possible should the provided condition evaluate to false. So, if you ran
void MyClass_foo(MyClass* me, int (*someFunction)(int)) {
assert(me);
assert(someFunction);
...
}
under a debugger, and something passed NULL into either argument, the program would stop right at the line telling which argument was NULL, and you would be able to examine all local variables of the entire call stack at leisure.
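A minimal Java counterpart (the assert is skipped at runtime unless the JVM is started with -ea, so it suits debugging rather than public API validation; the method is invented for illustration):

```java
import java.util.function.IntUnaryOperator;

// Sketch: Java's assert as the closest analogue to C's assert().
class AssertDemo {
    static int applyTwice(IntUnaryOperator f, int x) {
        assert f != null : "f must not be null"; // only checked under -ea
        return f.applyAsInt(f.applyAsInt(x));
    }
}
```

Without -ea the assert vanishes, which is exactly why the JDK uses explicit throws rather than asserts for argument checks in public methods.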
It's because it's possible for it not to happen naturally. Let's look at a piece of code like this:
boolean isUserAMoron(User user) {
    Connection c = UnstableDatabase.getConnection();
    if ("Moron".equals(user.name)) {
        // in this case we don't need to connect to the DB
        return true;
    } else {
        return c.makeMoronishCheck(user.id);
    }
}
(Of course there are numerous code-quality problems in this sample; sorry, too lazy to imagine a perfect one.)
A situation where c is never actually used, so no NullPointerException is thrown even though c == null, is possible.
In more complicated situations it becomes very hard to hunt down such cases. This is why a general check like if (c == null) throw new NullPointerException() is better.
It is intentional, to prevent further damage or getting into an inconsistent state.
Apart from all other excellent answers here, I'd also like to add a few cases.
You can add a message if you create your own exception
If you throw your own NullPointerException you can add a message (which you definitely should!)
The default message is null, both from new NullPointerException() and from all the methods that use it, for instance Objects.requireNonNull. If you print that null message it can even come out as an empty string...
A bit short and uninformative...
The stack trace will give a lot of information, but for the user to know what was null they have to dig up the code and look at the exact row.
Now imagine that NPE being wrapped and sent over the net, e.g. as a message in a web service error, perhaps between different departments or even organizations. Worst case scenario, no one may figure out what null stands for...
Chained method calls will keep you guessing
An exception will only tell you on what row the exception occurred. Consider the following row:
repository.getService(someObject.someMethod());
If you get an NPE and it points at this row, which one of repository and someObject was null?
Instead, checking these variables when you get them will at least point to a row where they are hopefully the only variable being handled. And, as mentioned before, even better if your error message contains the name of the variable or similar.
Errors when processing lots of input should give identifying information
Imagine that your program is processing an input file with thousands of rows and suddenly there's a NullPointerException. You look at the place and realize some input was incorrect... what input? You'll need more information about the row number, perhaps the column or even the whole row text to understand what row in that file needs fixing.
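A sketch of attaching that identifying information when rethrowing; the "each line is an integer" rule is made up purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: rethrow with the 1-based row number and offending text,
// keeping the original exception as the cause.
class LineParser {
    static List<Integer> parseAll(List<String> lines) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            String line = lines.get(i);
            try {
                out.add(Integer.parseInt(line.trim()));
            } catch (NumberFormatException e) {
                // The message now says exactly which row needs fixing.
                throw new IllegalArgumentException(
                        "bad input at line " + (i + 1) + ": \"" + line + "\"", e);
            }
        }
        return out;
    }
}
```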

Return value: boolean vs int flag vs exception

I need to define a set of functions as part of an interface for a public facing SDK and trying to figure out the best way of signalling to the caller code if something was successful or not, and if not, why not.
My first consideration was return value vs exception. I think I need to use a combination of both for this one. Exceptions for erroneous states/errors caused by bugs. Return value to be used for times when function ran as expected and just want to return the result of the call to the caller.
Now, for return type. I could use ints (flags) or booleans. It has been argued internally that booleans should be used as it is simpler, and no need for flags. Booleans limit you to specifying success or failure, with no explanation.
I would make two arguments for using flags instead. First, you can return more than two possible values. Allowing for { success, fail, fail_reason_1, fail_reason_2, fail_reason_3 }. Note as this is an interface for implementations on various hardware devices it is probably desirable to be able to notify why an operation failed. e.g. the connected device doesn't support beep, has no LCD, doesn't support embedded crypto etc.
Who knows what the requirements will be in the future. Returning a bool locks you in now, whereas using flags allows you greater flexibility in the future. So what if you never need more than two values in the future. At least you have the option.
Considering this will be a public facing SDK I want as much flexibility so as to prevent breaking changes in the future.
Thanks
In my opinion the difference between returning a value to indicate the result of a method call and throwing an exception is that a value is just a notification about what happened; the method call should be considered as having been performed successfully with regard to the contract it defines. For example, have a look at how boolean Set.add(E e) is defined.
If a method throws an exception, this should indicate either an incorrect use of the method or a call while the object / whole system was in an illegal state. For example, trying to buy something for an user while his account does not have enough credits.
An exception is perfectly suited for capturing the different failure types: either by an exception hierarchy or by adding properties to the exception like getFailureCode() or combining them.
I would not use flags to indicate a failure condition if the failure must be handled, because ignoring return values is much too easy and can be missed by programmers, while exceptions have to be ignored actively.
The short answer is that it varies a lot based on your domain, clients, and the specifics of what you're providing. If you haven't already, you need to sit down with your clients and figure out how they're going to be using the library. Just off the top of my head, here are some things I'd be thinking about:
First, all the failure types you identified are basically the same thing - UnsupportedOperationException, with a reason why. Are there other failure types? If not, and you use exceptions, you possibly want one exception class with a type/cause/whatever as an enum property.
Second, it looks like many/most/all of your methods are going to have one or more failure types like this. Is that true? If so, does your API provide any way of determining what features a device supports? Consider this:
// Best if I probably don't care about the result if it's not SUCCESS
final Result result = myApiObject.someMethod();
if (result == Result.BEEP_UNSUPPORTED) {
....
} else if (result == Result.NO_DISPLAY) {
....
} else if ...
and this:
// Best if I have to handle every possible failure condition, and I care what
// the failure type is
try {
myApiObject.someMethod();
} catch (final BeepUnsupportedException e) {
....
} catch (final NoDisplayException e) {
....
}
and this:
// Best if I have to consider every possible failure condition, but it probably
// doesn't matter what the failure type is
try {
myApiObject.someMethod();
} catch (final MyApiException e) {
if (Cause.BEEP_UNSUPPORTED == e.getFailureCode()) {
....
} else ....
and this:
// Best if I know in advance what features I may need
// someMethod() may still return response codes or throw an exception. In this case
// it's possibly OK to make it a RuntimeException, since clients are expected to
// poll for features.
if (myApiObject.supports(Feature.BEEP)
&& myApiObject.supports(Feature.DISPLAY)) {
myApiObject.someMethod();
} else ...
Which of these is closest to the way your clients will want to use your system?
Third, how fatal are these errors? I totally agree with @Harmlezz that failures which must be handled need to be exceptions. What neither of us can tell you without more information is whether your failures need to be handled or not. If clients will mostly be ignoring failures (they usually want the code to fail silently), or only care about SUCCESS vs !SUCCESS, then return codes are probably okay. If failures need to be handled, then you should be throwing a checked exception. You probably want to stay away from unchecked exceptions, since these failures aren't programming errors and may be handled correctly by clients.
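A sketch of the "one exception class with an enum property" option mentioned above; every name here is invented:

```java
// Sketch: one checked exception type carrying the failure reason as an
// enum, rather than a hierarchy of exception subclasses.
class DeviceException extends Exception {
    enum Reason { BEEP_UNSUPPORTED, NO_DISPLAY, NO_CRYPTO }

    private final Reason reason;

    DeviceException(Reason reason) {
        super("operation failed: " + reason);
        this.reason = reason;
    }

    // Named getFailureCode() rather than getCause(), since
    // Throwable.getCause() already exists and returns a Throwable.
    Reason getFailureCode() { return reason; }
}

class Device {
    private final boolean hasBeeper;

    Device(boolean hasBeeper) { this.hasBeeper = hasBeeper; }

    void beep() throws DeviceException {
        if (!hasBeeper) {
            throw new DeviceException(DeviceException.Reason.BEEP_UNSUPPORTED);
        }
        // ... drive the hardware ...
    }
}
```

New Reason values can be added later without breaking client catch blocks, which is part of the flexibility the asker wants.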

Which way of setting something when it is null, is faster?

Which is faster in Java, and why?
try {
object.doSomething()
} catch (NullPointerException e) {
if (object == null) {
object = new .....;
object.doSomething();
} else throw e;
}
or
if (object == null) {
object = new .....;
}
object.doSomething();
and why?
The code would be called often, and object is only null the first time it's called, so don't take the cost of the thrown NPE into account (it only happens once).
P.S. I know the second is better because of simplicity, readability, etc, and I'd surely go for that in real software. I know all about the evil of premature optimization, no need to mention it.
I'm merely curious about these little details.
You should absolutely use the latter way, not because it's faster, but because it's more idiomatic. Exceptions should not be used for control flow in your java programs.
This is purely anecdotal, but all the microbenchmarking I have ever done has shown that using exceptions for control flow won't be as performant as conditionals. It's probably impossible to support this as a generalization, though, and the JVM is very good at optimizing around things like this anyway, so YMMV.
Forget about speed - look at the size of the code in the first snippet versus the second.
Is the simpler option the best one? Easiest to read, takes up less space, etc. You should strive for code simplicity first, and then worry about speed once you've measured something as slow.
Besides, think about what the runtime needs to do in order to determine that it needs to throw a NullPointerException - it has to check if the current reference is null. So even without measuring, it would logically make sense that performing the check yourself is simpler, rather than leaving it up to the JRE to make the check and create a NullPointerException and unwind the stack.
Regardless of speed, the first way is not good programming practice. For example, what if object was not null but object.doSomething() resulted in the NullPointerException?
This is one reason why you should not use exceptions to control program flow!
To answer your question: version 1 is much slower when it explodes, because creating exceptions is quite expensive, but it is not faster than version 2 even when it doesn't, because the JVM must do the null check itself anyway, so you're not saving any time. The compiler is likely to optimize the code so it's no faster anyway.
Also Exceptions should be reserved for the exceptional. Initial state of null is not exceptional.
Use the lazy initialization pattern:
SomeClass getIt() {
if (it == null)
it = new SomeClass();
return it;
}
...
getIt().someMethod();
Check The Java Specialists' Newsletter - Issue 187, Cost of Causing Exceptions, for some interesting internal details.
A thrown exception (first example) is nearly always slower than normal control flow code (second example).
That aside, the second is much cleaner and easier to understand.
I'm going to say the second solution is faster. Not because I'm an expert on the JIT or VM but because it makes sense that a single branch-if-equal assembly-level routine is faster than looking up the object in memory, determining that it is null (the same test, I assume), throwing an exception and possibly mucking up the stack.

Which is better/more efficient: check for bad values or catch Exceptions in Java

Which is more efficient in Java: to check for bad values to prevent exceptions or let the exceptions happen and catch them?
Here are two blocks of sample code to illustrate this difference:
void doSomething(type value1) {
ResultType result = genericError;
if (value1 == badvalue || value1 == badvalue2 || ...) {
result = specificError;
} else {
DoSomeActionThatFailsIfValue1IsBad(value1);
// ...
result = success;
}
callback(result);
}
versus
void doSomething(type value1) {
ResultType result = genericError;
try {
DoSomeActionThatFailsIfValue1IsBad(value1);
// ...
result = success;
} catch (ExceptionType e) {
result = specificError;
} finally {
callback(result);
}
}
On the one hand, you're always doing a comparison. On the other hand, I honestly don't know what the internals of the system do to generate an exception, throw it, and then trigger the catch clause. It has the sound of being less efficient, but if it doesn't add overhead in the non-error case, then it's more efficient, on average. Which is it? Does it add similar checking anyway? Is that checking there in the implicit code added for exception handling, even with the additional layer of explicit checking? Perhaps it always depends on the type of exception? What am I not considering?
Let's also assume that all "bad values" are known -- that's an obvious issue. If you don't know all the bad values -- or the list is too long and not regular -- then exception handling may be the only way, anyway.
So, what are the pros and cons of each, and why?
Side questions to consider:
How does your answer change if the value is "bad" (would throw an exception) most of the time?
How much of this would depend on the specifics of the VM in use?
If this same question was asked for language-X, would the answer be different? (Which, more generally, is asking if it can be assumed checking values is always more efficient than relying on exception handling simply because it adds more overhead by current compilers/interpreters.)
(New) The act of throwing an exception is slow. Does entering a try block have overhead, even if an exception is not thrown?
Similarities on SO:
This is similar to the code sample in this answer, but states they are similar only in concept, not compiled reality.
The premise is similar to this question but, in my case, the requester of the task (e.g. "Something") isn't the caller of the method (e.g. "doSomething") (thus no returns).
And this one is very similar, but I didn't find an answer to my question.
And similar to far too many other questions to list, except:
I'm not asking about theoretical best practice. I'm asking more about runtime performance and efficiency (which should mean, for specific cases, there are non-opinion answers), especially on resource limited platforms. For instance, if the only bad value was simply a null object, would it be better/more efficient to check for that or just attempt to use it and catch the exception?
"How does your answer change if the value is "bad" (would throw an exception) most of the time?" I think that's the key right there. Exceptions are expensive as compared to comparisons, so you really want to use exceptions for exceptional conditions.
Similarly, your question about how this answer might change depending on the language/environment ties into that: The expense of exceptions is different in different environments. .Net 1.1 and 2.0 are incredibly slow the first time an exception is thrown, for instance.
Purely from an efficiency standpoint, and given your code examples, I think it depends on how often you expect to see bad values. If bad values are not too uncommon, it's faster to do the comparison because exceptions are expensive. If bad values are very rare, however, it may be faster to use the exception.
The bottom line, though, is that if you're looking for performance, profile your code. This block of code may not even be a concern. If it is, then try it both ways and see which is faster. Again, it depends on how often you expect to see bad values.
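A crude way to "try it both ways" might look like this. It's an unscientific sketch: JIT warm-up and dead-code elimination make the numbers indicative at best, and a harness like JMH gives far more trustworthy results.

```java
// Rough timing sketch: check-first vs. catch, with 50% "bad" values
// as in the question. Numbers are indicative only; prefer JMH.
public class CostSketch {

    static int byCheck(String s) {
        return (s == null) ? -1 : s.length();
    }

    static int byCatch(String s) {
        try {
            return s.length();
        } catch (NullPointerException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        String[] inputs = new String[1_000_000];
        for (int i = 0; i < inputs.length; i++) {
            inputs[i] = (i % 2 == 0) ? null : "x"; // half the values are "bad"
        }

        long t0 = System.nanoTime();
        long sum1 = 0;
        for (String s : inputs) sum1 += byCheck(s); // summed so the JIT can't discard the work
        long checkNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        long sum2 = 0;
        for (String s : inputs) sum2 += byCatch(s);
        long catchNs = System.nanoTime() - t0;

        System.out.println("check: " + checkNs + " ns, catch: " + catchNs + " ns");
    }
}
```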
I could find surprisingly little current information about the cost of throwing exceptions. Pretty obviously there must be some: you are creating an object, and probably capturing stack trace information.
In the specific example you talk about:
if (value1 == badValue1 || value1 == badValue2 /* ... */) {
    result = specificError;
} else {
    doSomeActionThatFailsIfValue1IsBad(value1);
    // ...
    result = success;
}
The problem for me here is that you are in danger of (probably incompletely) replicating logic in the caller that should be owned by the method you are calling.
Hence I would not perform those checks. Your code is not performing an experiment; it "knows" the data it's supposed to be sending down, I suppose? Hence the likelihood of the exception being thrown should be low. So keep it simple: let the callee do the checks.
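The "let the callee own its checks" idea might be sketched like this (a hypothetical `unitPrice` example, not taken from the question):

```java
// Validation lives in one place: the callee. Callers don't replicate it.
public class Ownership {

    // The callee owns its preconditions and documents them in one place.
    static double unitPrice(double total, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive: " + quantity);
        }
        return total / quantity;
    }

    // The caller stays simple; it does not re-implement (and risk drifting
    // from) the callee's rules about what counts as a bad quantity.
    static double priceOrZero(double total, int quantity) {
        try {
            return unitPrice(total, quantity);
        } catch (IllegalArgumentException e) {
            return 0.0;
        }
    }
}
```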
In my opinion you should have try/catch blocks around anything that could potentially throw exceptions, if only to have a safely running system. You have finer control of error responses if you check for possible data errors first. So I suggest doing both.
Well, exceptions are more expensive, yes, but for me it's about weighing the cost of efficiency against the cost of bad design. Unless your use case demands it, always stick to the best design.
The question really is: when do you throw an exception? In exceptional situations.
If your arguments are not in the range you're looking for, I'd suggest returning an error code or a boolean.
For instance, a method:
public int isAuthenticated(String username, String password)
{
    if (!validated(username, password))
    {
        // just an error; log it
        return -2;
    }
    // contacting the database here
    if (!canConnectToDb())
    {
        // woww this is HUUGE
        throw new DBException("cannot connect"); // or something like that
    }
    // validate against the db here
    if (validatedAgainstDb(username, password))
    {
        return 0;
    }
    // etc etc
    return -1;
}
That's my 2 cents.
My personal opinion is that exceptions indicate that something is broken: this might well be an API called with illegal arguments, division by zero, a file not found, etc. This means that many exceptions could be avoided by checking values before making the call.
For the reader of your code (again my personal opinion) it is much easier to follow the flow if you can be certain that it is not sidestepped by all kinds of strange throws (which are essentially gotos in disguise when used as part of the program flow). You simply have less to think about.
This is in my opinion a good thing. "Smart" code is hard to wrap your head around.
On a side note - JVM's get much much smarter - coding for efficiency usually doesn't pay off.
Normally, one would assume that try-catch is more expensive because it looks heavier in the code, but that entirely depends on the JIT. My guess is that it's impossible to tell without having a real case and some performance measurements. The comparisons could be more expensive, especially when you have many values, for example, or because you have to call equals() since == won't work in many cases.
As for which one you should choose (as in "code style"), my answer is: make sure that the user gets a useful error message when it fails. Anything else is a matter of taste, and I can't give you rules for that.
To be safe, assume exceptions are expensive. They often are, and if they aren't it will at least push you towards using exceptions wisely. (Entering a try block is usually trivially cheap, since implementors do their best to make it so, even at the cost of making exceptions more expensive. After all, if exceptions are used properly, the code will enter the try block many times more often than it will throw.)
More importantly, exceptions are a style issue. Exceptions for exceptional conditions make code simpler because there's less error-checking code, so the actual functionality is clearer and more compact.
However, if exceptions might be thrown in more normal circumstances, there's invisible flows of control that the reader has to keep in mind, comparable to Intercal's COME FROM...UNLESS... statement. (Intercal was one of the very early joke languages.) This is very confusing, and can easily lead to misreading and misunderstanding the code.
My advice, which applies to every language and environment I know about:
Don't worry about efficiency here. There are strong reasons besides efficiency for using exceptions in a way that will prove efficient.
Use try blocks freely.
Use exceptions for exceptional conditions. If an exception is likely, test for it and handle it in another way.
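The last point might be sketched like this (a hypothetical `parsePort`, assuming bad input is the common case):

```java
// When bad input is likely, test for it instead of catching.
public class LikelyBad {

    // Pre-validate: the common "bad" path is a plain test, and the regex
    // (1-5 digits) guarantees parseInt below cannot throw.
    static Integer parsePort(String raw) {
        if (raw == null || !raw.matches("\\d{1,5}")) {
            return null; // the likely case, handled without exception machinery
        }
        int port = Integer.parseInt(raw);
        return (port <= 65535) ? port : null;
    }
}
```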
A question like this is like asking,
"Is it more efficient to write an interface or a base class with all abstract functions?"
Does it matter which is more efficient? Only one of them is the right way for a given situation.
Note that if your code doesn't throw exceptions, that doesn't always imply the input is within bounds. Relying on exceptions thrown by standard Java (API + JVM), such as NullPointerException or ArrayIndexOutOfBoundsException, is a very unhealthy way to validate input. Garbage in sometimes generates garbage-but-no-exception out.
And yes, exceptions are quite expensive. They should not be thrown during a normal processing flow.
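For example, here is a hypothetical `Account` illustrating "legal garbage": a negative amount triggers no NullPointerException or bounds check anywhere in the JVM, so it has to be rejected explicitly.

```java
// A negative amount would throw nothing on its own; without the explicit
// check it silently turns a withdrawal into a deposit.
public class Account {
    private long balanceCents = 10_000;

    void withdraw(long amountCents) {
        if (amountCents <= 0 || amountCents > balanceCents) {
            throw new IllegalArgumentException("bad amount: " + amountCents);
        }
        balanceCents -= amountCents;
    }

    long balance() {
        return balanceCents;
    }
}
```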
From an optimization standpoint, I think you're going to find it's probably a wash. They'll both perform all right; I don't think exception throwing is ever going to be your bottleneck. You should probably be more concerned with what Java is designed to do (and what other Java programmers will expect), and that is throwing exceptions. Java is very much designed around throwing/catching exceptions, and you can bet the designers made that process as efficient as possible.
I think it's mostly a philosophy and language culture sort of thing. In Java, the general accepted practice is that the method signature is a contract between your method and the code calling it. So if you receive an improper value, you generally throw an unchecked exception and let it be dealt with at a higher level:
public void setAge(int age)
{
if(age < 0)
{
throw new IllegalArgumentException("Age can't be negative");
}
this.age = age;
}
In this case, the caller broke their end of the contract, so you spit their input back at them with an exception. The "throws" clause is for use when you can't fulfill your end of the contract for some reason.
public void readFile(String filename) throws IOException
{
    File myfile = new File(filename);
    try (FileInputStream fis = new FileInputStream(myfile))
    {
        //do stuff
        fis.read();
        //do more stuff
    }
}
In this case, as the method writer, you've broken your end of the contract because the user gave you valid input, but you couldn't complete their request due to an IOException.
Hope that kinda puts you on the right track. Good luck!
