In these specific scenarios, are asserts more appropriate than exceptions?
It is my understanding that assert should be used when the program is FUBAR to a degree where it cannot recover and will exit.
I was also told to always throw exceptions for the sake of clarity and error-message handling.
Is there a fine line between when to use each? Is there an example where assert must be used in place of an exception unconditionally?
public void subscribe(DataConsumer c) throws IllegalArgumentException {
    if (c == null) {
        // Almost certainly FUBAR
        throw new IllegalArgumentException("Can't subscribe null as a DataConsumer. Object not initialized");
    }
    if (dataConsumerList == null) {
        // Definitely FUBAR
        throw new IllegalArgumentException("Nothing to subscribe to. DataConsumerList is null");
    }
    dataConsumerList.add(c);
}
Personally I'm not keen on using assertions for this sort of thing, simply because they can be turned off. Some places use assertions when running tests, but then disable them for production for the sake of speed. To me this is like taking your driving test as normal, but then removing your seatbelt when you get on the motorway.
An assertion is only going to throw an exception anyway, of course. If you absolutely want to take the JVM down right now, you'd need to use something like Runtime.halt.
So I'm a fan of exceptions here, although I'd typically use a NullPointerException when given a null argument, and if dataConsumerList is part of your state then I would personally use IllegalStateException to differentiate that situation. (It's a shame that Java doesn't have the same ArgumentNullException that .NET has, given what a common check it is.)
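For illustration, here's a sketch of that approach using only the JDK, with Objects.requireNonNull for the argument and an IllegalStateException for the state check. DataConsumer is the type from the question; the DataPublisher class and its field setup are invented for this sketch.
import java.util.List;
import java.util.Objects;

public class DataPublisher {
    private final List<DataConsumer> dataConsumerList;

    public DataPublisher(List<DataConsumer> dataConsumerList) {
        this.dataConsumerList = dataConsumerList;
    }

    public void subscribe(DataConsumer c) {
        // NullPointerException for a bad argument, with a descriptive message
        Objects.requireNonNull(c, "Can't subscribe null as a DataConsumer");
        if (dataConsumerList == null) {
            // the list is part of this object's state, so IllegalStateException fits
            throw new IllegalStateException("Nothing to subscribe to. DataConsumerList is null");
        }
        dataConsumerList.add(c);
    }
}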
Guava has the useful Preconditions class which lets you write this more concisely:
public void subscribe(DataConsumer c) throws IllegalArgumentException {
    Preconditions.checkNotNull(c,
        "Can't subscribe null as a DataConsumer. Object not initialized");
    Preconditions.checkState(dataConsumerList != null,
        "Nothing to subscribe to. DataConsumerList is null");
    dataConsumerList.add(c);
}
General rule (copied from here)
assertions should protect from (not always obvious) mistakes of the developer, e.g. using a pointer despite its being NULL.
exceptions are a way to handle errors that may legitimately occur at runtime, e.g. the failure of trying to connect to some server (which may not respond for various reasons).
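To illustrate that rule, a minimal sketch in Java (the class, the private helper, and the connect example are invented for illustration):
import java.io.IOException;
import java.net.Socket;

public class RuleOfThumb {

    // Developer mistake: a private helper that must never receive null.
    // The assert documents (and, when run with -ea, verifies) that assumption.
    private static int length(String internalValue) {
        assert internalValue != null : "caller bug: internalValue must not be null";
        return internalValue.length();
    }

    // Legitimate runtime failure: the server may simply be unreachable,
    // so the caller gets an exception it can catch and handle.
    public static Socket connect(String host, int port) throws IOException {
        return new Socket(host, port); // throws IOException if the connection fails
    }
}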
And there is a better way of writing the code from the question, using the google-guava Preconditions class and its checkNotNull() method.
public void subscribe(DataConsumer c) throws IllegalArgumentException {
    checkNotNull(c, "Can't subscribe null as a DataConsumer. Object not initialized");
    checkNotNull(dataConsumerList, "Nothing to subscribe to. DataConsumerList is null");
    dataConsumerList.add(c);
}
If you could put this in English terms, use assert for "gotta" (Got to, Must) and exceptions for "otta" (Ought to, should).
Use the assert for show-stopping, critical conditions that must be true for the execution to continue. Examples might be that a division happens correctly (think of the Intel chip floating point bug) or that your database connection is not null after you have correctly opened it. If these have occurred, then program execution should not continue.
Use the throw for foreseeable errors that your method may handle. The throw is a part of a contract that declares to you and other programmers that certain types of errors may be encountered (and that it's not your responsibility).
In your example, my guess is that a null consumer or an empty list should never happen under normal circumstances. If my guess is correct, then you would want to use an assert here, declaring that subscribe() will handle it.
If my guess is wrong and a null consumer happens, say 1 out of 50 times, then the throw would be better and you would be declaring that subscribe() forms a contract with a calling method, whereby the calling method handles the error.
The Java technote Programming With Assertions contains this explicit line with regard to usage:
Do not use assertions for argument checking in public methods.
That should be a pretty definitive answer to your question.
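In code, that guidance amounts to something like the following sketch (the Account example is invented, not taken from the technote): explicit exceptions for public argument checks, assert only for internal invariants.
public class Account {
    private long balanceCents;

    // Public method: validate the argument with an exception, never with assert,
    // because assertions may be disabled at runtime.
    public void deposit(long amountCents) {
        if (amountCents < 0) {
            throw new IllegalArgumentException("amountCents must be >= 0, was " + amountCents);
        }
        balanceCents += amountCents;
        // Internal invariant: an assert is acceptable here.
        assert balanceCents >= 0 : "balance overflowed";
    }
}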
Related
When reading JDK source code, I find it common that the author checks whether the parameters are null and then throws a new NullPointerException() manually.
Why do they do it? I think there's no need to do so, since a NullPointerException will be thrown anyway as soon as any method is called on the null reference. (Here is some source code from HashMap, for instance:)
public V computeIfPresent(K key,
                          BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
        throw new NullPointerException();
    Node<K,V> e; V oldValue;
    int hash = hash(key);
    if ((e = getNode(hash, key)) != null &&
        (oldValue = e.value) != null) {
        V v = remappingFunction.apply(key, oldValue);
        if (v != null) {
            e.value = v;
            afterNodeAccess(e);
            return v;
        }
        else
            removeNode(hash, key, null, false, true);
    }
    return null;
}
There are a number of reasons that come to mind, several being closely related:
Fail-fast: If it's going to fail, best to fail sooner rather than later. This allows problems to be caught closer to their source, making them easier to identify and recover from. It also avoids wasting CPU cycles on code that's bound to fail.
Intent: Throwing the exception explicitly makes it clear to maintainers that the error is there purposely and the author was aware of the consequences.
Consistency: If the error were allowed to happen naturally, it might not occur in every scenario. If no mapping is found, for example, remappingFunction would never be used and the exception wouldn't be thrown. Validating input in advance allows for more deterministic behavior and clearer documentation.
Stability: Code evolves over time. Code that encounters an exception naturally might, after a bit of refactoring, cease to do so, or do so under different circumstances. Throwing it explicitly makes it less likely for behavior to change inadvertently.
It is for clarity, consistency, and to prevent extra, unnecessary work from being performed.
Consider what would happen if there wasn't a guard clause at the top of the method. It would always call hash(key) and getNode(hash, key) even when null had been passed in for the remappingFunction before the NPE was thrown.
Even worse, if the if condition is false then we take the else branch, which doesn't use the remappingFunction at all, which means the method doesn't always throw NPE when a null is passed; whether it does depends on the state of the map.
Both scenarios are bad. If null is not a valid value for remappingFunction the method should consistently throw an exception regardless of the internal state of the object at the time of the call, and it should do so without doing unnecessary work that is pointless given that it is just going to throw. Finally, it is a good principle of clean, clear code to have the guard right up front so that anyone reviewing the source code can readily see that it will do so.
Even if the exception were currently thrown by every branch of code, it is possible that a future revision of the code would change that. Performing the check at the beginning ensures it will definitely be performed.
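To make the state-dependence concrete, here is a contrived sketch (not the JDK code; GuardDemo and its methods are invented) showing how a missing guard makes the NPE depend on the map's contents:
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class GuardDemo {
    private final Map<String, String> map = new HashMap<>();

    // Without a guard, a null fn only fails when the key is present,
    // because fn is never dereferenced otherwise.
    public String updateNoGuard(String key, UnaryOperator<String> fn) {
        String old = map.get(key);
        return (old == null) ? null : fn.apply(old);
    }

    // With a guard, a null fn always fails, regardless of the map's state.
    public String updateGuarded(String key, UnaryOperator<String> fn) {
        if (fn == null) {
            throw new NullPointerException("fn must not be null");
        }
        String old = map.get(key);
        return (old == null) ? null : fn.apply(old);
    }
}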
In addition to the reasons listed in @shmosel's excellent answer ...
Performance: There may be / have been performance benefits (on some JVMs) to throwing the NPE explicitly rather than letting the JVM do it.
It depends on the strategy that the Java interpreter and JIT compiler take to detecting the dereferencing of null pointers. One strategy is to not test for null, but instead trap the SIGSEGV that happens when an instruction tries to access address 0. This is the fastest approach in the case where the reference is always valid, but it is expensive in the NPE case.
An explicit test for null in the code would avoid the SIGSEGV performance hit in a scenario where NPEs were frequent.
(I doubt that this would be a worthwhile micro-optimization in a modern JVM, but it could have been in the past.)
Compatibility: The likely reason that there is no message in the exception is for compatibility with NPEs that are thrown by the JVM itself. In a compliant Java implementation, an NPE thrown by the JVM has a null message. (Android Java is different.)
Apart from what other people have pointed out, it's worth noting the role of convention here. In C#, for example, you also have the same convention of explicitly raising an exception in cases like this, but it's specifically an ArgumentNullException, which is somewhat more specific. (The C# convention is that NullReferenceException always represents a bug of some kind: quite simply, it should never happen in production code. Granted, ArgumentNullException usually indicates a bug too, but it's more along the lines of a "you don't understand how to use the library correctly" bug.)
So, basically, in C# NullReferenceException means that your program actually tried to use a null reference, whereas ArgumentNullException means that the method recognized that the value was wrong and didn't even bother to try to use it. The implications can actually be different (depending on the circumstances), because ArgumentNullException means that the method in question had no side effects yet (since it failed its preconditions).
Incidentally, if you're raising something like ArgumentNullException or IllegalArgumentException, that's part of the point of doing the check: you want a different exception than you'd "normally" get.
Either way, explicitly raising the exception reinforces the good practice of being explicit about your method's preconditions and expected arguments, which makes the code easier to read, use, and maintain. If you don't explicitly check for null, I don't know whether it's because you thought no one would ever pass a null argument, because you're counting on it to throw the exception anyway, or because you just forgot to check for it.
It is so you will get the exception as soon as you perpetrate the error, rather than later on when you're using the map and won't understand why it happened.
It turns a seemingly erratic error condition into a clear contract violation: The function has some preconditions for working correctly, so it checks them beforehand, enforcing them to be met.
The effect is, that you won't have to debug computeIfPresent() when you get the exception out of it. Once you see that the exception comes from the precondition check, you know that you called the function with an illegal argument. If the check were not there, you would need to exclude the possibility that there is some bug within computeIfPresent() itself that leads to the exception being thrown.
Obviously, throwing the generic NullPointerException is a really bad choice, as it does not signal a contract violation in and of itself. IllegalArgumentException would be a better choice.
Sidenote:
I don't know whether Java allows this (I doubt it), but C/C++ programmers use an assert() in this case, which is significantly better for debugging: It tells the program to crash immediately and as hard as possible should the provided condition evaluate to false. So, if you ran
void MyClass_foo(MyClass* me, int (*someFunction)(int)) {
assert(me);
assert(someFunction);
...
}
under a debugger, and something passed NULL into either argument, the program would stop right at the line telling which argument was NULL, and you would be able to examine all local variables of the entire call stack at leisure.
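For what it's worth, Java does have a comparable mechanism: the assert statement, which throws an AssertionError when assertions are enabled with the -ea flag (they are disabled by default). A rough sketch of the same checks, assuming java.util.function.IntUnaryOperator stands in for the C function pointer:
void foo(MyClass me, java.util.function.IntUnaryOperator someFunction) {
    assert me != null : "me was null";                     // fails fast when run with -ea
    assert someFunction != null : "someFunction was null";
    // ...
}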
It's because it's possible for the exception not to happen naturally. Let's look at a piece of code like this:
boolean isUserAMoron(User user) {
    Connection c = UnstableDatabase.getConnection();
    if ("Moron".equals(user.name)) {
        // In this case we don't need to connect to DB
        return true;
    } else {
        return c.makeMoronishCheck(user.id);
    }
}
(Of course there are numerous code-quality problems in this sample; sorry, too lazy to imagine a perfect one.)
A situation where c is never actually used, and therefore no NullPointerException is thrown even though c == null, is entirely possible.
In more complicated situations it becomes very hard to hunt down such cases. This is why a general check like if (c == null) throw new NullPointerException() is better.
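A sketch of the same sample with that general check added up front (still ignoring the other code-quality issues in the example):
boolean isUserAMoron(User user) {
    if (user == null) {
        throw new NullPointerException("user must not be null");
    }
    Connection c = UnstableDatabase.getConnection();
    if (c == null) {
        // fail consistently, even on the branch below that never touches c
        throw new NullPointerException("could not obtain a database connection");
    }
    if ("Moron".equals(user.name)) {
        // In this case we don't need the DB at all
        return true;
    }
    return c.makeMoronishCheck(user.id);
}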
It is intentional, to prevent further damage or getting into an inconsistent state.
Apart from all other excellent answers here, I'd also like to add a few cases.
You can add a message if you create your own exception
If you throw your own NullPointerException you can add a message (which you definitely should!)
The default message is null, both from new NullPointerException() and from all the methods that use it, for instance Objects.requireNonNull. If you print that null it can even translate to an empty string...
A bit short and uninformative...
The stack trace will give a lot of information, but for the user to know what was null they have to dig up the code and look at the exact row.
Now imagine that NPE being wrapped and sent over the net, e.g. as a message in a web service error, perhaps between different departments or even organizations. Worst case scenario, no one may figure out what null stands for...
Chained method calls will keep you guessing
An exception will only tell you on what row the exception occurred. Consider the following row:
repository.getService(someObject.someMethod());
If you get an NPE and it points at this row, which one of repository and someObject was null?
Instead, checking these variables when you get them will at least point to a row where they are hopefully the only variable being handled. And, as mentioned before, even better if your error message contains the name of the variable or similar.
Errors when processing lots of input should give identifying information
Imagine that your program is processing an input file with thousands of rows and suddenly there's a NullPointerException. You look at the place and realize some input was incorrect... what input? You'll need more information about the row number, perhaps the column or even the whole row text to understand what row in that file needs fixing.
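Putting those points together, here is a hedged sketch of the kind of early check and message this answer argues for. The types Repository and Widget and the method names are invented for illustration; only repository and someObject come from the example above.
void processRow(int rowNumber, String rawLine, Repository repository, Widget someObject) {
    // name each reference in the message, so even a wrapped or forwarded NPE stays meaningful
    java.util.Objects.requireNonNull(repository,
            "repository was null while processing row " + rowNumber);
    java.util.Objects.requireNonNull(someObject,
            "someObject was null while processing row " + rowNumber + ": " + rawLine);
    repository.getService(someObject.someMethod());
}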
Guava offers helper functions to check the preconditions but I could not find helper functions to check intermediate results.
private void foo(String param) {
    checkNotNull(param, "Required parameter is not set");
    int x = get(param);
    if (x == -1) {
        throw new RuntimeException("This should never have happened and indicates a bug.");
    }
}
Should I wrap the if (...) {....} part in my own helper?
Or should I use checkState from Guava?
Or should I view the failure of get() as a consequence of param and use checkArgument?
Should I use asserts in these cases?
Or am I missing something?
It's somewhere between a matter of preference and a matter of convention.
Generally, people will use asserts to indicate programming errors; that is, "if I did my job right, then a non-null param should never result in a -1 from get, regardless of user input or other outside forces." I treat them almost as comments that can optionally be verified at runtime.
On the other hand, if get might return -1 in some cases, but that input is invalid, then I would generally throw an IllegalArgumentException, and checkArgument is a perfectly reasonable way to do this. One drawback this has is that when you later catch that, it could have come from pretty much anywhere. Consider:
try {
    baz();
    bar();
    foo(myInput);
} catch (IllegalArgumentException e) {
    // Where did this come from!?
    // It could have come from foo(myInput), or baz(), or bar(),
    // or some method that any of them invoked, or really anywhere
    // in that stack.
    // It could be something totally unrelated to user input, just
    // a bug somewhere in my code.
    // Handle it somehow...
}
In cases where that matters -- for instance, you want to pop up a helpful note to the user that they're not allowed to enter -1 in their input form -- you may want to throw a custom exception so that you can more easily catch it later:
try {
    baz();
    bar();
    foo(myInput);
} catch (BadUserInputException e) {
    reportError("Bad input: " + e.getMessage());
    log.info("recorded bad user input", e);
}
As for checkState, it doesn't really sound right to me. That exception usually implies that the problem was the state that this was in (or some other, more global state in the application). From the docs:
Signals that a method has been invoked at an illegal or inappropriate time.
In your case, a -1 is never appropriate, so checkState is misleading. Now, if it had been:
if (x == -1 && !allowNegativeOne()) { ... }
...then that would be more appropriate, though it still has the drawback that IllegalArgumentException had above.
So, lastly, there's the question of whether you should just keep the if as it is, or use a helper method. That really comes down to taste, how complex the check is, and how often it's used (e.g. in other methods). If the check is as simple as x == -1 and that check isn't ever performed by other methods (so code reuse is not an issue), I would just keep the if.
If the get method is simply converting the string to an int, then it should do the validation there, preferably throwing an IllegalArgumentException or some such RuntimeException. With the above you are also mixing levels of abstraction in your method. E.g. your checkNotNull abstracts away the checking of param for null, but the checking of param as an int is split across the get method and the foo method. Why not have one checkPreCondition type method? E.g.
private void paramShouldBeNonNullInt(String value) {
    if (value == null) throw new IllegalArgumentException("value was null");
    try {
        Integer.parseInt(value);
    } catch (NumberFormatException e) {
        throw new IllegalArgumentException("value was not an integer");
    }
}
First of all, you need to make a distinction between contracts (e.g. assertions / programming errors) and error handling (e.g. recoverable exceptions that could and should be caught and recovered from).
If you have the need to check an intermediate result, it seems like you don't trust the invoked service and you want to make sure your assumptions hold. Right? This should be expressed as an assertion, and Guava doesn't have very good support for that.
Have a look at valid4j. Found here https://github.com/helsing/valid4j and here http://www.valid4j.org.
I would then have expressed the code like this (using valid4j's support for hamcrest-matchers):
private int getFoo(String param) {
    require(param, notNullValue());  // Violation means programming error at client
    int x = get(param);
    ensure(x, not(equalTo(-1)));     // Violation means programming error at supplier
    return x;
}
Some other excellent answers here.
From the Preconditions javadoc:
Precondition exceptions are used to signal that the calling method has made an error. (...) Postcondition or other invariant failures should not throw these types of exceptions.
So ...
Should I wrap the if (...) {....} part in my own helper?
No, existing facilities should be good enough.
Or should I use checkState from Guava?
Yes possibly: if parameters need to be loaded from a file before this method is called, then that would be part of the contract of how this class must be used.
Or should I view the failure of get() as a consequence of param and use checkArgument?
Yes possibly: e.g. if there was some formatting restriction on the syntax of parameters. (Although perhaps that would go inside get())
Should I use asserts in these cases?
Yes. If it's not a precondition check like the above, then normally I'd just use an assert here. Don't forget you can still add a message:
assert x != -1 : "Indicates a bug.";
I find this appropriate to document expectations and verify the internal / private implementation of a class or method.
If you want to make that a runtime check, you could do if (...) throw new AssertionError(...), but that's probably only necessary if you're working with dodgy code that you don't trust.
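Spelled out against the question's foo(), that runtime variant might look like this sketch:
private void foo(String param) {
    checkNotNull(param, "Required parameter is not set");
    int x = get(param);
    if (x == -1) {
        // unlike assert, this check cannot be switched off at runtime
        throw new AssertionError("get(" + param + ") returned -1; this indicates a bug");
    }
}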
Can you please help me better understand what is an appropriate use of "assert" vs. throwing an exception? When is each scenario appropriate?
Scenario 1
CODE
public Context(Algorythm algo) {
    if (algo == null) {
        throw new IllegalArgumentException("Failed to initialize Context");
    }
    this.algo = algo;
}
TEST
public void testContext_null() {
    try {
        context = new Context(null);
        fail();
    } catch (IllegalArgumentException e) {
        assertNotNull(e);
    }
}
Scenario 2
CODE
public Context(Algorythm algo) {
    assert (algo != null);
    this.algo = algo;
}
TEST
public void testContext_null() {
    try {
        context = new Context(null);
        fail();
    } catch (AssertionFailedError e) {
        assertNotNull(e);
    }
}
The main differences with assert are:
the ability to turn selected checks on/off by class/package;
the error that gets thrown.
assert is more appropriate for checks which will be turned off in production.
If you want a check which runs every time, especially when validating data from an input, you should use an explicit check which always runs.
Assert is a macro (in C/C++; a function or statement in other languages) that validates a given expression as true or false, and raises an error for false values.
Assert is something to use when debugging an application, like when you must check whether a math expression really gives you an appropriate value, or whether an object/structure member is not null or is missing something important, and things like that.
Throwing an Exception is more of a real error-treatment mechanism. Exceptions are errors too and can stop your application, but they are used as the (let's say) "retail version" error treatment of the application. That's because Exceptions can be caught and presented differently to the user, with a short non-technical message instead of symbols and memory addresses, while you can just serialize the details into an app log, for example.
On the other hand, asserts will just stop the running process and give you a message like "Assertion failed on source_file.ext, line X. The process will be terminated." And that's not user-friendly :)
The assert keyword should be used when failure to meet a condition violates the integrity of the program. These are intended to be non-recoverable error situations.
Exceptions, on the other hand, alert calling methods to the presence and location of an error but can be handled or ignored at the programmer's discretion.
When testing, you should use the Assert functions when a condition must be met for a test to pass. If you're expecting an exception in a particular test, JUnit 4 has an annotation to signify that a test should throw a particular Exception:
@Test(expected = MyException.class)
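For example, a sketch of that style against the question's Context constructor (Scenario 1, which throws IllegalArgumentException), in JUnit 4:
import org.junit.Test;

public class ContextTest {

    // passes only if the constructor throws IllegalArgumentException
    @Test(expected = IllegalArgumentException.class)
    public void testContext_null() {
        new Context(null);
    }
}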
Outside of test code, asserts are generally a bad idea. The reason is that unless there are very strict company guidelines in place, you invariably end up with mixed usage, which is bad. There are basically two usage scenarios for assert:
extra, possibly slow checks which will be turned off in production
normal, quick code sanity checks which should never be disabled (like requiring a given method parameter to be non-null)
As long as you always follow one of the scenarios, things are fine. However, if your code base ends up with both scenarios, then you are stuck: you have asserts which follow scenario 2 which you don't want to disable, and you have asserts which follow scenario 1 (and are slowing down your production code) which you do want to disable. What to do?
Most codebases I have worked with that used asserts in normal code never ended up disabling them in the production build, for exactly this reason. Therefore, my recommendation is always to avoid them outside of test code: use normal exceptions for the normal code, and put the extra, possibly slow checks (with asserts) in separate test code.
I'll explain what I mean by input error checking.
Say you have a function doSomething(x).
If the function completes successfully doSomething does something and returns nothing. However, if there are errors I'd like to be notified. That is what I mean by error checking.
I'm looking for, in general, the best way to check for errors. I've thought of the following solutions, each with a potential problem.
1. Flag error checking. If doSomething(x) completes successfully, it returns null. Otherwise, it returns a boolean or an error string. Problem: side effects.
2. Throwing an exception. Throw an exception if doSomething(x) encounters an error. Problem: if you are performing error checking for parameters only, throwing an IllegalArgumentException seems inappropriate.
3. Validating input prior to the function call. If the error checking is only meant for the parameters of the function, then you can call a validator function before calling doSomething(x). Problem: what if a client of the class forgets to call the validator function before calling doSomething(x)?
I often encounter this problem and any help or a point in the right direction would be much appreciated.
Throwing an exception is the best way.
If you are performing error checking for parameters only, throwing an
IllegalArgumentException seems inappropriate.
Why? That's the purpose of this Exception.
1. Flag error checking
This is appropriate in some cases, depending on what you mean by an "error".
An example from the API: If you try to add an object to a Set, which already contains another object which equals the new object, the add method sort of "fails" and indicates this by returning false. (Note that we're on a level at which it's technically not even an "error"!)
2. Throwing an exception
This is the default option.
The question is now: should you go for a checked exception (for which you need a throws declaration or try/catch clauses) or an unchecked exception (an exception that extends RuntimeException)? There are a few rules of thumb here.
From Java Practices -> Checked versus unchecked exceptions:
Unchecked exceptions: Represent defects in the program (bugs) - often invalid arguments passed to a non-private method.
Checked exceptions: Represent invalid conditions in areas outside the immediate control of the program (invalid user input, database problems, network outages, absent files)
Note that IllegalArgumentException is an unchecked exception, perfectly suitable for throwing when arguments are not as they should be.
If you want to throw a checked exception, you could A) roll your own by extending Exception, B) use some existing checked exception or C) "chain" a runtime exception in, for instance, an IOException: throw new IOException(new IllegalArgumentException("reason goes here..."));
3. Validating input prior to function call
Relying on the fact that the client should have sanitized / checked his arguments before the call seems like a bad idea to me.
Your second suggestion ("Throwing an exception") is the best choice. The other two options rely on the invoker either doing something before ("Validating input prior to function call") or after ("Flag error checking") the call to the method. Either way, the extra task is not mandated by the compiler so someone invoking the function isn't forced to call the "extra thing" so problems are not spotted till run-time.
As for "Throwing an Exception" and your suggested 'problem', well the answer is throw appropriate exception types for the code. If the input parameters are invalid, then throw an InvalidArgumentException (since that's the appropriate error). If the exception is for functionality (e.g. cannot open network connection), use another exception type or create your own.
I agree with throwing exceptions. I want to add another option that combines #2 and #3 - the proxy pattern. That way your code stays fairly cohesive - validation in one place and business logic in another. This makes sense if you have a large set of calls that need to be validated.
Create a proxy to handle validation. Have it delegate all calls to the actual implementation of your business logic interface after it validates, otherwise it can throw exceptions if something does not validate.
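A minimal sketch of that proxy idea (the Service interface, the String parameter, and the validation rule are all invented for illustration):
interface Service {
    void doSomething(String x);
}

class ValidatingServiceProxy implements Service {
    private final Service delegate; // the real business-logic implementation

    ValidatingServiceProxy(Service delegate) {
        this.delegate = delegate;
    }

    @Override
    public void doSomething(String x) {
        // validation lives here, in one place
        if (x == null || x.isEmpty()) {
            throw new IllegalArgumentException("x must be a non-empty string");
        }
        delegate.doSomething(x); // only reached once validation has passed
    }
}
Callers are handed the proxy rather than the implementation, so they cannot forget to validate.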
I usually decide which method to use based on the type of interface.
User interface (GUI): I validate before calling business methods, because the user wants to know what was wrong.
On technical interfaces between components or systems, the interface should have been tested and should work properly; in this case I throw exceptions.
Exceptions is the way to go. Your stated problem with exceptions can be mitigated by the proper implementation of exception throwing / handling. Use exceptions to your advantage by validating parameters at the lowest level that you need them and throwing an exception if the validation fails. This allows you to avoid redundantly checking for validity at multiple levels in the call stack. Throw the exception at the bottom and let the stack unroll to the appropriate place for handling the error.
The method you choose depends on the situation, and they are not mutually exclusive so you can mix them all in the same solution (although whether that's a good idea really depends on your situation).
Flag error checking: choose this method if you want a very simple way of handling errors. This method might be OK for situations where the calling function can accept any value the called function returns. There might be situations where business logic dictates this as an OK choice, such as returning a specific message string when a resource cannot be properly located, or a server does not respond. Generally, I don't use this or see this technique in Java very much, as exceptions are a better mechanism for error handling.
Throw an exception when your function runs into undefined behaviour. If you have a math function that can only operate on positive integers and someone passes -1, you should throw an IllegalArgumentException. If your function is given the ID of a product in a database, but the product cannot be found by a query, you could throw a custom ProductNotFound exception.
Validating input is a good idea, I would say it should be done by the called function, rather than the caller - unless the caller can avoid an exception from the callee by validating the input before passing it. If you work in a language that supports Design By Contract, validating input would be done as the function's precondition.
I usually use #2 and #3. I haven't written code with error flags for a while. The exception to that might be a function that returned an enum, where one possible value indicated an error code. That was driven more by a business rule than anything else.
And generally, try to keep it simple.
Throw a custom checked exception.
doSomething(WithX x) throws BusinessRuleViolatedException
Input validation is surprisingly complicated and all three of the suggested approaches in the original post are needed and sometimes more. Exceptions are appropriate when input is outside the bounds of business logic, if it is corrupt or cannot be read for example.
Flag checking quickly becomes an anti-pattern if you have more than one or two flags to check, and can be replaced with a slightly specialized version of the visitor pattern. I do not know the exact name of this specific pattern, but I'll informally call it the "validator list pattern" and will describe it in more detail below.
Checking input early and failing fast is usually good, but not always possible. Often there is a lot of input validation, all input received from outside of your control should be treated as hostile and requires validation. Good program design and architecture will help make it clear when exactly this needs to happen.
'The Validator List Pattern'
As an example, let's first describe in code the "Validation Flag" anti-pattern, and then we'll transform it to the "validation list" pattern.
public Optional<String> checkForErrorsUsingFlags(ObjectToCheck objToCheck) {
    // the small series of checks and if statements represents the
    // anti-pattern. Hard to test, and many other problems crop up.
    String errMsg = checkForError1(objToCheck);
    if (errMsg != null) {
        return Optional.of(errMsg);
    }
    errMsg = checkForError2(objToCheck);
    if (errMsg != null) {
        return Optional.of(errMsg);
    }
    return Optional.empty();
}

/**** client usage ****/
ObjectToCheck obj = doSomethingToReadInput();
Optional<String> error = checkForErrorsUsingFlags(obj);
if (error.isPresent()) {
    // invalid input, throw object away and request input again
} else {
    // do stuff, we have a valid input
}
To fix, start by creating a common interface that will represent a single validator. Then each check is converted to a validator instance. Finally we create a list of validators and pass it to the validator code.
/** The common validator interface each validator will use */
private interface MyValidator {
    public boolean isValid(ObjectToCheck obj);
    public String getErrorMessage(ObjectToCheck obj);
}

// this method should look familiar to the above; now we
// have a list of validators as an additional parameter
public Optional<String> checkForErrors(ObjectToCheck objToCheck,
                                       List<MyValidator> validators) {
    for (MyValidator validator : validators) {
        if (!validator.isValid(objToCheck)) {
            String errMsg = validator.getErrorMessage(objToCheck);
            return Optional.of(errMsg);
        }
    }
    return Optional.empty();
}

/****** client usage *****/
// now in this pattern, the client controls when the validators
// are created, and which ones are used.
MyValidator validator1 = new MyValidator() {
    @Override
    public boolean isValid(ObjectToCheck obj) {
        return checkForError1(obj) == null;
    }

    @Override
    public String getErrorMessage(ObjectToCheck obj) {
        return checkForError1(obj);
    }
};
// note: above we call checkForError1 twice, which is not optimal.
// Typically in real examples this can be avoided, and splitting the
// error-message generation logic from the detection logic often
// simplifies things.
MyValidator validator2 = new MyValidator() { ... };

List<MyValidator> validators =
        ImmutableList.of(validator1, validator2);

Optional<String> error = checkForErrors(objToCheck, validators);
if (error.isPresent()) {
    // invalid input, throw object away and request input again
} else {
    // do stuff, we have a valid input
}
Now to test, create a series of mock validators and check that each one has their validate called. You can stub validator results and ensure the correct behavior is taken. Then you also have access to each validator individually so you can test them one by one on their own.
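A hedged sketch of such a test, assuming JUnit 4 and Mockito are on the classpath and that MyValidator, ObjectToCheck, and checkForErrors are visible (and mockable) from the test:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.Optional;
import org.junit.Test;

public class CheckForErrorsTest {

    @Test
    public void returnsMessageOfFirstFailingValidator() {
        ObjectToCheck input = mock(ObjectToCheck.class);

        MyValidator passing = mock(MyValidator.class);
        when(passing.isValid(input)).thenReturn(true);

        MyValidator failing = mock(MyValidator.class);
        when(failing.isValid(input)).thenReturn(false);
        when(failing.getErrorMessage(input)).thenReturn("boom");

        Optional<String> error = checkForErrors(input, Arrays.asList(passing, failing));

        assertEquals("boom", error.get());
        verify(passing).isValid(input); // each validator up to the failure was consulted
        verify(failing).isValid(input);
    }
}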
Cheers - hope that helps, happy coding.
I always come across the same problem that when an exception is caught in a function that has a non-void return value I don't know what to return. The following code snippet illustrates my problem.
public Object getObject() {
    try {
        ...
        return object;
    } catch (Exception e) {
        // I have to return something here but what??
        return null; // is this a bad design??
    }
}
So my questions are:
Is returning null bad design?
If so, what is seen as a cleaner solution?
Thanks.
I would say don't catch the exception if you really can't handle it. And logging isn't considered handling an error. Better to bubble it up to someone who can by throwing the exception.
If you must return a value, and null is the only sensible thing, there's nothing wrong with that. Just document it and make it clear to users what ought to be done. Have a unit test that shows the exception being thrown so developers coming after you can see what the accepted idiom needs to be. It'll also test to make sure that your code throws the exception when it should.
I always come across the same problem that when an exception is caught in a function that has a non-void return value I don't know what to return.
If you don't know what to return, then it means that you don't know how to handle the exception. In that case, re-throw it. Please, don't swallow it silently. And please, don't return null, you don't want to force the caller of the code to write:
Foo foo = bar.getFoo();
if (foo != null) {
// do something with foo
}
This is IMHO bad design; I personally hate having to write null checks (many times, null is used where an exception should be thrown instead).
So, as I said, add a throws clause to the method and either totally remove the try/catch block, or keep the try/catch if it makes sense (for example if you need to deal with several exceptions) and rethrow the exception as is or wrap it in a custom exception.
Related questions
How to avoid “!= null” statements in Java?
Above all, I prefer not to return null. That's something the user has to explicitly remember to handle as a special case (unless they're expecting a null; is this documented?). If they're lucky they'll dereference it immediately and suffer an error. If they're unlucky they'll stick it in a collection and suffer the same problem later on.
I think you have two options:
throw an exception. This way the client has to handle it in some fashion (and for this reason I either document it and/or make it checked). Downsides are that exceptions are slow and shouldn't be used for control flow, so I use this for exceptional circumstances (pun intended)
You could make use of the NullObject pattern.
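A small sketch of the Null Object idea applied to a getter like the one in the question (the Result type and the method names are invented for illustration):
public class ResultLoader {

    interface Result {
        void apply();
    }

    /** A do-nothing implementation handed out instead of null. */
    static final Result EMPTY = new Result() {
        @Override public void apply() { /* deliberately does nothing */ }
    };

    public Result getResult() {
        try {
            return loadResult();   // the normal path
        } catch (Exception e) {
            return EMPTY;          // callers can call apply() without a null check
        }
    }

    private Result loadResult() throws Exception {
        // ... the real loading logic would go here ...
        throw new UnsupportedOperationException("sketch only");
    }
}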
I follow a coding style in which I rarely return a null. If/when I do, that's explicitly documented so clients can cater for it.
Exceptions denote exceptional cases. Assuming your code was supposed to return an object, something must have gone wrong on the way (network error, out of memory, who knows?) and therefore you should not just hush it by returning null.
However, in many cases, you can see in documentation that a method returns a null when such and such condition occurs. The client of that class can then count on this behaviour and handle a null returned, nothing bad about that. See, in this second usage example, it is not an exceptional case - the class is designed to return null under certain conditions - and therefore it's perfectly fine to do so (but do document this intended behaviour).
Thus, at the end of the day, it really depends on whether you can't return the object because of something exceptional in your way, or you simply have no object to return, and it's absolutely fine.
I like the responses that suggest to throw an exception, but that implies that you have designed exception handling into the architecture of your software.
Error handling typically has 3 parts: detection, reporting, and recovery. In my experience, errors fall into classes of severity (the following is an abbreviated list):
Log for debug only
Pause whatever is going on and report to user, waiting for response to continue.
Give up and terminate the program with an apologetic dialogue box.
Your errors should be classified, and handling should be as generic and consistent as possible. If you have to consider how to handle each error each time you write some new code, you do not have an effective error-handling strategy for your software. I like to have a reporting function which initiates user interaction when continuation depends on a user's choice.
The answer as to whether to return a null (a well-worn pattern if I ever saw one) then is dependent on what function logs the error, what can/must the caller do if the function fails and returns the null, and whether or not the severity of the error dictates additional handling.
Exceptions should always be caught by the controller in the end.
Passing a <null> up to the controller makes no sense.
Better to throw/return the original exception up the stack.
It's your code, and it's not a bad solution. But if you share your code you shouldn't use it, because it can lead to an unexpected exception (such as a NullPointerException) in the caller.
You can of course use
public Object getObject() throws Exception {}
which can give the parent function usable information and warn that something bad can happen.
Basically I would ditto on Duffymo, with a slight addition:
As he says, if your caller can't handle the exception and recover, then don't catch the exception. Let a higher level function catch it.
If the caller needs to do something but should then appropriately die itself, just rethrow the exception. Like:
SomeObject doStuff() throws PanicAbortException {
    try {
        int x = functionThatMightThrowException();
        ... whatever ...
        return y;
    } catch (PanicAbortException panic) {
        cleanUpMess();
        throw panic; // <-- rethrow the exception
    }
}
You might also repackage the exception, like ...
catch (PanicAbortException panic) {
    throw new MoreGenericException("In function xyz: " + panic.getMessage());
}
This is why so much Java code is bloated with if (x != null) {...} clauses. Don't create your own NullPointerExceptions.
I would say it is a bad practice. If null is received, how do I know whether the object is not there or some error happened?
My suggestion is
Never ever return null if the return type is an array or Collection. Instead, return an empty Collection or an empty array.
When the return type is an object, it is up to you to return null depending on the scenario. But never ever swallow an exception and return NULL.
Also if you are returning NULL in any scenario, ensure that this is documented in the method.
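For the collection case in the first point, that might look like the following sketch (OrderService, OrderRepository, Order, and lookup are all hypothetical names for illustration):
import java.util.Collections;
import java.util.List;

class OrderService {
    private final OrderRepository repository; // hypothetical data source

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    public List<Order> findOrders(String customerId) {
        List<Order> orders = repository.lookup(customerId);
        if (orders == null || orders.isEmpty()) {
            // "no results" is a normal outcome, not an error: return an empty list, never null
            return Collections.emptyList();
        }
        return orders;
    }
}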
As Joshua Bloch says in the book "Effective Java", null is a keyword of Java.
This word identifies a memory location that does not point to any other memory location. In my opinion, it's better to code with a separation between behavior in the functional domain (example: you expect an object of kind A but receive an object of kind B) and behavior in the low-level domain (example: the unavailability of memory).
In your example, I would modify the code as:
public Object getObject() {
    Object objectReturned = new Object();
    try {
        /** business logic */
    } catch (Exception e) {
        // logging and eventual logic, leaving the current object (this) in a consistent state
    }
    return objectReturned;
}
The disadvantage is that a complete Object is created on every call of getObject() (even in situations where the returned object is never read or written).
But I prefer having a useless object to a NullPointerException, because sometimes this exception is very hard to track down.
Some thoughts on how to handle Exceptions
Whether returning null would be good or bad design depends on the Exception and where this snippet is placed in your system.
If the Exception is a NullPointerException, you are probably applying the catch block somewhat obtrusively (as flow control).
If it is something like an IOException and you can't do anything about the cause, you should throw the Exception up to the controller.
If the controller is a facade of a component, it should translate the Exception into a well-documented, component-specific set of possible Exceptions that may occur at that interface. For detailed information, include the original Exception as a nested Exception.
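That translation might look roughly like this sketch (Report, ReportComponentException, and the helper methods are invented; the point is keeping the original exception as the nested cause):
public Report loadReport(String id) throws ReportComponentException {
    try {
        return parseReport(readReportFile(id));
    } catch (java.io.IOException e) {
        // translate the low-level exception into the component's documented exception,
        // keeping the original as the nested cause
        throw new ReportComponentException("Failed to load report " + id, e);
    }
}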