Try-Catch Instead of Null Check When Using Several Getters - java

My problem is the following: I have quite a long getter chain, i.e.,
objectA.getObjectB().getObjectC().getObjectD().getObjectE().getName();
Due to "bad" database/entity design (some things were introduced later than others), it happens that getObjectB(), getObjectC(), or getObjectD() can return null.
Usually we use null-checks all the time, but in this case, I'd have to use
ObjectB b = objectA.getObjectB();
if (b != null) {
    ObjectC c = b.getObjectC();
    if (c != null) {
        ObjectD d = c.getObjectD();
        if (d != null) {
            return d.getObjectE().getName();
        }
    }
}
return "";
Instead it would be much easier to simply use a try-catch block
try {
    return objectA.getObjectB().getObjectC().getObjectD().getObjectE().getName();
} catch (NullPointerException e) {
    return "";
}
In this case I don't really care which object returned null; it's either display a name or don't. Are there any complications, or is it bad design to use try-catch instead of null checks?
Thanks for your input.

If it is an option to use Java 8, you can use Optional as follows:
Optional.ofNullable(objectA)
        .map(a -> a.getObjectB())
        .map(b -> b.getObjectC())
        .map(c -> c.getObjectD())
        .map(d -> d.getObjectE())
        .map(e -> e.getName())
        .orElse("");

This kind of method chaining is called a "train wreck" and is not preferred.
Such a statement also violates the Law of Demeter. Let me give you an example from the book Clean Code by Robert C. Martin:
String scratchDirPath = ctxt.getOptions().getScratchDir().getAbsolutePath();
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(scratchDirPath));
//write to file...
This is similar to what you have, and it is a bad practice. It could at least be refactored to the following:
Options options = ctxt.getOptions();
File scratchDir = options.getScratchDir();
String scratchDirPath = scratchDir.getAbsolutePath();
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(scratchDirPath));
//write to file...
This is still violating the Law of Demeter but is at least partially better.
The most preferable way would be to find out why scratchDirPath is needed and to ask the ctxt object to provide it. So it would look like:
BufferedOutputStream bos = ctxt.createScratchDirFileStream();
This way ctxt does not expose all its internals. This decouples the calling code from the implementation of ctxt.
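Applied to the question's getter chain, the same "tell, don't ask" idea might look like this (a sketch, assuming you are free to add a method to ObjectA; the null checks move inside, and callers see only one call):
// Hypothetical refactor: ObjectA walks its own internals and returns
// a displayable name, hiding ObjectB..ObjectE from the caller.
public String getDisplayName() {
    ObjectB b = getObjectB();
    if (b == null) return "";
    ObjectC c = b.getObjectC();
    if (c == null) return "";
    ObjectD d = c.getObjectD();
    if (d == null) return "";
    return d.getObjectE().getName();
}
// Caller: String name = objectA.getDisplayName();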

It's really a judgement call only you and your team can make, but there's (at least) one objective thing I'll call out: in the case where one of those methods returns null, your first code block will perform better than your second one. Throwing an exception is expensive relative to a simple branch. (In the case where none of the methods returns null, the second might be almost unmeasurably faster than the first, as you avoid the branching.) In either case, if performance matters, test it in your environment. (But test it properly; micro-benchmarks are hard. Use tools.)
Most of the time, the difference won't matter in the real world, of course, but that's the significant difference at runtime.
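For the "use tools" part, JMH is the usual choice on the JVM. A minimal sketch of what such a benchmark could look like (assuming JMH is on the classpath and reusing the question's hypothetical ObjectA..ObjectE types; not a tuned benchmark):
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class NullVsCatchBenchmark {
    ObjectA objectA = new ObjectA(); // set up so that getObjectB() returns null

    @Benchmark
    public String nullChecks() {
        ObjectB b = objectA.getObjectB();
        if (b == null) return "";
        ObjectC c = b.getObjectC();
        if (c == null) return "";
        ObjectD d = c.getObjectD();
        if (d == null) return "";
        return d.getObjectE().getName();
    }

    @Benchmark
    public String tryCatch() {
        try {
            return objectA.getObjectB().getObjectC().getObjectD().getObjectE().getName();
        } catch (NullPointerException e) {
            return "";
        }
    }
}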

Identify record that is culprit - coding practices

Is method chaining good?
I am not against functional programming that uses method chaining a lot, but against a herd mentality where people mindlessly run after whatever is new.
For example, say I am processing a list of items using stream programming and need to find out the exact row that resulted in throwing a NullPointerException.
private void test() {
    List<User> aList = new ArrayList<>();
    // fill aList with some data
    aList.stream().forEach(x -> doSomethingMeaningFul(x.getAddress()));
}

private void doSomethingMeaningFul(Address x) {
    // Do something
}
So in the example above, if any object in the list is null, it will lead to a NullPointerException when calling x.getAddress() and bail out, without giving us a hook to identify the User record that has this problem.
I may be missing something that offers this feature in stream programming; any help is appreciated.
Edit 1:
The NPE is just an example; there are several other RuntimeExceptions that could occur. Writing a filter would essentially mean checking for every RTE condition based on the operation I am performing, and checking for every operation will become a pain.
To give a better idea about what I mean following is the snippet using older methods; I couldn't find any equivalent with streams / functional programming methods.
List<User> aList = new ArrayList<>();
// Fill list with some data
int counter = 0;
User u = null;
try {
    for (; counter < aList.size(); counter++) {
        u = aList.get(counter);
        u.doSomething();
        int result = u.getX() / u.getY();
    }
} catch (Exception e) {
    System.out.println("Error processing at index:" + counter + " with User record:" + u);
    System.out.println("Exception:" + e);
}
This will be a boon during the maintenance phase (the longest phase), pinpointing exact data-related issues that are difficult to reproduce.
Benefits:
- Find the exact index causing the issue, pointing to the data
- Any RTE is recorded and can be analyzed against the user record
- Smaller stacktrace to look at
Is method chaining good?
As so often, the simple answer is: it depends.
When you
- know what you are doing,
- are very sure that elements will never be null (so the chance of an NPE in such a construct is close to 0), and
- the chaining of calls leads to improved readability,
then sure, chain calls.
If any of the above criteria isn't clearly fulfilled, then consider not doing it.
In any case, it might be helpful to distribute your method calls across several lines. Tools like IntelliJ actually give you advanced type information for each line when you do that (well, not always; see my own question ;)
From a different perspective: to the compiler, it doesn't matter much whether you chain calls. That really only matters to humans, either for readability or during debugging.
There are a few aspects to this.
1) Nulls
It's best to avoid the problem of checking for nulls by never assigning null. This applies whether you're doing functional programming or not. Unfortunately, a lot of library code does expose the possibility of a null return value, but try to limit exposure to this by handling it in one place.
Regardless of whether you're doing FP, you'll find you get a lot less frustrated if you never have to write null checks when calling your own methods, because your own methods never return null.
An alternative to variables that might be null is to use Java 8's Optional class.
Instead of:
public String myMethod(int i) {
    if (i > 0) {
        return "Hello";
    } else {
        return null;
    }
}
Do:
public Optional<String> myMethod(int i) {
    if (i > 0) {
        return Optional.of("Hello");
    } else {
        return Optional.empty();
    }
}
Look at the Optional Javadoc to see how this forces the caller to think about the possibility of an Optional.empty() response.
As a bridge between the worlds of "null represents absent" and "Optional.empty() represents absent", you can use Optional.ofNullable(val), which returns Optional.empty() when val == null. But do bear in mind that Optional.ofNullable(null) and Optional.of(null) behave differently: the latter throws a NullPointerException.
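Concretely:
Optional<String> a = Optional.ofNullable(null); // Optional.empty()
Optional<String> b = Optional.of(null);         // throws NullPointerException at runtime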
2) Exceptions
It's true that throwing an exception in a stream handler doesn't work very well. Exceptions aren't a very FP-friendly mechanism. The FP-friendly alternative is Either -- which isn't a standard part of Java but is easy to write yourself or find in third party libraries: Is there an equivalent of Scala's Either in Java 8?
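Since Either is not part of the JDK, here is one minimal hand-rolled sketch (hypothetical; real libraries such as Vavr offer a much richer version with map, fold, etc.):
// Minimal Either: holds exactly one of a "left" (failure) or "right" (success) value.
public final class Either<L, R> {
    private final L left;
    private final R right;

    private Either(L left, R right) { this.left = left; this.right = right; }

    public static <L, R> Either<L, R> left(L value)  { return new Either<>(value, null); }
    public static <L, R> Either<L, R> right(R value) { return new Either<>(null, value); }

    public boolean isRight() { return right != null; }
    public L getLeft()  { return left; }
    public R getRight() { return right; }
}
With something like this in hand, the wrapper below works as written: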
public Either<Exception, Result> meaningfulMethod(Value val) {
    try {
        return Either.right(methodThatMightThrow(val));
    } catch (Exception e) {
        return Either.left(e);
    }
}
... then:
List<Either<Exception, Result>> results = listOfValues.stream().map(this::meaningfulMethod).collect(Collectors.toList());
3) Indexes
You want to know the index of the stream element when you're using a stream made from a List? See: Is there a concise way to iterate over a stream with indices in Java 8?
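One common workaround is to stream over the indices instead of the elements, so a failure can report exactly which position blew up. A sketch reusing the question's doSomethingMeaningFul, assuming a random-access List:
import java.util.List;
import java.util.stream.IntStream;

private void testWithIndex(List<User> aList) {
    IntStream.range(0, aList.size()).forEach(i -> {
        User u = aList.get(i);
        try {
            doSomethingMeaningFul(u.getAddress());
        } catch (RuntimeException e) {
            // Mirrors the question's logging: the failing index and record are known here.
            System.out.println("Error processing at index:" + i + " with User record:" + u);
            throw e;
        }
    });
}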
In your test() function you are creating an empty list (List<User> aList = new ArrayList<>();) and calling forEach on it. First add some elements to aList.
If you want to handle null values, you can add .filter(x -> x != null) before the forEach; it will filter out all null values.
Below is the code:
private void test() {
    List<User> aList = new ArrayList<>();
    aList.stream().filter(x -> x != null).forEach(x -> doSomethingMeaningFul(x.getAddress()));
}

private void doSomethingMeaningFul(Address x) {
    // Do something
}
You can write a block of code in the stream and find the list item that might result in a NullPointerException. I hope this code helps:
private void test() {
    List<User> aList = new ArrayList<>();
    aList.stream().forEach(x -> {
        if (x.getAddress() != null) {
            doSomethingMeaningFul(x.getAddress());
        } else {
            System.out.println(x + " doesn't have an address");
        }
    });
}

private void doSomethingMeaningFul(Address x) {
    // Do something
}
If you want, you can throw a NullPointerException or a custom exception like AddressNotFoundException in the else part.

Doing OR with multiple conditions using Optional wrapper

I am trying to understand and use the Java 8 Optional feature, and I would like to refactor this code block. Without Optional I have this condition:
ClassA objA = findObject();
if (objA == null || objA.isDeleted()) {
    throw new Exception("Object is not found.");
}
I want to transform this block using the Optional wrapper. I have read about the filter and ifPresent functions but could not find a way. Maybe it is simple, but I am new to Java 8. I would appreciate it if you could help.
You shouldn't use Optional<T> solely to replace the if statement, as it's no better and doesn't gain you any benefit. A much better solution would be to make the findObject() method return Optional<ClassA>.
This makes the caller of this method decide what to do in the "no value" case.
Assuming you've made this change, you can then leverage the Optional<T> type:
findObject().filter(a -> !a.isDeleted())                            // keep the object only if it is not deleted
        .map(...)                                                   // do some mapping, maybe?
        ...                                                         // do some additional logic
        .orElseThrow(() -> new Exception("Object is not found.")); // if the object is not found, throw
See the Optional<T> class to familiarise yourself with the API and the methods that are available.
@Eric, as you mentioned in your comment, if you don't want to (or can't) change the return type of findObject() due to some constraint, you can do the following:
ClassA objA = findObject();
ClassA myObj =
        Optional.ofNullable(objA)
                .filter(e -> !e.isDeleted())
                .orElseThrow(() -> new Exception("Object is not found."));
return list.stream()
        .filter(tmp -> tmp.type() == 1)
        .findFirst()
        .orElseThrow(() -> {
            logger.error("ERROR"); // something like this
            return new RuntimeException("some exception");
        });

Which is the best way to set/drop boolean flag inside lambda function

Say I have a currency rates loader that returns an isLoaded=true result only when all the rates are loaded successfully:
// List<String> listFrom = Stream.of("EUR", "RUB").collect(toList());
// List<String> listTo = Stream.of("EUR", "CNY").collect(toList());
boolean isLoaded = true;
final FixerDataProvider httpProvider = new FixerDataProvider(maxAttempts);
final List<CurrencyRatePair> data =
        listFrom.stream()
                .flatMap(from -> {
                    final List<CurrencyRatePair> result = httpProvider.findRatesBetweenCurrencies(from, listTo);
                    if (Objects.isNull(result) || result.size() == 0) {
                        isLoaded = false; // !!! Does not compile: isLoaded is not effectively final !!!
                    }
                    return result.stream();
                })
                .collect(Collectors.toList());
if (!isLoaded) {
    return false;
}
// do something with the loaded data
return true;
The assignment isLoaded = false; inside the lambda is not allowed because isLoaded is then neither final nor effectively final.
What is the most elegant solution to set/clear a boolean flag inside a lambda expression?
What do you think about AtomicBoolean and its set(false) method as a possible approach?
You may be better off with an old-style loop, as others have suggested. It does feel like a bit of a programming faux pas to write lambdas with side effects, but you're likely to find an equal number of developers who think it's fine, too.
As for getting this particular lambda-with-side-effects working, making isLoaded an AtomicBoolean is probably your best bet. You could achieve the same effect by making isLoaded a boolean[] of size 1, but that seems less elegant to me than going with AtomicBoolean.
But seriously, try using an old-school loop instead too and see which one you like better.
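For reference, the AtomicBoolean variant could look like this (a sketch of the question's own code, using java.util.concurrent.atomic.AtomicBoolean and java.util.stream.Stream; it also guards against the null result that would otherwise NPE on result.stream()):
AtomicBoolean isLoaded = new AtomicBoolean(true);
final List<CurrencyRatePair> data =
        listFrom.stream()
                .flatMap(from -> {
                    final List<CurrencyRatePair> result = httpProvider.findRatesBetweenCurrencies(from, listTo);
                    if (result == null || result.isEmpty()) {
                        isLoaded.set(false); // fine: the isLoaded reference itself stays effectively final
                        return Stream.empty();
                    }
                    return result.stream();
                })
                .collect(Collectors.toList());
if (!isLoaded.get()) {
    return false;
}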
If you use a parallel stream, you must use AtomicBoolean, because a boolean[1] is not safe in a parallel scenario.
The java.util.stream Javadoc states that:
Side-effects in behavioral parameters to stream operations are, in general, discouraged, as they can often lead to unwitting violations of the statelessness requirement, as well as other thread-safety hazards.
That said, if you want to do it anyway, the solution you have identified with an AtomicBoolean will do the trick just fine.
Variables used within anonymous inner classes and lambda expressions have to be effectively final.
You can use AtomicReference for your case; here is a similar snippet from ConditionEvaluationListenerJava8Test:
public void expectedMatchMessageForAssertionConditionsWhenUsingLambdasWithoutAlias() {
    final AtomicReference<String> lastMatchMessage = new AtomicReference<>();
    CountDown countDown = new CountDown(10);
    with()
        .conditionEvaluationListener(condition -> {
            try {
                countDown.call();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            lastMatchMessage.set(condition.getDescription());
        })
        .until(() -> assertEquals(5, (int) countDown.get()));
    String expectedMatchMessage = String.format("%s reached its end value", CountDown.class.getName());
    assertThat(lastMatchMessage.get(), allOf(startsWith("Condition defined as a lambda expression"), endsWith(expectedMatchMessage)));
}
Cheers!
If I understand correctly, you will get isLoaded=false only if all of the result lists are empty (if a result list is null, you will get an NPE on the next line anyway, so there is no reason to do the null check this way). In that case your data list will also be empty, so you don't need any boolean flags: just check data.isEmpty() and return false if it is.
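A sketch of that flag-free variant (assuming, per this answer's reading, that an empty data list is the failure signal you care about):
final List<CurrencyRatePair> data =
        listFrom.stream()
                .flatMap(from -> httpProvider.findRatesBetweenCurrencies(from, listTo).stream())
                .collect(Collectors.toList());
// Note: a null result would still NPE on .stream(), as this answer points out.
if (data.isEmpty()) {
    return false;
}
// do something with the loaded data
return true;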

Design pattern for incremental code

According to the business logic, the output of one method is used as an input to another. The logic has a linear flow.
To emulate the behaviour, there is now a controller class that has everything.
It is very messy, with too many lines of code, and it is hard to modify. The exception handling is also complex: the individual methods do some handling, but the global exceptions bubble up, which involves a lot of try-catch statements.
Is there a design pattern that addresses this problem?
Example Controller Class Code
try {
    Logic1Inputs logic1_inputs = new Logic1Inputs( ...<some other params>... );
    Logic1 l = new Logic1(logic1_inputs);
    Logic1Output l1Output = null;
    try {
        l1Output = l.execute();
    } catch (Logic1Exception l1Exception) {
        // exception handling
    }
    Logic2Inputs logic2_inputs = new Logic2Inputs(l1Output);
    Logic2 l2 = new Logic2(logic2_inputs);
    Logic2Output l2Output = null;
    try {
        l2Output = l2.execute();
    } catch (Logic2Exception l2Exception) {
        // exception handling
    }
    Logic3Inputs logic3_inputs = new Logic3Inputs(l1Output, l2Output);
    Logic3 l3 = new Logic3(logic3_inputs);
    try {
        Logic3Output l3Output = l3.execute();
    } catch (Logic3Exception l3Exception) {
        // exception handling
    }
} catch (GlobalException globalEx) {
    // exception handling
}
I think this is called a pipeline: http://en.wikipedia.org/wiki/Pipeline_%28software%29. This pattern is used for algorithms in which data flows through a sequence of tasks or stages.
You can search for a library that does this (http://code.google.com/p/pipelinepattern) or try your own Java implementation.
Basically you have all your objects in a list, and the output from one is passed to the next. This is a naive implementation, but you can add generics and whatever else you need:
public class BasicPipelinePattern {
    List<Filter> filters;

    public Object process(Object input) {
        for (Filter c : filters) {
            try {
                input = c.apply(input);
            } catch (Exception e) {
                // exception handling
            }
        }
        return input;
    }
}

public interface Filter {
    public Object apply(Object o);
}
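As the answer above notes, generics can remove the Object passing when the stage types line up. One possible typed sketch built on java.util.function.Function (names hypothetical):
import java.util.function.Function;

public class TypedPipeline<I, O> {
    private final Function<I, O> chain;

    private TypedPipeline(Function<I, O> chain) { this.chain = chain; }

    // Start with the identity stage, then compose stages in order.
    public static <T> TypedPipeline<T, T> start() {
        return new TypedPipeline<>(Function.identity());
    }

    public <R> TypedPipeline<I, R> then(Function<O, R> next) {
        return new TypedPipeline<>(chain.andThen(next));
    }

    public O process(I input) { return chain.apply(input); }
}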
When faced with problems like this, I like to see how other programming languages might solve it. Then I might borrow that concept and apply it to the language that I'm using.
In JavaScript, there has been much talk of promises and how they can simplify not only asynchronous processing but also error handling. This page is a great introduction to the problem.
The approach has been called using "thenables". Here's the pseudocode:
initialStep.execute().then(function(result1){
    return step2(result1);
}).then(function(result2){
    return step3(result2);
}).error(function(error){
    handle(error);
}).done(function(result3){
    handleResult(result3);
});
The advantage of this pattern is that you can focus on the processing and effectively handle errors in one place without needing to worry about checking for success at each step.
So how would this work in Java? I would take a look at one of the promises/futures libraries, perhaps jdeferred. I would expect that you could put something like this together (assuming Java 8 for brevity):
initialPromise.then(result1 -> {
    Logic2 logic2 = new Logic2(new Logic2Inputs(result1));
    return logic2.execute();
}).then(result2 -> {
    Logic3 logic3 = new Logic3(new Logic3Inputs(result2));
    return logic3.execute();
}).fail(exception -> {
    handleException(exception);
}).always(result -> {
    handleResult(result);
});
This does, of course, gloss over a hidden requirement in your code: you mention that in step 3 you need the output of both step 1 and step 2. If you were writing Scala, there is syntactic sugar that would handle this for you (leaving out error handling for the moment):
for (result1 <- initialStep.execute();
     logic2 = new Logic2(Logic2Input(result1));
     result2 <- logic2.execute();
     logic3 = new Logic3(Logic3Input(result1, result2));
     result3 <- logic3.execute()) yield result3
But since you don't have that ability here, you are left with the choice of refactoring each step to take only the output of the previous step, or nesting the processing so that result1 is still in scope when you need to set up step 3.
The classic alternative to this, as @user1121883 mentioned, would be to use a pipeline processor. The downside to this approach is that it works best if your input and output are the same type; otherwise you are going to have to pass Object around everywhere and do a lot of type checking.
Another alternative would be to expose a fluent interface for the pipeline. Again, you'd want to do some refactoring, perhaps to have a parameterless constructor and a consistent interface for inputs and outputs:
Pipeline p = new Pipeline();
p.then(new Logic1())
 .then(new Logic2())
 .then(new Logic3())
 .addErrorHandler(e -> handleError(e))
 .complete();
This last option is more idiomatic Java but retains many of the advantages of the thenables processing, so it's probably the way I would go.
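One way that fluent interface might be implemented (a sketch; Step is a hypothetical common interface that Logic1/Logic2/Logic3 would each implement in place of their typed execute() methods):
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

interface Step {
    Object execute(Object input) throws Exception;
}

class Pipeline {
    private final List<Step> steps = new ArrayList<>();
    private Consumer<Exception> errorHandler = e -> {};

    Pipeline then(Step step) {
        steps.add(step);
        return this;
    }

    Pipeline addErrorHandler(Consumer<Exception> handler) {
        this.errorHandler = handler;
        return this;
    }

    // Runs each step, feeding the previous output forward; any error goes
    // to the single registered handler instead of per-step try-catch blocks.
    Object complete(Object input) {
        Object current = input;
        try {
            for (Step step : steps) {
                current = step.execute(current);
            }
        } catch (Exception e) {
            errorHandler.accept(e);
        }
        return current;
    }
}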

Struggle against habits formed by Java when migrating to Scala

What are the most common mistakes that Java developers make when migrating to Scala?
By mistakes I mean writing code that does not conform to the Scala spirit, for example using loops when map-like functions are more appropriate, excessive use of exceptions, etc.
EDIT: one more is using your own getters/setters instead of the methods kindly generated by Scala.
It's quite simple: a Java programmer will tend to write imperative-style code, whereas a more Scala-like approach would involve a functional style.
That is what Bill Venners illustrated back in December 2008 in his post "How Scala Changed My Programming Style".
That is why there is a collection of articles about "Scala for Java Refugees".
That is how some of the SO questions about Scala are formulated: "help rewriting in functional style".
One obvious one is not taking advantage of the nested scoping that Scala allows, plus the delaying of side effects (or realising that everything in Scala is an expression):
public InputStream foo(int i) {
    final String s = String.valueOf(i);
    boolean b = s.length() > 3;
    File dir;
    if (b) {
        dir = new File("C:/tmp");
    } else {
        dir = new File("/tmp");
    }
    if (!dir.exists()) dir.mkdirs();
    return new FileInputStream(new File(dir, "hello.txt"));
}
This could be converted to:
def foo(i : Int) : InputStream = {
    val s = i.toString
    val b = s.length > 3
    val dir =
        if (b) {
            new File("C:/tmp")
        } else {
            new File("/tmp")
        }
    if (!dir.exists) dir.mkdirs()
    new FileInputStream(new File(dir, "hello.txt"))
}
But this can be improved upon a lot. It could be:
def foo(i : Int) = {
    def dir = {
        def ensuring(d : File) = { if (!d.exists) require(d.mkdirs); d }
        def b = {
            def s = i.toString
            s.length > 3
        }
        ensuring(new File(if (b) "C:/tmp" else "/tmp"))
    }
    new FileInputStream(new File(dir, "hello.txt"))
}
The latter example does not "export" any variable beyond the scope in which it is needed. In fact, it does not declare any variables at all. This means it is easier to refactor later. Of course, this approach does lead to hugely bloated class files!
A couple of my favourites:
It took me a while to realise how truly useful Option is. A common mistake carried over from Java is to use null to represent a field/variable that sometimes does not have a value. Recognise that you can use map and foreach on Option to write safer code.
Learn how to use map, foreach, dropWhile, foldLeft, ... and other handy methods on Scala collections to save writing the kind of loop constructions you see everywhere in Java, which I now perceive as verbose, clumsy, and harder to read.
A common mistake is to go wild and overuse a feature not present in Java once you've "grokked" it. E.g. newbies tend to overuse pattern matching (*), explicit recursion, implicit conversions, (pseudo-)operator overloading and so on. Another mistake is to misuse features that look superficially similar in Java (but aren't), like for-comprehensions or even if (which works more like Java's ternary operator ?:).
(*) There is a great cheat sheet for pattern matching on Option: http://blog.tmorris.net/scalaoption-cheat-sheet/
I haven't adopted lazy vals and streams so far.
In the beginning, a common error (which the compiler finds) is to forget the semicolon in a for:
for (a <- al;
     b <- bl
     if (a < b)) // ...
and where to place the yield:
for (a <- al) yield {
    val x = foo (a).map (b).filter (c)
    if (x.cond ()) 9 else 14
}
instead of
for (a <- al) {
    val x = foo (a).map (b).filter (c)
    if (x.cond ()) yield 9 else yield 14 // why don't ya yield!
}
and forgetting the equals sign for a method:
def yoyo (aka : Aka) : Zirp { // ups!
    aka.floskel ("foo")
}
Using if statements. You can usually refactor the code to use if expressions or filter.
Using too many vars instead of vals.
Instead of loops, like others have said, use the collection functions like map, filter, foldLeft, etc. If the one you need isn't available (look carefully; there should be something you can use), use tail recursion.
Instead of setters, I keep the spirit of functional programming and make my objects immutable. So instead I do something like this, where I return a new object:
class MyClass(val x: Int) {
    def setX(newx: Int) = new MyClass(newx)
}
I try to work with lists as much as possible. Also, to generate lists, instead of using a loop, use the for/yield expressions.
Using Arrays.
This is basic stuff, easily spotted and fixed, but it will slow you down initially when it bites your ass.
Scala has an Array object, while in Java arrays are a built-in language feature. This means that initialising an array and accessing its elements in Scala are actually method calls:
// Java
// Initialise
String[] javaArr = {"a", "b"};
// Access
String blah1 = javaArr[1]; // blah1 contains "b"

// Scala
// Initialise
val scalaArr = Array("c", "d") // Note: this is a method call on the Array companion object
// Access
val blah2 = scalaArr(1) // blah2 contains "d"
