Design pattern for incremental code - java

According to the business logic, the output of one method is used as an input to another. The logic has a linear flow.
To emulate this behaviour, there is currently a controller class that does everything.
It is very messy, has too many lines of code and is hard to modify. The exception handling is also very complex: each individual method does some handling, but global exceptions bubble up, which involves a lot of try-catch statements.
Does a design pattern exist to address this problem?
Example Controller Class Code
try {
    Logic1Inputs logic1_inputs = new Logic1Inputs( ...<some other params>... );
    Logic1 l = new Logic1(logic1_inputs);
    Logic1Output l1Output = null;
    try {
        l1Output = l.execute();
    } catch (Logic1Exception l1Exception) {
        // exception handling
    }

    Logic2Inputs logic2_inputs = new Logic2Inputs(l1Output);
    Logic2 l2 = new Logic2(logic2_inputs);
    Logic2Output l2Output = null;
    try {
        l2Output = l2.execute();
    } catch (Logic2Exception l2Exception) {
        // exception handling
    }

    Logic3Inputs logic3_inputs = new Logic3Inputs(l1Output, l2Output);
    Logic3 l3 = new Logic3(logic3_inputs);
    Logic3Output l3Output = null;
    try {
        l3Output = l3.execute();
    } catch (Logic3Exception l3Exception) {
        // exception handling
    }
} catch (GlobalException globalEx) {
    // exception handling
}

I think this is called a pipeline: http://en.wikipedia.org/wiki/Pipeline_%28software%29 . This pattern is used for algorithms in which data flows through a sequence of tasks or stages.
You can look for a library that does this ( http://code.google.com/p/pipelinepattern ) or try your own Java implementation.
Basically you have all your objects in a list and the output from one is passed to the next. This is a naive implementation, but you can add generics and whatever else you need.
public class BasicPipelinePattern {
    List<Filter> filters;

    public Object process(Object input) {
        for (Filter c : filters) {
            try {
                input = c.apply(input);
            } catch (Exception e) {
                // exception handling
            }
        }
        return input;
    }
}

public interface Filter {
    public Object apply(Object o);
}
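For instance, here is a minimal sketch of a typed variant, where the compiler checks that each stage's output type matches the next stage's input type (the Stage name and the then method are my own, not from a library):
// A typed pipeline stage: the output of one stage is the input of the next.
public interface Stage<I, O> {
    O apply(I input) throws Exception;

    // Chain another stage after this one; the types are checked at compile time.
    default <R> Stage<I, R> then(Stage<O, R> next) {
        return input -> next.apply(this.apply(input));
    }
}
With this, Logic1 and Logic2 from the question could be wrapped as Stage<Logic1Inputs, Logic1Output> and Stage<Logic1Output, Logic2Output>, and a chain whose types don't line up will not compile.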

When faced with problems like this, I like to see how other programming languages might solve it. Then I might borrow that concept and apply it to the language that I'm using.
In javascript, there has been much talk of promises and how they can simplify not only asynchronous processing, but error handling. This page is a great introduction to the problem.
This approach has been called using "thenables". Here's the pseudocode:
initialStep.execute().then(function(result1){
    return step2(result1);
}).then(function(result2){
    return step3(result2);
}).error(function(error){
    handle(error);
}).done(function(result3){
    handleResult(result3);
});
The advantage of this pattern is that you can focus on the processing and effectively handle errors in one place without needing to worry about checking for success at each step.
So how would this work in java? I would take a look at one of the promises/futures libraries, perhaps jdeferred. I would expect that you could put something like this together (assuming java 8 for brevity):
initialPromise.then( result1 -> {
    Logic2 logic2 = new Logic2(new Logic2Inputs(result1));
    return logic2.execute();
}).then(result2 -> {
    Logic3 logic3 = new Logic3(new Logic3Inputs(result2));
    return logic3.execute();
}).catch(exception -> {
    handleException(exception);
}).finally( result -> {
    handleResult(result);
});
This does, of course, gloss over a hidden requirement in your code. You mention that in step 3 you need the output of both step 1 and step 2. If you were writing Scala, there is syntactic sugar that would handle this for you (leaving out error handling for the moment):
for (result1 <- initialStep.execute();
     logic2 = new Logic2(Logic2Input(result1));
     result2 <- logic2.execute();
     logic3 = new Logic3(Logic3Input(result1, result2));
     result3 <- logic3.execute()) yield result3
But since you don't have that ability here, you are left with the choice of either refactoring each step to take only the output of the previous step, or nesting the processing so that result1 is still in scope when you need to set up step 3.
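For illustration, here is a rough sketch of the nesting option using the JDK's CompletableFuture as a stand-in for a promise library, assuming for simplicity that the LogicN exceptions are unchecked (otherwise each lambda needs its own try-catch):
CompletableFuture<Logic3Output> result =
    CompletableFuture.supplyAsync(() -> new Logic1(logic1Inputs).execute())
        .thenCompose(result1 ->
            CompletableFuture.supplyAsync(() -> new Logic2(new Logic2Inputs(result1)).execute())
                // result1 is still in scope here, so step 3 can see both outputs
                .thenApply(result2 -> new Logic3(new Logic3Inputs(result1, result2)).execute()));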
The classic alternative to this, as #user1121883 mentioned would be to use a Pipeline processor. The downside to this approach is that it works best if your input and output are the same type. Otherwise you are going to have to push Object around everywhere and do a lot of type checking.
Another alternative would be to expose a fluent interface for the pipeline. Again, you'd want to do some refactoring, perhaps to have a parameter-less constructor and a consistent interface for inputs and outputs:
Pipeline p = new Pipeline();
p.then(new Logic1())
 .then(new Logic2())
 .then(new Logic3())
 .addErrorHandler(e -> handleError(e))
 .complete();
This last option is more idiomatic Java, but retains many of the advantages of the thenables processing, so it's probably the way that I would go.
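A minimal sketch of what such a Pipeline might look like, reusing the Filter interface from the pipeline answer above (the names, and giving complete an input parameter, are my own choices, not an existing library):
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class Pipeline {
    private final List<Filter> stages = new ArrayList<>();
    private Consumer<Exception> errorHandler = e -> { };

    public Pipeline then(Filter stage) {             // add the next stage
        stages.add(stage);
        return this;
    }

    public Pipeline addErrorHandler(Consumer<Exception> handler) {
        this.errorHandler = handler;                 // one place for error handling
        return this;
    }

    public Object complete(Object input) {           // run the whole chain
        Object current = input;
        for (Filter stage : stages) {
            try {
                current = stage.apply(current);
            } catch (Exception e) {
                errorHandler.accept(e);
                break;                               // stop the pipeline on failure
            }
        }
        return current;
    }
}
Each LogicN class would then get a parameter-less constructor and implement Filter, as suggested above.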

Related

How can I preserve clean code flow after going async with CompletableFuture?

I'm running into an issue with CompletableFutures. I have a JAX-RS-based REST endpoint that reaches out to an API, and I need to make 3 sequential calls to this API. The current flow looks like this:
FruitBasket fruitBasket = RestGateway.GetFruitBasket("today");
Fruit chosenFruit = chooseFruitFromBasket(fruitBasket);
Boolean success = RestGateway.RemoveFromBasket(chosenFruit);
if (success) {
    RestGateway.WhatWasRemoved(chosenFruit.getName());
} else {
    throw new RuntimeException("Could not remove fruit from basket.");
}
return chosenFruit;
Of course, each of the calls to RestGateway.SomeEndpoint() is blocking because it does not use .async() in building my request.
So now let's add .async() and return a CompletableFuture from each of the RestGateway interactions.
My initial thought is to do this:
Fruit chosenFruit;
RestGateway.GetFruitBasket("today")
    .thenCompose(fruitBasket -> {
        chosenFruit = chooseFruitFromBasket(fruitBasket);
        return RestGateway.RemoveFromBasket(chosenFruit);
    })
    .thenCompose(success -> {
        if (success) {
            return RestGateway.WhatWasRemoved(chosenFruit.getName());
        } else {
            throw new RuntimeException("Could not remove fruit from basket.");
        }
    });
return chosenFruit;
Because this seems to guarantee me that execution will happen sequentially, and if a previous stage fails then the rest will fail.
Unfortunately, this example is simple and has far fewer stages than my actual use-case. It feels like I'm writing lots of nested conditionals inside of stacked .thenCompose() blocks. Is there any way to write this in a more sectioned/compartmentalized way?
What I'm looking for is something like the original code:
FruitBasket fruitBasket = RestGateway.GetFruitBasket("today").get();
Fruit chosenFruit = chooseFruitFromBasket(fruitBasket);
Boolean success = RestGateway.RemoveFromBasket(chosenFruit).get();
if (success) {
    RestGateway.WhatWasRemoved(chosenFruit.getName()).get();
} else {
    throw new RuntimeException("Could not remove fruit from basket.");
}
return chosenFruit;
But the calls to .get() are blocking! So there is absolutely no benefit from the asynchronous re-write of the RestGateway!
TL;DR - Is there any way to preserve the original code flow, while capturing the benefits of asynchronous non-blocking web interactions? The only way I see it is to cascade lots of .thenApply() or .thenCompose methods from the CompletableFuture library, but there must be a better way!
I think this problem is solved by async/await in JavaScript, but I don't think Java has anything of that sort.
As the folks in the comments section mentioned, there's not much that can be done.
I will, however, close this off with a link to another Stack Overflow question that proved tremendously helpful and solved another problem that I didn't explicitly mention: namely, how to pass values forward from previous stages while dealing with this warning:
variable used in lambda expression should be final or effectively final.
Using values from previously chained thenCompose lambdas in Java 8
I ended up with something like this:
CompletableFuture<Boolean> stepOne(String input) {
    return RestGateway.GetFruitBasket("today")
        .thenCompose(fruitBasket -> {
            Fruit chosenFruit = chooseFruitFromBasket(fruitBasket);
            return stepTwo(chosenFruit);
        });
}

CompletableFuture<Boolean> stepTwo(Fruit chosenFruit) {
    return RestGateway.RemoveFromBasket(chosenFruit)
        .thenCompose(success -> {
            if (success) {
                return stepThree(chosenFruit.getName());
            } else {
                throw new RuntimeException("Could not remove fruit from basket.");
            }
        });
}

CompletableFuture<Boolean> stepThree(String fruitName) {
    return RestGateway.WhatWasRemoved(fruitName);
}
Assuming that RestGateway.WhatWasRemoved() returns a CompletableFuture<Boolean>.
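To keep error handling in one place, the caller can attach a single handler to the composed future; a sketch (the println calls are just placeholders):
stepOne("today")
    .exceptionally(throwable -> {
        // Any failure in steps one to three lands here.
        System.err.println("Fruit removal failed: " + throwable);
        return Boolean.FALSE;
    })
    .thenAccept(success -> System.out.println("Removed: " + success));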

Identify record that is culprit - coding practices

Is method chaining good?
I am not against functional programming that uses method chaining a lot, but against a herd mentality where people mindlessly run behind something that is new.
For example: I am processing a list of items using stream programming and need to find out the exact row that resulted in throwing a NullPointerException.
private void test() {
    List<User> aList = new ArrayList<>();
    // fill aList with some data
    aList.stream().forEach(x -> doSomethingMeaningFul(x.getAddress()));
}

private void doSomethingMeaningFul(Address x) {
    // Do something
}
So in the example above, if any object in the list is null, it will lead to a NullPointerException when calling x.getAddress() and bail out, without giving us a hook to identify which User record has this problem.
I may be missing something that offers this feature in stream programming, any help is appreciated.
Edit 1:
NPE is just an example; there are several other RuntimeExceptions that could occur. Writing a filter would essentially mean checking for every RTE condition based on the operation I am performing, and checking for every operation will become a pain.
To give a better idea of what I mean, the following is the snippet using the older style; I couldn't find any equivalent with streams / functional programming methods.
List<User> aList = new ArrayList<>();
// Fill list with some data
int counter = 0;
User u = null;
try {
    for (; counter < aList.size(); counter++) {
        u = aList.get(counter);
        u.doSomething();
        int result = u.getX() / u.getY();
    }
} catch (Exception e) {
    System.out.println("Error processing at index:" + counter + " with User record:" + u);
    System.out.println("Exception:" + e);
}
This will be a boon during the maintenance phase (the longest phase), pointing to exact data-related issues which are difficult to reproduce.
Benefits:
- Find the exact index causing the issue, pointing to the data
- Any RTE is recorded and can be analyzed against the user record
- Smaller stack trace to look at
Is method chaining good?
As so often, the simple answer is: it depends.
When you
- know what you are doing,
- are very sure that elements will never be null, so the chance of an NPE in such a construct is (close to) zero,
- and the chaining of calls leads to improved readability,
then sure, chain calls.
If any of the above criteria isn't clearly fulfilled, then consider not doing that.
In any case, it might be helpful to distribute your method calls on new lines. Tools like IntelliJ actually give you advanced type information for each line, when you do that (well, not always, see my own question ;)
From a different perspective: to the compiler, it doesn't matter much whether you chain calls. That really only matters to humans, either for readability or during debugging.
There are a few aspects to this.
1) Nulls
It's best to avoid the problem of checking for nulls, by never assigning null. This applies whether you're doing functional programming or not. Unfortunately a lot of library code does expose the possibility of a null return value, but try to limit exposure to this by handling it in one place.
Regardless of whether you're doing FP or not, you'll find you get a lot less frustrated if you never have to write null checks when calling your own methods, because your own methods can never return null.
An alternative to variables that might be null is to use Java 8's Optional class.
Instead of:
public String myMethod(int i) {
    if (i > 0) {
        return "Hello";
    } else {
        return null;
    }
}
Do:
public Optional<String> myMethod(int i) {
    if (i > 0) {
        return Optional.of("Hello");
    } else {
        return Optional.empty();
    }
}
Look at Optional Javadoc to see how this forces the caller to think about the possibility of an Optional.empty() response.
As a bridge between the worlds of "null represents absent" and "Optional.empty() represents absent", you can use Optional.ofNullable(val), which returns Optional.empty() when val == null. But do bear in mind that Optional.of(null) is not a valid value at all: it throws a NullPointerException, whereas Optional.ofNullable(null) returns Optional.empty().
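A quick standalone illustration of the difference:
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        String maybeNull = null;
        Optional<String> a = Optional.ofNullable(maybeNull); // Optional.empty()
        Optional<String> b = Optional.ofNullable("Hello");   // Optional[Hello]
        System.out.println(a.isPresent());                   // false
        System.out.println(b.isPresent());                   // true
        // Optional.of(maybeNull) would throw a NullPointerException here.
    }
}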
2) Exceptions
It's true that throwing an exception in a stream handler doesn't work very well. Exceptions aren't a very FP-friendly mechanism. The FP-friendly alternative is Either -- which isn't a standard part of Java but is easy to write yourself or find in third party libraries: Is there an equivalent of Scala's Either in Java 8?
public Either<Exception, Result> meaningfulMethod(Value val) {
    try {
        return Either.right(methodThatMightThrow(val));
    } catch (Exception e) {
        return Either.left(e);
    }
}
... then:
List<Either<Exception, Result>> results = listOfValues.stream()
    .map(this::meaningfulMethod)
    .collect(Collectors.toList());
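If you then want to split successes from failures, something along these lines could work, assuming your Either implementation exposes isRight(), getRight() and getLeft() accessors (names vary between libraries):
Map<Boolean, List<Either<Exception, Result>>> byOutcome = results.stream()
    .collect(Collectors.partitioningBy(Either::isRight));    // assumed accessor

List<Result> successes = byOutcome.get(true).stream()
    .map(Either::getRight)                                    // assumed accessor
    .collect(Collectors.toList());
List<Exception> failures = byOutcome.get(false).stream()
    .map(Either::getLeft)                                     // assumed accessor
    .collect(Collectors.toList());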
3) Indexes
You want to know the index of the stream element, when you're using a stream made from a List? See Is there a concise way to iterate over a stream with indices in Java 8?
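One common approach from that question, applied to the example here (a sketch; requires java.util.stream.IntStream):
IntStream.range(0, aList.size())
    .forEach(i -> {
        User u = aList.get(i);
        try {
            doSomethingMeaningFul(u.getAddress());
        } catch (RuntimeException e) {
            System.out.println("Error processing at index:" + i + " with User record:" + u);
        }
    });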
In your test() function you are creating an empty list: List<User> aList = new ArrayList<>();
And doing a forEach on it. First add some elements to aList.
If you want to handle null values you can add .filter(x -> x != null) before the forEach; it will filter out all null values.
Below is the code:
private void test() {
    List<User> aList = new ArrayList<>();
    aList.stream().filter(x -> x != null).forEach(x -> doSomethingMeaningFul(x.getAddress()));
}

private void doSomethingMeaningFul(Address x) {
    // Do something
}
You can write a block of code in streams, and you can find out the list item which might result in a NullPointerException. I hope this code might help:
private void test() {
    List<User> aList = new ArrayList<>();
    aList.stream().forEach(x -> {
        if (x.getAddress() != null) {
            doSomethingMeaningFul(x.getAddress());
        } else {
            System.out.println(x + " doesn't have an address");
        }
    });
}

private void doSomethingMeaningFul(Address x) {
    // Do something
}
If you want, you can throw a NullPointerException or a custom exception like AddressNotFoundException in the else part.

How can I collect results of a Java stream operation that throws exceptions in a concise manner?

Have a look at the following snippet that tries to convert a list of strings into a list of class objects:
public static List<Class<?>> f1(String... strings) {
    return Stream.of(strings)
        .map(s -> {
            try {
                return Class.forName(s);
            } catch (ClassNotFoundException e) {
                System.out.println(e.getMessage());
            }
            return null;
        })
        .collect(Collectors.toList());
}
Because of the way Java handles checked exceptions in streams (as has been discussed at length here - where I also blatantly stole my example snippet from), you have to have the additional return statement after the try-catch block. This will result in an unwanted null reference being added to the result of type List<Class<?>>.
To avoid this extra null reference, I came up with the following function, achieving a better result with a plain old procedural loop:
public static List<Class<?>> f2(String... strings) {
    List<Class<?>> classes = new ArrayList<>();
    for (String s : strings) {
        try {
            classes.add(Class.forName(s));
        } catch (ClassNotFoundException e) {
            // Handle exception here
        }
    }
    return classes;
}
Does this observation allow me to draw a conclusion in the form of a best-practice advice, that could be expressed like the following?
If you need to call a function that throws an exception inside a stream and you cannot make use of stream.parallel(), better use a loop, because:
You'll need the try-catch block anyway (notwithstanding the tricky solutions involving some kind of wrapper around the throwing function provided in the aforementioned discussion)
Your code will not be less concise (mainly because of 1.)
Your loop will not break in case of an exception
What do you think?
You can do this without introducing elements which you have to filter in a subsequent step:
public static List<Class<?>> f1(String... strings) {
    return Arrays.stream(strings)
        .flatMap(s -> {
            try {
                return Stream.of(Class.forName(s));
            } catch (ClassNotFoundException e) {
                // flatMap treats a null mapped stream as empty,
                // so the failed element is simply omitted
                return null;
            }
        })
        .collect(Collectors.toList());
}
Still, this isn’t more concise than a loop solution.
But it should be noted that this has nothing to do with checked exceptions. You want to continue in the exceptional case, omitting the failed element. That always requires you to catch the exception to implement this alternative behavior (the default would be to propagate the exception to the caller), whether the exception is checked or unchecked. The checked case has the advantage of reminding you that you have to do this.
In other words, the Stream API does not allow you to implement the behavior of propagating checked exceptions to the caller in a simple way, but that's not what you want here anyway. If Class.forName(String) were designed to throw unchecked exceptions only, omitting the try … catch block in map would cause the entire operation to abort in the exceptional case by relaying the exception to the caller. But, as said, that's not what you want here.
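For comparison, if aborting the whole operation on the first failure is what you want, the usual workaround is to wrap the checked exception in an unchecked one (a sketch, not part of the original answer):
public static List<Class<?>> f3(String... strings) {
    return Arrays.stream(strings)
        .map(s -> {
            try {
                return Class.forName(s);
            } catch (ClassNotFoundException e) {
                // Rethrow unchecked so the whole stream pipeline aborts
                throw new IllegalStateException("Unknown class: " + s, e);
            }
        })
        .collect(Collectors.toList());
}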
As #Patrick wrote, you could filter your null classes. Just add a filter after the map in your stream:
.filter(Objects::nonNull)
.collect(Collectors.toList());
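Put together with the question's f1, that gives the following (the same method with only the filter added; Objects is java.util.Objects):
public static List<Class<?>> f1(String... strings) {
    return Stream.of(strings)
        .map(s -> {
            try {
                return Class.forName(s);
            } catch (ClassNotFoundException e) {
                System.out.println(e.getMessage());
                return null;
            }
        })
        .filter(Objects::nonNull)          // drop the placeholder nulls
        .collect(Collectors.toList());
}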

Try-Catch Instead of Null Check When Using Several Getters

My problem is the following: I have quite a long getter chain, i.e.,
objectA.getObjectB().getObjectC().getObjectD().getObjectE().getName();
Due to "bad" database/entity design (some things were introduced later than others) it happens that getObjectB(), getObjectC() or getObjectD() could return NULL.
Usually we use null-checks all the time, but in this case, I'd have to use
ObjectB b = objectA.getObjectB();
if (b != null) {
    ObjectC c = b.getObjectC();
    if (c != null) {
        ObjectD d = c.getObjectD();
        if (d != null) {
            return d.getObjectE().getName();
        }
    }
}
return "";
Instead it would be much easier to simply use a try-catch block
try {
    return objectA.getObjectB().getObjectC().getObjectD().getObjectE().getName();
} catch (NullPointerException e) {
    return "";
}
In this case I don't really care which object returned NULL, it's either display a name or don't. Are there any complications or is it bad design to use try-catch instead of checks?
Thanks for your input.
If it is an option to use Java 8, you can use Optional as follows:
Optional.ofNullable(objectA)
.map(a -> a.getObjectB())
.map(b -> b.getObjectC())
.map(c -> c.getObjectD())
.map(d -> d.getObjectE())
.map(e -> e.getName())
.orElse("");
This kind of method chaining is called a "Train wreck" and is not preferred.
Such a statement also violates the Law of Demeter. Let me give you an example from the book Clean Code by Robert C. Martin:
String scratchDirPath = ctxt.getOptions().getScratchDir().getAbsolutePath();
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(scratchDirPath));
//write to file...
This is similar to what you have, and it is a bad practice. It could at least be refactored to the following:
Options options = ctxt.getOptions();
File scratchDir = options.getScratchDir();
String scratchDirPath = scratchDir.getAbsolutePath();
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(scratchDirPath));
//write to file...
This is still violating the Law of Demeter but is at least partially better.
The most preferable way would be to find out why the scratchDirPath is needed and ask the ctxt object to provide it to you. So it would look like this:
BufferedOutputStream bos = ctxt.createScratchDirFileStream();
This way ctxt does not expose all its internals. This decouples the calling code from the implementation of ctxt.
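A sketch of what that method might look like inside the ctxt class (one possible implementation, assuming getScratchDir() returns a File as in the example above):
public BufferedOutputStream createScratchDirFileStream() throws IOException {
    // ctxt owns its options and scratch directory, so callers never walk the object graph
    File scratchDir = getOptions().getScratchDir();
    File scratchFile = File.createTempFile("scratch", ".tmp", scratchDir);
    return new BufferedOutputStream(new FileOutputStream(scratchFile));
}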
It's really a judgement call only you and your team can make, but there's (at least) one objective thing I'll call out: In the case where one of those methods returns null, your first code block will perform better than your second one. Throwing an exception is expensive relative to a simple branch. (In the case where none of the methods returns null, I think the second might be almost unmeasurably faster than the first, as you avoid the branching.) In either case, if performance matters, test it in your environment. (But test it properly; micro benchmarks are hard. Use tools.)
Most of time, the difference won't matter in the real world, of course, but that's the significant difference at runtime.

Is it ok to handle a class metadata through reflection to ensure a DRY approach?

The title might seem unsettling, but let me explain.
I'm facing an interesting challenge: I have a hierarchy of classes, each of which has an associated object that stores metadata related to each one of its attributes (an int-valued enum with edit flags like UPDATED or NO_UPDATE).
The problem comes when merging two objects, because I don't want to check EVERY field on a class to see if it was updated and then skip or apply the changes.
My idea: Reflection.
All the objects are behind an interface, so I could use IObject.class.getMethods() and iterate over that array in this fashion:
IClass clazz = ...;               // instance of the first class
IAnotherClass anotherClass = ...; // instance of the second class
for (Method m : IObject.class.getMethods()) {
    if (m.getName().startsWith("get")) {
        try {
            // Under this method (which is a getter) I cast it on
            // both classes who implement interfaces that extend an
            // interface that defines the getters to make them
            // consistent and ensure I'll invoke the same methods.
            String propertyClass = (String) m.invoke(clazz);
            String propertyAnotherClass = (String) m.invoke(anotherClass);
            if (!propertyClass.equals(propertyAnotherClass)) {
                // Update attribute and attribute status.
            }
        } catch (Exception e) {
        }
    }
}
Is there another way to implement this, or should I stick to lengthy methods invoking attribute after attribute and doing the checks like that? The objects are not going to change that much and the architecture is quite modular, so there is not much updating involved if the fields change, but having to change a method like that worries me a little.
EDIT 1: I'm posting working code of what I have got so far. This code is a solution for me but, though it works, I'm keeping it as a last resort, not because I have time to spend but because I don't want to reinvent the wheel. If I use it, I'll make a static list with the methods so I only have to fetch that list once, considering the fact that AlexR pointed out.
private static void merge(IClazz from, IClazz to) {
    Method methods[] = from.getClass().getDeclaredMethods();
    for (Method m : methods) {
        if (m.getName().startsWith("get") && !m.getName().equals("getMetadata")) {
            try {
                String commonMethodAnchor = m.getName().split("get")[1];
                if (!m.getReturnType().cast(m.invoke(from)).equals(m.getReturnType().cast(m.invoke(to)))) {
                    String setterMethodName = "set" + commonMethodAnchor;
                    Method setter = IClazz.class.getDeclaredMethod(setterMethodName, m.getReturnType());
                    setter.invoke(to, m.getReturnType().cast(m.invoke(from)));
                    // Updating metadata
                    String metadataMethodName = "set" + commonMethodAnchor + "Status";
                    Method metadataUpdater = IClazzMetadata.class.getDeclaredMethod(metadataMethodName, int.class);
                    metadataUpdater.invoke(to.getMetadata(), 1);
                }
            } catch (Exception e) {
            }
        }
    }
}
metadataUpdater sets the value to 1 just to simulate the "UPDATED" flag I'm using on the real case scenario.
EDIT 3: Thanks Juan, David and AlexR for your suggestions and directions! They really pointed me to consider things I did not consider at first (I'm upvoting all your answers because all of them helped me).
After adding what AlexR suggested and checking jDTO and Apache Commons (finding out that in the end the general concepts are quite similar), I've decided to stick to my code instead of using other tools, since it is working given the object hierarchy and metadata structure of the solution and there are no exceptions popping up so far. The code is the one in the 2nd edit and I've placed it in a helper class that did the trick in the end.
Apache Commons Bean Utils may resolve your problem: http://commons.apache.org/beanutils/
If you want to copy all properties, try to use copyProperties: http://commons.apache.org/beanutils/v1.8.3/apidocs/src-html/org/apache/commons/beanutils/BeanUtils.html#line.134
Look at an example from: http://www.avajava.com/tutorials/lessons/how-do-i-copy-properties-from-one-bean-to-another.html
FromBean fromBean = new FromBean("fromBean", "fromBeanAProp", "fromBeanBProp");
ToBean toBean = new ToBean("toBean", "toBeanBProp", "toBeanCProp");

System.out.println(ToStringBuilder.reflectionToString(fromBean));
System.out.println(ToStringBuilder.reflectionToString(toBean));

try {
    System.out.println("Copying properties from fromBean to toBean");
    BeanUtils.copyProperties(toBean, fromBean);
} catch (IllegalAccessException e) {
    e.printStackTrace();
} catch (InvocationTargetException e) {
    e.printStackTrace();
}

System.out.println(ToStringBuilder.reflectionToString(fromBean));
System.out.println(ToStringBuilder.reflectionToString(toBean));
I think the best approach would be using proxy objects, either dynamic proxies or cglib enhancers or something like it, so you decorate the getters and setters and you can keep track of the changes there.
Hope it helps.
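For example, a minimal sketch with a JDK dynamic proxy that records which setters were called (IClazz is the interface from the question; the tracking strategy is my own, not a library feature):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashSet;
import java.util.Set;

public class ChangeTrackingHandler implements InvocationHandler {
    private final Object target;
    private final Set<String> changedProperties = new HashSet<>();

    public ChangeTrackingHandler(Object target) {
        this.target = target;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Record every setter call so a later merge knows what was updated
        if (method.getName().startsWith("set")) {
            changedProperties.add(method.getName().substring(3));
        }
        return method.invoke(target, args);
    }

    public Set<String> getChangedProperties() {
        return changedProperties;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface, ChangeTrackingHandler handler) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}
Wrapping an IClazz instance with wrap(instance, IClazz.class, handler) then lets the merge consult getChangedProperties() instead of comparing every getter reflectively.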
Your approach is OK, but keep in mind that getMethod() is much slower than invoke(), so if your code is performance critical you will probably want to cache the Method objects.
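A sketch of such a cache, keyed by interface (the layout is an assumption, not from the answer):
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class GetterCache {
    private static final Map<Class<?>, List<Method>> CACHE = new ConcurrentHashMap<>();

    private GetterCache() { }

    // Look up the getters once per interface and reuse them on every merge
    public static List<Method> gettersOf(Class<?> iface) {
        return CACHE.computeIfAbsent(iface, c -> {
            List<Method> getters = new ArrayList<>();
            for (Method m : c.getMethods()) {
                if (m.getName().startsWith("get") && m.getParameterCount() == 0) {
                    getters.add(m);
                }
            }
            return getters;
        });
    }
}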
