How to make a condition block in Java execute only once?

Consider a file reading scenario, where the first line is a header. When looping the file line by line, an 'if condition' would be used to check whether the current line is the first line.
for(Line line : Lines)
{
if(firstLine)
{
//parse the headers
}else
{
//parse the body
}
}
Once control has entered the 'if block' for the first line, checking the 'if condition' on every subsequent iteration is wasted work.
Is there any concept in java so that, once after executing, the particular line of code just vanishes away?
I think this would be a great feature if introduced or does it already exist?
Edit: Thank you for your answers. As I can see, many approaches were based on 'divide the data set'. So instead of a 'first line', consider that there is a 'special line':
for(Line line : Lines) //1 million lines
{
if(specialLine)
{
//handle the special line
}else
{
//handle the rest
}
}
In the above code block, suppose the special line appears only after halfway through the 1 million iterations; how would you handle this scenario efficiently?

What about changing the logic a little!
So because the header is the first line, you can get the first line and parse it alone, then parse the body starting from the second line:
Line firstLine = Lines.get(0);
//parse the headers ^------------------Get the first line
for (int i = 1; i < Lines.size(); i++) {
// ^--------------------------------Parse the body starting from the 2nd line
//parse the body
}
In this case you don't need any verification.
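A runnable sketch of this approach (the Lines list is a List<String> here, and the parseHeader/parseBody helpers are illustrative stand-ins):

```java
import java.util.Arrays;
import java.util.List;

public class HeaderFirst {
    // Stand-in parsing helpers, purely for illustration
    static String parseHeader(String line) { return "header:" + line; }
    static String parseBody(String line) { return "body:" + line; }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("id,name", "1,foo", "2,bar");
        // Handle the header exactly once, outside the loop
        System.out.println(parseHeader(lines.get(0)));
        // The loop body now contains no conditional at all
        for (int i = 1; i < lines.size(); i++) {
            System.out.println(parseBody(lines.get(i)));
        }
    }
}
```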

Try to divide the data set.
public List<Line> head(List<Line> list) {
return list.subList(0, 1);
}
public List<Line> tail(List<Line> list) {
return list.subList(1, list.size());
}
head(lines).forEach(line -> { /* parse the header */ });
tail(lines).forEach(line -> { /* parse the body */ });

You could (pseudo code) simply reverse things:
// parse headers from the first line
for(Line line : remainingLines)
{
// parse the body
}
But then: you would be using a simple boolean flag for that check. That if check is almost for free, and even more: when this loop sees so many repetitions that it is worth optimizing, the JIT will kick in and create optimized code for it.
And if the JIT doesn't kick in, it isn't worth optimizing this. And of course, there is also branch prediction on the hardware level - in other words: you can expect that modern hardware/software makes sure that the optimal thing happens. Without you doing much else besides writing clear, simple code.
In that sense, your question is a typical example of premature optimization. You are worrying about the wrong things. Focus on writing clean, simple code that gets the job done, because that is what enables the JIT to do its runtime magic. Most optimizations that you apply at the Java source code level do not matter at runtime.
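For completeness, the "reverse things" idea with an explicit Iterator, so the loop body stays branch-free (the H:/B: tagging is just to make the split visible):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorSplitDemo {
    static String run(List<String> lines) {
        StringBuilder out = new StringBuilder();
        Iterator<String> it = lines.iterator();
        if (it.hasNext()) {
            out.append("H:").append(it.next()); // consume the header first
        }
        while (it.hasNext()) {                  // the loop body is branch-free
            out.append(";B:").append(it.next());
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList("id", "a", "b"))); // H:id;B:a;B:b
    }
}
```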

The simplest answer is to use some kind of flags to ensure that you parse the first line as header only once.
boolean isHeaderParsed = false;
for(Line line: Lines)
{
if(!isHeaderParsed && firstLine)
{
//parse the headers
isHeaderParsed = true;
}else
{
//parse the body
}
}
This will achieve what you are looking for. But if you are looking for something fancier, interfaces are what you need.
Parser parser = new HeaderParser();
for(Line line : Lines)
{
parser = parser.parse(line);
}
// Implementations for Parser
public interface Parser {
Parser parse(Line line);
}
public class HeaderParser implements Parser {
public Parser parse(Line line) {
// your business logic
return new BodyParser();
}
}
public class BodyParser implements Parser {
public Parser parse(Line line) {
// your logic
return this;
}
}
This is essentially the State pattern: each parse call returns the parser to use for the next line. (A related idea, a deferred computation, is sometimes called a thunk.)
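A self-contained, runnable version of the same idea (the line type and output handling are simplified for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class StateParserDemo {
    interface Parser {
        // Parses one line and returns the parser to use for the next line
        Parser parse(String line, StringBuilder out);
    }

    static class HeaderParser implements Parser {
        public Parser parse(String line, StringBuilder out) {
            out.append("H:").append(line).append(';');
            return new BodyParser(); // switch state after the first line
        }
    }

    static class BodyParser implements Parser {
        public Parser parse(String line, StringBuilder out) {
            out.append("B:").append(line).append(';');
            return this; // stay in the body state
        }
    }

    static String run(List<String> lines) {
        StringBuilder out = new StringBuilder();
        Parser parser = new HeaderParser();
        for (String line : lines) {
            parser = parser.parse(line, out);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList("id,name", "1,a", "2,b")));
        // H:id,name;B:1,a;B:2,b;
    }
}
```

The loop itself never checks a condition; the state switch happens once, inside the first parse call.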

Since everybody else is posting some weird answers: you could have an interface.
interface StringConsumer{
void consume(String s);
}
Then just change interfaces after the first attempt.
StringConsumer firstLine = s->{System.out.printf("first line: %s\n", s);};
StringConsumer rest = s->{System.out.printf("more: %s\n", s);};
StringConsumer consumer = firstLine;
for(String line: lines){
consumer.consume(line);
consumer = rest;
}
If you don't like the reassignment, you can make consumer a field and change its value in the first consumer's consume method.
class StringProvider{
StringConsumer consumer;
List<String> lines;
//assume this is initialized and populated somewhere.
void process(){
StringConsumer rest = s->{System.out.printf("more: %s\n", s);};
consumer = s->{
System.out.printf("firstline: %s\n", s);
consumer = rest;
};
for(String line: lines){
consumer.consume(line);
}
}
}
Now the first consume call switches the consumer that is being used, and subsequent calls go to the 'rest' consumer.

Have a boolean flag outside the if-else and flip it once the block has been entered; evaluate this flag along with the condition:
boolean flag=true;
for(Line line: Lines)
{
if(firstLine && flag)
{
flag=false;
//parse the headers
}else
{
//parse the body
}
}

As no one has said it yet, I'll just go ahead and say it: this extra if statement is actually pretty cheap, performance-wise.
At the lowest level of your machine there is a mechanism called branch prediction. if statements tend to be expensive when the hardware can't predict their outcome, but when it guesses right, testing a boolean with an if comes out pretty much for free. And since your if will evaluate to false 99.99% of the time, any hardware will do just fine.
So don't worry about that too much. You're not wasting cycles there.
And answering your question... it kind of exists already. The check doesn't vanish, but if you code it right it gets pretty close to vanished.

Related

Trying to add substrings from newLines in a large file to a list

I downloaded my extended listening history from Spotify and I am trying to make a program to turn the data into a list of artists, without duplicates, that I can easily make sense of. The file is rather huge because it has data on every stream I have played since 2016 (307,790 lines of text in total). This is what 2 lines of the file look like:
{"ts":"2016-10-30T18:12:51Z","username":"edgymemes69endmylifepls","platform":"Android OS 6.0.1 API 23 (HTC, 2PQ93)","ms_played":0,"conn_country":"US","ip_addr_decrypted":"68.199.250.233","user_agent_decrypted":"unknown","master_metadata_track_name":"Devil's Daughter (Holy War)","master_metadata_album_artist_name":"Ozzy Osbourne","master_metadata_album_album_name":"No Rest for the Wicked (Expanded Edition)","spotify_track_uri":"spotify:track:0pieqCWDpThDCd7gSkzx9w","episode_name":null,"episode_show_name":null,"spotify_episode_uri":null,"reason_start":"fwdbtn","reason_end":"fwdbtn","shuffle":true,"skipped":null,"offline":false,"offline_timestamp":0,"incognito_mode":false},
{"ts":"2021-03-26T18:15:15Z","username":"edgymemes69endmylifepls","platform":"Android OS 11 API 30 (samsung, SM-F700U1)","ms_played":254120,"conn_country":"US","ip_addr_decrypted":"67.82.66.3","user_agent_decrypted":"unknown","master_metadata_track_name":"Opportunist","master_metadata_album_artist_name":"Sworn In","master_metadata_album_album_name":"Start/End","spotify_track_uri":"spotify:track:3tA4jL0JFwFZRK9Q1WcfSZ","episode_name":null,"episode_show_name":null,"spotify_episode_uri":null,"reason_start":"fwdbtn","reason_end":"trackdone","shuffle":true,"skipped":null,"offline":false,"offline_timestamp":1616782259928,"incognito_mode":false},
It is formatted in the actual text file so that each stream is on its own line. NetBeans is telling me the exception is happening at line 19 and it only fails when I am looking for a substring bounded by the indexOf function. My code is below. I have no idea why this isn't working, any ideas?
import java.util.*;
public class MainClass {
public static void main(String args[]){
File dat = new File("SpotifyListeningData.txt");
List<String> list = new ArrayList<String>();
Scanner swag = null;
try {
swag = new Scanner(dat);
}
catch(Exception e) {
System.out.println("pranked");
}
while (swag.hasNextLine())
if (swag.nextLine().length() > 1)
if (list.contains(swag.nextLine().substring(swag.nextLine().indexOf("artist_name"), swag.nextLine().indexOf("master_metadata_album_album"))))
System.out.print("");
else
try {list.add(swag.nextLine().substring(swag.nextLine().indexOf("artist_name"), swag.nextLine().indexOf("master_metadata_album_album")));}
catch(Exception e) {}
System.out.println(list);
}
}
Find a JSON parser you like.
Create a class with the fields you care about, marked up to the parser's specs.
Read the file into a collection of objects. Most parsers will stream the contents, so you're not storing a massive string.
You can then load the data into objects and store that as you see fit. For your purposes, a TreeSet is probably what you want.
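A minimal pure-Java sketch of the TreeSet idea (using a regex stand-in instead of a real JSON parser, purely for illustration; a proper parser is still the better choice):

```java
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ArtistSetDemo {
    private static final Pattern ARTIST =
        Pattern.compile("\"master_metadata_album_artist_name\":\"([^\"]*)\"");

    // Extracts the artist from one line of the dump, or null if absent
    static String artistOf(String line) {
        Matcher m = ARTIST.matcher(line);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        Set<String> artists = new TreeSet<>(); // sorted, deduplicated
        String[] lines = {
            "{\"master_metadata_album_artist_name\":\"Ozzy Osbourne\"},",
            "{\"master_metadata_album_artist_name\":\"Sworn In\"},",
            "{\"master_metadata_album_artist_name\":\"Ozzy Osbourne\"},"
        };
        for (String line : lines) {
            String a = artistOf(line);
            if (a != null) artists.add(a);
        }
        System.out.println(artists); // [Ozzy Osbourne, Sworn In]
    }
}
```

The TreeSet removes the need for the contains check entirely: adding a duplicate is simply a no-op.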
Your code throws exceptions partly because you don't use braces. Please do use braces in every block, whether it is an if, an else, or a loop; it's good practice and prevents unnecessary bugs.
More importantly, every time scanner.nextLine() is called it reads another line from the file, so calling it several times per iteration skips lines; call it once per iteration and store the result.
The best way to deal with this is to write a class containing fields matching the JSON in each line of the file, then map the JSON to the class and read the desired field from it.
Your approach is risky and depends on the exact structure of the data, even on whitespace. However, I fixed some lines in your code and this will work for your purpose, although I don't really recommend manipulating strings this way:
while (swag.hasNextLine()) {
String swagNextLine = swag.nextLine();
if (swagNextLine.length() > 1) {
String toBeAdded = swagNextLine.substring(swagNextLine.indexOf("artist_name") + "artist_name".length() + 2
, swagNextLine.indexOf("master_metadata_album_album") - 2);
if (list.contains(toBeAdded)) {
System.out.print("Match");
} else {
try {
list.add(toBeAdded);
} catch (Exception e) {
System.out.println("Add to list failed");
}
}
System.out.println(list);
}
}

Identify record that is culprit - coding practices

Is method chaining good?
I am not against functional programming that uses method chaining a lot, but I am against a herd mentality where people mindlessly chase whatever is new.
For example: I am processing a list of items using stream programming and need to find the exact row that resulted in throwing a NullPointerException.
private void test() {
List<User> aList = new ArrayList<>();
// fill aList with some data
aList.stream().forEach(x -> doSomethingMeaningFul(x.getAddress()));
}
private void doSomethingMeaningFul(Address x) {
// Do something
}
So in the example above, if any object in the list is null, it will lead to a NullPointerException when calling x.getAddress(), and the stream bails out without giving us a hook to identify which User record has the problem.
I may be missing something that offers this feature in stream programming, any help is appreciated.
Edit 1:
NPE is just an example, but there are several other RuntimeExceptions that could occur. Writing filter would essentially mean checking for every RTE condition based on the operation I am performing. And checking for every operation will become a pain.
To give a better idea of what I mean, the following is a snippet using the older style; I couldn't find any equivalent with streams / functional programming methods.
List<User> aList = new ArrayList<>();
// Fill list with some data
int counter = 0;
User u = null;
try {
for (;counter < aList.size(); counter++) {
u = aList.get(counter);
u.doSomething();
int result = u.getX() / u.getY();
}
} catch(Exception e) {
System.out.println("Error processing at index:" + counter + " with User record:" + u);
System.out.println("Exception:" + e);
}
This will be a boon during the maintenance phase (the longest phase), pointing to exact data-related issues that are difficult to reproduce.
**Benefits:**
- Find exact index causing issue, pointing to data
- Any RTE is recorded and analyzed against the user record
- Smaller stacktrace to look at
Is method chaining good?
As so often, the simple answer is: it depends.
When you
know what you are doing
are very sure that elements will never be null, so the chance of an NPE in such a construct is (close to) 0
and the chaining of calls leads to improved readability
then sure, chain calls.
If any of the above criteria isn't clearly fulfilled, then consider not doing that.
In any case, it might be helpful to distribute your method calls on new lines. Tools like IntelliJ actually give you advanced type information for each line, when you do that (well, not always, see my own question ;)
From a different perspective: to the compiler, it doesn't matter much if you chain call. That really only matters to humans. Either for readability, or during debugging.
There are a few aspects to this.
1) Nulls
It's best to avoid the problem of checking for nulls, by never assigning null. This applies whether you're doing functional programming or not. Unfortunately a lot of library code does expose the possibility of a null return value, but try to limit exposure to this by handling it in one place.
Regardless of whether you're doing FP or not, you'll find you get a lot less frustrated if you never have to write null checks when calling your own methods, because your own methods can never return null.
An alternative to variables that might be null, is to use Java 8's Optional class.
Instead of:
public String myMethod(int i) {
if(i>0) {
return "Hello";
} else {
return null;
}
}
Do:
public Optional<String> myMethod(int i) {
if(i>0) {
return Optional.of("Hello");
} else {
return Optional.empty();
}
}
Look at the Optional Javadoc to see how this forces the caller to think about the possibility of an Optional.empty() response.
As a bridge between the worlds of "null represents absent" and "Optional.empty() represents absent", you can use Optional.ofNullable(val), which returns empty when val == null. But bear in mind that Optional.of(null) is not allowed: it throws a NullPointerException, which is exactly why ofNullable exists.
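A small sketch of both halves (myMethod as above, plus the ofNullable bridge):

```java
import java.util.Optional;

public class OptionalDemo {
    static Optional<String> myMethod(int i) {
        return i > 0 ? Optional.of("Hello") : Optional.empty();
    }

    public static void main(String[] args) {
        // The caller is forced to decide what "absent" means here
        System.out.println(myMethod(1).orElse("<none>"));  // Hello
        System.out.println(myMethod(-1).orElse("<none>")); // <none>

        // Bridging from a possibly-null legacy API:
        String legacy = null;
        System.out.println(Optional.ofNullable(legacy).orElse("<none>")); // <none>
    }
}
```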
2) Exceptions
It's true that throwing an exception in a stream handler doesn't work very well. Exceptions aren't a very FP-friendly mechanism. The FP-friendly alternative is Either -- which isn't a standard part of Java but is easy to write yourself or find in third party libraries: Is there an equivalent of Scala's Either in Java 8?
public Either<Exception, Result> meaningfulMethod(Value val) {
try {
return Either.right(methodThatMightThrow(val));
} catch (Exception e) {
return Either.left(e);
}
}
... then:
List<Either<Exception, Result>> results = listOfValues.stream().map(this::meaningfulMethod).collect(Collectors.toList());
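Since Either is not in the JDK, here is a minimal hand-rolled sketch, just enough to support the usage above (the safeDivide example is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class EitherDemo {
    // Minimal Either: exactly one of left (failure) / right (success) is non-null
    static class Either<L, R> {
        final L left;
        final R right;
        private Either(L l, R r) { left = l; right = r; }
        static <L, R> Either<L, R> left(L l)  { return new Either<>(l, null); }
        static <L, R> Either<L, R> right(R r) { return new Either<>(null, r); }
        boolean isRight() { return right != null; }
    }

    // Wraps a computation that might throw, as meaningfulMethod does above
    static Either<Exception, Integer> safeDivide(int x, int y) {
        try {
            return Either.right(x / y);
        } catch (Exception e) {
            return Either.left(e);
        }
    }

    public static void main(String[] args) {
        List<Either<Exception, Integer>> results = new ArrayList<>();
        int[][] inputs = { {10, 2}, {1, 0}, {9, 3} };
        for (int[] in : inputs) {
            results.add(safeDivide(in[0], in[1]));
        }
        // Every element is inspectable: no exception escaped the pipeline
        for (Either<Exception, Integer> r : results) {
            System.out.println(r.isRight() ? "ok: " + r.right : "failed: " + r.left);
        }
    }
}
```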
3) Indexes
You want to know the index of the stream element, when you're using a stream made from a List? See Is there a concise way to iterate over a stream with indices in Java 8?
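One way to combine the index with the error-capturing loop from the question, sketched with IntStream.range (the process helper and message format are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.IntStream;

public class IndexedStreamDemo {
    // IntStream.range supplies the index that a plain object stream loses
    static List<String> process(List<String> users) {
        List<String> log = new ArrayList<>();
        IntStream.range(0, users.size()).forEach(i -> {
            try {
                log.add(i + ": " + users.get(i).toUpperCase());
            } catch (RuntimeException e) {
                // Same diagnostic idea as the indexed for-loop in the question
                log.add("Error at index " + i + " with record: " + users.get(i));
            }
        });
        return log;
    }

    public static void main(String[] args) {
        for (String line : process(Arrays.asList("alice", null, "carol"))) {
            System.out.println(line);
        }
    }
}
```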
In your test() function you are creating an empty list: List<User> aList = new ArrayList<>(); and doing forEach on it. First add some elements to aList.
If you want to handle null values you can add .filter(x -> x != null) before the forEach; it will filter out all null values.
Below is code
private void test() {
List<User> aList = new ArrayList<>();
aList.stream().filter(x-> x != null).forEach(x -> doSomethingMeaningFul(x.getAddress()));
}
private void doSomethingMeaningFul(Address x) {
// Do something
}
You can write a block of code in the stream and find the list item that might result in a NullPointerException. I hope this code helps:
private void test() {
List<User> aList = new ArrayList<>();
aList.stream().forEach(x -> {
if (x.getAddress() != null)
doSomethingMeaningFul(x.getAddress());
else
System.out.println(x + " doesn't have an address");
});
}
private void doSomethingMeaningFul(Address x) {
// Do something
}
If you want, you can throw a NullPointerException or a custom exception like AddressNotFoundException in the else part.

Design pattern for incremental code

According to the business logic, the output of one method is used as the input to another. The logic has a linear flow.
To emulate this behaviour, there is currently a controller class that does everything.
It is very messy: too many lines of code, hard to modify, and the exception handling is very complex. The individual methods do some handling, but global exceptions bubble up, which involves a lot of try-catch statements.
Does there exists a design pattern to address this problem?
Example Controller Class Code
try{
Logic1Inputs logic1_inputs = new Logic1Inputs( ...<some other params>... );
Logic1 l = new Logic1(logic1_inputs);
try{
Logic1Output l1Output = l.execute();
} catch( Logic1Exception l1Exception) {
// exception handling
}
Logic2Inputs logic2_inputs = new Logic2Inputs(l1Output);
Logic2 l2 = new Logic2(logic2_inputs);
try{
Logic2Output l2Output = l2.execute();
} catch( Logic2Exception l2Exception) {
// exception handling
}
Logic3Inputs logic3_inputs = new Logic3Inputs(l1Output, l2Output);
Logic3 l3 = new Logic3(logic2_inputs);
try{
Logic3Output l3Output = l3.execute();
} catch( Logic3Exception l3Exception) {
// exception handling
}
} catch(GlobalException globalEx){
// exception handling
}
I think this is called a pipeline: http://en.wikipedia.org/wiki/Pipeline_%28software%29. This pattern is used for algorithms in which data flows through a sequence of tasks or stages.
You can search for a library that does this (http://code.google.com/p/pipelinepattern) or try your own Java implementation.
Basically you have all your objects in a list and the output from one is passed to the next. This is a naive implementation, but you can add generics and whatever else you need:
public class BasicPipelinePattern {
List<Filter> filters;
public Object process(Object input) {
for (Filter c : filters) {
try {
input = c.apply(input);
} catch (Exception e) {
// exception handling
}
}
return input;
}
}
public interface Filter {
public Object apply(Object o);
}
When faced with problems like this, I like to see how other programming languages might solve it. Then I might borrow that concept and apply it to the language that I'm using.
In javascript, there has been much talk of promises and how they can simplify not only asynchronous processing, but error handling. This page is a great introduction to the problem.
The approach has been called using "thenables". Here's the pseudocode:
initialStep.execute().then(function(result1){
return step2(result1);
}).then(function(result2){
return step3(result2);
}).error(function(error){
handle(error);
}).done(function(result3){
handleResult(result3)
});
The advantage of this pattern is that you can focus on the processing and effectively handle errors in one place without needing to worry about checking for success at each step.
So how would this work in Java? I would take a look at one of the promises/futures libraries, perhaps jdeferred. I would expect that you could put something like this together (assuming Java 8 for brevity):
initialPromise.then( result1 -> {
Logic2 logic2 = new Logic2(new Logic2Inputs(result1));
return logic2.execute();
}).then(result2 -> {
Logic3 logic3 = new Logic3(new Logic3Inputs(result2));
return logic3.execute();
}).catch(exception -> {
handleException(exception)
}).finally( result -> {
handleResult(result);
});
This does, of course, gloss over a hidden requirement in your code: you mention that in step 3 you need the output of both step 1 and step 2. If you were writing Scala, there is syntactic sugar that would handle this for you (leaving out error handling for the moment):
for(result1 <- initialStep.execute();
Logic2 logic2 = new Logic2(Logic2Input(result1));
result2 <- logic2.execute();
Logic3 logic3 = new Logic3(Logic3Input(result1, result2));
result3 <- logic3.execute()) yield result3;
But since you don't have that ability here, you are left with the choice of refactoring each step to take only the output of the previous step, or nesting the processing so that result1 is still in scope when you need to set up step 3.
The classic alternative to this, as #user1121883 mentioned would be to use a Pipeline processor. The downside to this approach is that it works best if your input and output are the same type. Otherwise you are going to have to push Object around everywhere and do a lot of type checking.
Another alternative would be to expose a fluent interface for the pipeline. Again, you'd want to do some refactoring, perhaps to have a parameter-less constructor and a consistent interface for inputs and outputs:
Pipeline p = new Pipeline();
p.then(new Logic1())
.then(new Logic2())
.then(new Logic3())
.addErrorHandler(e->handleError(e))
.complete();
This last option is more idiomatic Java but retains many of the advantages of the thenables processing, so it's probably the way I would go.
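A sketch of what such a fluent Pipeline might look like (hypothetical, with Object in and out, which is the downside noted above for the pipeline pattern):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

public class FluentPipeline {
    private final List<Function<Object, Object>> stages = new ArrayList<>();
    private Consumer<Exception> errorHandler = e -> {};

    public FluentPipeline then(Function<Object, Object> stage) {
        stages.add(stage);
        return this; // fluent: each call returns the pipeline itself
    }

    public FluentPipeline addErrorHandler(Consumer<Exception> handler) {
        errorHandler = handler;
        return this;
    }

    // Runs every stage in order; the first failure stops the chain
    public Object complete(Object input) {
        try {
            for (Function<Object, Object> stage : stages) {
                input = stage.apply(input);
            }
            return input;
        } catch (Exception e) {
            errorHandler.accept(e);
            return null;
        }
    }

    public static void main(String[] args) {
        Object out = new FluentPipeline()
                .then(o -> ((String) o).trim())
                .then(o -> ((String) o).length())
                .addErrorHandler(e -> System.out.println("failed: " + e))
                .complete("  hi  ");
        System.out.println(out); // 2
    }
}
```

Error handling lives in one place, as with the thenables version, at the cost of the casts inside each stage.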

Scanner.findInLine() leaks memory massively

I'm running a simple scanner to parse a string; however, I've discovered that if it's called often enough I get OutOfMemory errors. This code is called as part of the constructor of an object that is built repeatedly for an array of strings:
Edit: Here's the constructor for more info; not much else is happening outside of the try-catch regarding the Scanner.
public Header(String headerText) {
char[] charArr;
charArr = headerText.toCharArray();
// Check that all characters are printable characters
if (charArr.length > 0 && !commonMethods.isPrint(charArr)) {
throw new IllegalArgumentException(headerText);
}
// Check for header suffix
Scanner sc = new Scanner(headerText);
MatchResult res;
try {
sc.findInLine("(\\D*[a-zA-Z]+)(\\d*)(\\D*)");
res = sc.match();
} finally {
sc.close();
}
if (res.group(1) == null || res.group(1).isEmpty()) {
throw new IllegalArgumentException("Missing header keyword found"); // Empty header to store
} else {
mnemonic = res.group(1).toLowerCase(); // Store header
}
if (res.group(2) == null || res.group(2).isEmpty()) {
suffix = -1;
} else {
try {
suffix = Integer.parseInt(res.group(2)); // Store suffix if it exists
} catch (NumberFormatException e) {
throw new NumberFormatException(headerText);
}
}
if (res.group(3) == null || res.group(3).isEmpty()) {
isQuery= false;
} else {
if (res.group(3).equals("?")) {
isQuery = true;
} else {
throw new IllegalArgumentException(headerText);
}
}
// If command was of the form *ABC, reject suffixes and prefixes
if (mnemonic.contains("*")
&& suffix != -1) {
throw new IllegalArgumentException(headerText);
}
}
A profiler memory snapshot shows the read(Char) method of Scanner.findInLine() being allocated massive amounts of memory during operation as I scan through a few hundred thousand strings; after a few seconds it has already allocated over 38MB.
I would think that calling close() on the scanner after using it in the constructor would flag the old object to be cleared by the GC, but somehow it remains and the read method accumulates gigabytes of data before filling the heap.
Can anybody point me in the right direction?
You haven't posted all your code, but given that you are scanning for the same regex repeatedly, it would be much more efficient to compile a static Pattern beforehand and use this for the scanner's find:
static Pattern p = Pattern.compile("(\\D*[a-zA-Z]+)(\\d*)(\\D*)");
and in the constructor:
sc.findInLine(p);
This may or may not be the source of the OOM issue, but it will definitely make your parsing a bit faster.
Related: java.util.regex - importance of Pattern.compile()?
Update: after you posted more of your code, I see some other issues. If you're calling this constructor repeatedly, it means you are probably tokenizing or breaking up the input beforehand. Why create a new Scanner to parse each line? They are expensive; you should be using the same Scanner to parse the entire file, if possible. Using one Scanner with a precompiled Pattern will be much faster than what you are doing now, which is creating a new Scanner and a new Pattern for each line you are parsing.
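To make that concrete, a sketch of one Scanner plus one precompiled Pattern over many lines (the input and mnemonic extraction are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.regex.MatchResult;
import java.util.regex.Pattern;

public class OneScannerDemo {
    // Compiled once, reused for every line of input
    private static final Pattern HEADER =
            Pattern.compile("(\\D*[a-zA-Z]+)(\\d*)(\\D*)");

    // One Scanner walks the whole input; no per-line Scanner or Pattern
    static List<String> mnemonics(String input) {
        List<String> out = new ArrayList<>();
        try (Scanner sc = new Scanner(input)) {
            while (sc.hasNextLine()) {
                if (sc.findInLine(HEADER) != null) {
                    MatchResult res = sc.match();
                    out.add(res.group(1).toLowerCase());
                }
                if (sc.hasNextLine()) sc.nextLine(); // move past the line terminator
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mnemonics("ABC12\nXY7?\n")); // [abc, xy]
    }
}
```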
The strings that are filling up your memory were created in findInLine(). Therefore, the repeated Pattern creation is not the problem.
Without knowing what the rest of the code does, my guess would be that one of the groups you get out of the matcher is being kept in a field of your object. Then that string would have been allocated in findInLine(), as you see here, but the fact that it is being retained would be due to your code.
Edit:
Here's your problem:
mnemonic = res.group(1).toLowerCase();
What you might not realize is that toLowerCase() returns this if there are no uppercase letters in the string. Also, group(int) returns a substring(), which (in the Java versions of that era) creates a new string backed by the same char[] as the full string. So mnemonic actually retains the char[] for the entire line.
The fix would just be:
mnemonic = new String(res.group(1).toLowerCase());
I think your code snippet is not complete; I believe you are calling scanner.findInLine() in a loop. Anyway, try calling scanner.reset(). I hope this will solve your problem.
The JVM apparently does not get time to garbage collect, possibly because it's using the same code (the constructor) repeatedly to create multiple instances of the same class. The JVM may not do anything about GC until something changes on the runtime stack, and in this case that's not happening. I've been warned in the past about doing "too much" in a constructor, as some memory-management behaviors are not quite the same as when other methods are being called.
Your problem is that you are scanning through a couple hundred thousand strings and you are passing the pattern in as a string, so you have a new pattern object for every single iteration of the loop. You can pull the pattern out of the loop, like so:
Pattern toMatch = Pattern.compile("(\\D*[a-zA-Z]+)(\\d*)(\\D*)");
Scanner sc = new Scanner(headerText);
MatchResult res;
try {
sc.findInLine(toMatch);
res = sc.match();
} finally {
sc.close();
}
Then you will only be passing the object reference to toMatch instead of having the overhead of creating a new pattern object for every attempt at a match. This will fix your leak.
Well, I've found the source of the problem; it wasn't the Scanner exactly but the list holding the objects doing the scanning in the constructor.
The problem was an overrun of a list holding references to the objects containing the parsing: essentially, more strings were received per unit of time than could be processed, and the list grew and grew until there was no more RAM. Bounding this list to a maximum size now prevents the parser from overloading memory; I'll be adding some synchronization between the parser and the data source to avoid this overrun in the future.
Thank you all for your suggestions. I've already made some performance changes regarding the scanner, and thank you to @RobI for pointing me to jvisualvm, which allowed me to trace back the exact culprits holding the references. The memory dump wasn't showing the reference linking.

Purpose of "return" statement in Scala?

Is there any real reason of providing the return statement in Scala? (aside from being more "Java-friendly")
Ignoring nested functions, it is always possible to replace Scala calculations with returns with equivalent calculations without returns. This result goes back to the early days of "structured programming", and is called the structured program theorem, cleverly enough.
With nested functions, the situation changes. Scala allows you to place a return buried deep inside a series of nested functions. When the return is executed, control jumps out of all of the nested functions into the innermost containing method, from which it returns (assuming the method is actually still executing; otherwise an exception is thrown). This sort of stack-unwinding could be done with exceptions, but can't be done via a mechanical restructuring of the computation (as is possible without nested functions).
The most common reason you actually would want to return from inside a nested function is to break out of an imperative for-comprehension or resource control block. (The body of an imperative for-comprehension gets translated to a nested function, even though it looks just like a statement.)
for(i <- 1 to bezillion; j <- i to bezillion+6){
if(expensiveCalculation(i, j)){
return otherExpensiveCalculation(i, j)
}
}
withExpensiveResource(urlForExpensiveResource){ resource =>
// do a bunch of stuff
if(done) return
//do a bunch of other stuff
if(reallyDoneThisTime) return
//final batch of stuff
}
It is provided in order to accommodate those circumstances in which it is difficult or cumbersome to arrange all control flow paths to converge at the lexical end of the method.
While it is certainly true, as Dave Griffith says, that you can eliminate any use of return, it can often be more obfuscatory to do so than to simply cut execution short with an overt return.
Be aware, too, that return returns from methods, not function (literals) that may be defined within a method.
Here is an example
This method has lots of if-else statements to control flow, because there is no return (this is what I came up with; you can use your imagination to extend it). I took this from a real-life example and modified it into dummy code (in fact it is longer than this):
Without Return:
def process(request: Request[RawBuffer]): Result = {
if (condition1) {
error()
} else {
val condition2 = doSomethingElse()
if (!condition2) {
error()
} else {
val reply = doAnotherThing()
if (reply == null) {
Logger.warn("Receipt is null. Send bad request")
BadRequest("Couldn't receive receipt")
} else {
reply.hede = initializeHede()
if (reply.hede.isGood) {
success()
} else {
error()
}
}
}
}
}
With Return:
def process(request: Request[RawBuffer]): Result = {
if (condition1) {
return error()
}
val condition2 = doSomethingElse()
if (!condition2) {
return error()
}
val reply = doAnotherThing()
if (reply == null) {
Logger.warn("Receipt is null. Send bad request")
return BadRequest("Couldn't receive receipt")
}
reply.hede = initializeHede()
if (reply.hede.isGood)
return success()
return error()
}
To my eyes, the second one is more readable and more manageable than the first. The depth of indentation (even with well-formatted code) gets deeper and deeper if you don't use a return statement. And I don't like that :)
I view return as a useful when writing imperative style code, which generally means I/O code. If you're doing pure functional code, you don't need (and should not use) return. But with functional code you may need laziness to get performance equivalent to imperative code that can "escape early" using return.
