When should I use streams? - java

I just came across a question when using a List and its stream() method. While I know how to use them, I'm not quite sure about when to use them.
For example, I have a list, containing various paths to different locations. Now, I'd like to check whether a single, given path contains any of the paths specified in the list. I'd like to return a boolean based on whether or not the condition was met.
This, of course, is not a hard task per se. But I wonder whether I should use streams or a for(-each) loop.
The List
private static final List<String> EXCLUDE_PATHS = Arrays.asList(
        "my/path/one",
        "my/path/two"
);
Example using Stream:
private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream()
            .map(String::toLowerCase)
            .filter(path::contains)
            .collect(Collectors.toList())
            .size() > 0;
}
Example using for-each loop:
private boolean isExcluded(String path) {
    for (String excludePath : EXCLUDE_PATHS) {
        if (path.contains(excludePath.toLowerCase())) {
            return true;
        }
    }
    return false;
}
Note that the path parameter is always lowercase.
My first guess is that the for-each approach is faster, because the loop would return immediately, if the condition is met. Whereas the stream would still loop over all list entries in order to complete filtering.
Is my assumption correct? If so, why (or rather when) would I use stream() then?

Your assumption is correct. Your stream implementation is slower than the for-loop.
This stream usage should be as fast as the for-loop though:
EXCLUDE_PATHS.stream()
        .map(String::toLowerCase)
        .anyMatch(path::contains);
This iterates through the items, applying String::toLowerCase and the path::contains check one by one, and terminates at the first item that matches.
Both collect() & anyMatch() are terminal operations. anyMatch() exits at the first found item, though, while collect() requires all items to be processed.

The decision whether to use Streams or not should not be driven by performance considerations, but rather by readability. When it really comes to performance, there are other considerations.
With your .filter(path::contains).collect(Collectors.toList()).size() > 0 approach, you are processing all elements and collecting them into a temporary List before comparing the size. Still, this hardly ever matters for a Stream consisting of two elements.
Using .map(String::toLowerCase).anyMatch(path::contains) can save CPU cycles and memory if you have a substantially larger number of elements. Still, this converts each String to its lowercase representation until a match is found. Obviously, there is a point in using
private static final List<String> EXCLUDE_PATHS =
        Stream.of("my/path/one", "my/path/two").map(String::toLowerCase)
              .collect(Collectors.toList());

private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream().anyMatch(path::contains);
}
instead. So you don’t have to repeat the conversion to lowercase in every invocation of isExcluded. If the number of elements in EXCLUDE_PATHS or the lengths of the strings become really large, you may consider using
private static final List<Predicate<String>> EXCLUDE_PATHS =
        Stream.of("my/path/one", "my/path/two").map(String::toLowerCase)
              .map(s -> Pattern.compile(s, Pattern.LITERAL).asPredicate())
              .collect(Collectors.toList());

private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream().anyMatch(p -> p.test(path));
}
Compiling a string as a regex pattern with the LITERAL flag makes it behave just like ordinary string operations, but allows the engine to spend some time in preparation, e.g. using the Boyer-Moore algorithm, to be more efficient when it comes to the actual comparison.
Of course, this only pays off if there are enough subsequent tests to compensate for the time spent in preparation. Determining whether this will be the case is one of the actual performance considerations, besides the first question of whether this operation will ever be performance critical at all. Not the question of whether to use Streams or for loops.
By the way, the code examples above keep the logic of your original code, which looks questionable to me. Your isExcluded method returns true if the specified path contains any of the elements in the list, so it returns true for /some/prefix/to/my/path/one, as well as my/path/one/and/some/suffix or even /some/prefix/to/my/path/one/and/some/suffix.
Even dummy/path/onerous is considered fulfilling the criteria as it contains the string my/path/one…
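If matching whole path segments is what was actually intended, a minimal sketch could look like the following; the helper containsAsSegments is hypothetical, and it assumes EXCLUDE_PATHS already holds lowercase entries as in the precomputed variant above:
private static boolean containsAsSegments(String path, String excluded) {
    // Wrap both strings in separators so "dummy/path/onerous" no longer
    // matches "my/path/one"; only whole segments can match.
    return ("/" + path + "/").contains("/" + excluded + "/");
}

private boolean isExcluded(String path) {
    // path is already lowercase per the question; EXCLUDE_PATHS is assumed lowercase too
    return EXCLUDE_PATHS.stream().anyMatch(ex -> containsAsSegments(path, ex));
}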

Yeah. You are right. Your stream approach will have some overhead. But you may use such a construction:
private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream().map(String::toLowerCase).anyMatch(path::contains);
}
The main reason to use streams is that they make your code simpler and easy to read.

The goal of streams in Java is to simplify the complexity of writing parallel code. It's inspired by functional programming. The sequential stream is just there to make the code cleaner.
If we want performance we should use parallelStream, which was designed for that. The sequential one is, in general, slower than a plain loop.
There is a good article to read about ForLoop, Stream and ParallelStream Performance.
In your code we can use a short-circuiting terminal operation (anyMatch...) to stop the search at the first match.
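For illustration, a sketch of the parallel variant; note that for a two-element list like EXCLUDE_PATHS the overhead of splitting the work across threads will outweigh any gain, so this only makes sense for substantially larger collections:
private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.parallelStream()
            .map(String::toLowerCase)
            .anyMatch(path::contains);
}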

Radical answer:
Never. Ever. Ever.
I almost never iterated a list for anything, especially to find something, yet stream users and systems seem filled with that way of coding.
I find it difficult to refactor and organize such code and I see redundancy and over iteration everywhere in stream heavy systems. In the same method you might see it 5 times. Same list, finding different things.
It is also not really shorter either. Rarely is. Definitely not more readable but that is a subjective opinion. Some people will say it is. I don't. People might like it due to autocompletion but in my editor Intellij, I can just iter or itar and have the for loop auto created for me with types and everything.
Often misused and overused, and I think it is better to avoid it completely. Java is not a true functional language and Java generics suck and are not expressive enough, and certainly more difficult to read, parse and refactor. Just try to visit any of the native Java stream libraries. Do you find that easy to parse?
Also, stream code is not easily extractable or refactorable unless you want to start adding weird methods that return Optionals, Predicates, Consumers and what not and you end up having methods returning and taking all kinds of weird generic constraints with orders and meanings only God knows what.
Too much is inferred where you need to visit methods to figure out the types of various things.
Trying to make Java behave like a functional language like Haskell or Lisp is a fool's errand. A heavy streams-based Java system is always going to be more complex than a non-stream one, way less performant, and more complex to refactor and maintain.
Thus also more buggy and filled with patch work. Glue work everywhere due to the redundancy often filled in such systems. Some people just don't have an issue with redundancy. I am not one of them. Nor should you be either.
When OpenJDK got involved they started adding things to the language without really thinking them through thoroughly enough. It is now not just Java Streams which is an issue. Now systems are inherently more complex because they require more base knowledge of these APIs. You might have it, but your colleagues don't. They sure as hell know what a for loop is and what an if block is.
Furthermore, since you also cannot assign to a non-final local variable from inside a lambda, you can rarely do two things in the same pass, so you end up iterating twice, or thrice.
Most that like and prefer the stream approach over a for loop are most likely people that started learning Java post Java 8. Those before hate it. The thing is that it is far more complex to use, refactor and more difficult to use the right way. It requires skills to not fuck up, and then even more skills and energy to repair fuck ups.
And when I say it performs worse, it is not in comparison to a for loop, which is also a very real thing, but more due to the tendency such code has to over-iterate a wide range of things. It is deemed so easy to iterate a list to find an item that it tends to be done over and over again.
I've not seen a single system that has benefitted from it. All of the systems I have seen are horribly implemented, mostly because of it, and I've worked in some of the biggest companies in the world.
Code is definitely not more readable than a for loop, and a for loop is definitely more flexible and refactorable. The reason we see so many complex, shitty systems and bugs everywhere today is, I promise you, due to the heavy reliance on streams to filter, not to mention the accompanying overuse of Lombok and Jackson. Those three are the hallmark of a badly implemented system. Keyword: overuse. A patch-work approach.
Again, I consider it really bad to iterate a list to find anything. Yet with Stream based systems, this is what people do all the time. It is also not rare that an iteration is O(N²), and with streams that is difficult to parse and detect, while with a for loop you would see it immediately.
Where it is customary to ask the database to filter things for you, it is now not rare that a base query instead returns a big list of things, with all kinds of iterative logic and methods to filter out the undesirables, and of course streams are used to do this. All kinds of methods arise around that big list with various things to filter things out.
Often redundant filtering and thus logic too. Over and over again.
Of course, I do not mean you. But your colleagues. Right?
Personally, I rarely ever iterate anything. I use the right datasets and rely on the database to filter it for me. Once. However in a streams heavy system you will see iteration everywhere.
In the deepest method, in the caller, caller of caller, caller of the caller of the caller. Streams everywhere. It is ugly. And good luck refactoring that code that lives in tiny lambdas. And good luck reusing them. Nobody will look to reuse your nice Predicates.
And if they want to use them, guess what? They need to use more Streams. You just got yourself addicted and cornered yourself further. Now, are you proposing I start splitting all of my code into tiny Predicates, Consumers, Functions and BiFunctions? Just so I can reuse that logic for Streams?
Of course I hate it just as much in Javascript as well where over iteration is everywhere by noob frontend developers.
You might say the cost is nothing to iterate a list but the system complexity grows, redundancy increases and therefore maintenance costs and number of bugs increases. It becomes a patch and glue based approach to various things. Just add another filter and remove this, rather than code things the right way.
Furthermore, where you need three servers to host all of your users, I can manage with just one. So scaling of such a system is going to be required way earlier than for a non-streams-heavy system. For small projects that is a very important metric. Where you can have, say, 5000 concurrent users, my system can handle twice or thrice that.
I have no need for it in my code, and when I am in charge of new projects, the first rule is that streams are totally forbidden to use.
That is not to say there are not use cases for it or that it might be useful at times but the risks associated with allowing it far outweighs the benefits.
When you start using Streams you are essentially adopting a whole new programming paradigm. The entire programming style of the system will change and that is what I am concerned about.
You do not want that style. It is not superior to the old style. Especially on Java.
Take the Futures API as an example.
Sure, you could start coding everything to return a Promise or a Future, but do you really want to? Is that going to resolve anything? Can your entire system really follow up on being that, everywhere?
Will it be better for you, or are you just experimenting and hoping you will benefit at some point?
There are people that overdo JavaRx and overdo promises in JavaScript as well. There are really few cases where you truly want things to be futures-based, and very many corner cases where you will find that those APIs have certain limitations and you are stuck.
You can build really really complex and far far more maintainable systems without all that crap.
This is what it is about. It is not about your hobby project expanding and becoming a horrible code base.
It is about what is the best approach to build large and complex enterprise systems and ensure they remain coherent, consistent, refactorable, and easily maintainable.
Furthermore, rarely are you ever working on such systems on your own.
You are very likely working with a minimum of > 10 people all experimenting and overdoing Streams.
So while you might know how to use them properly, you can rest assured the other 9 really don't. They just love experimenting and learning by doing.
I will leave you with the wonderful examples of real code I have come across, with thousands more similar to them; the screenshots are not reproduced here. Try refactoring code like that. I challenge you. Give it a try. Everything is a Stream, everywhere. This is what Stream developers do, they overdo it, and there is no easy way to grasp what the code is actually doing. What is this method returning, what is this transformation doing, what do I end up with? Everything is inferred. Much more difficult to read for sure.
If you understand this, then you must be an Einstein, but you should know not everyone is like you, and this could be your system in the very near future.
Do note, this is not isolated to this one project but I've seen many of them very similar to these structures.
One thing is for sure, horrible coders love streams.

As others have mentioned many good points, I just want to mention lazy evaluation. When we call map() to create a stream of lower-case paths, we are not creating the whole stream immediately; instead the stream is lazily constructed, which is why the performance should be equivalent to that of the traditional for loop. It is not doing a full scan: map() and anyMatch() are executed on one element at a time, and once anyMatch() returns true, the pipeline short-circuits.
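A small sketch of that behaviour (the peek logging is only there to make the evaluation order visible; the output comment is illustrative):
boolean excluded = Stream.of("my/path/one", "my/path/two")
        .peek(p -> System.out.println("processing " + p))
        .map(String::toLowerCase)
        .anyMatch("my/path/one/sub"::contains);
// prints "processing my/path/one" only; the second element is never pulled,
// because anyMatch() short-circuits as soon as it finds a match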

Related

Are Java streams meant only to be used for arrays? How about single elements?

I have been looking at Java streams and functional programming.
Figured a way to rewrite a small "user login" code.
Here are my login methods.
If the user from the query is null, the null pointer exception is handled in a filter.
public ResponseEntity login(User request) {
    User dbUser = userRepo.findByEmail(request.getEmail());
    if (!aes.matches(request.getPassword(), dbUser.getPassword()))
        return ResponseEntity.status(403).build();
    return logUserIn(dbUser);
}

private ResponseEntity logUserIn(User dbUser) {
    dbUser.setPassword(null);
    jwtHandler.setJwtCookie(dbUser);
    return ResponseEntity.ok(dbUser);
}
And here it is using streams:
public ResponseEntity login(User request) {
    return Stream.of(userRepo.findByEmail(request.getEmail()))
            .filter(dbUser -> aes.matches(request.getPassword(), dbUser.getPassword()))
            .map(this::logUserIn)
            .findFirst()
            .orElse(ResponseEntity.status(403).build());
}

private ResponseEntity logUserIn(User dbUser) {
    dbUser.setPassword(null);
    jwtHandler.setJwtCookie(dbUser);
    return ResponseEntity.ok(dbUser);
}
I don't know if streams are meant to be used this way. Are they?
If I use similar logic on more important parts of the project, will I get in trouble later?
You might feel better about the if-else if you use it in a more functional style rather than short-circuiting:
if (!aes.matches(request.getPassword(), dbUser.getPassword())) {
    return ResponseEntity.status(403).build();
}
else {
    return logUserIn(dbUser);
}
Doing equivalent in one statement with Stream/Optional is harder to read and less performant.
You might consider the possibility of making findByEmail return Optional<User>, which is more idiomatic for any "find" method. Then you could combine the two approaches like
return userRepo.findByEmail(request.getEmail()).map(dbUser -> {
    if (!aes.matches(request.getPassword(), dbUser.getPassword())) {
        return ResponseEntity.status(403).build();
    }
    else {
        return logUserIn(dbUser);
    }
})... // .orElse(null) / .orElseThrow(...)
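For completeness, one possible way to finish that off (a sketch only; it assumes findByEmail returns Optional<User> and that an unknown email should also result in a 403 rather than a null body):
public ResponseEntity login(User request) {
    return userRepo.findByEmail(request.getEmail())
            // wrong password and unknown email both fall through to the 403 below
            .filter(dbUser -> aes.matches(request.getPassword(), dbUser.getPassword()))
            .map(this::logUserIn)
            .orElseGet(() -> ResponseEntity.status(403).build());
}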
You'll get into trouble, mostly. The 'root' problem is that both ways of writing it are defensible as the 'best choice', and the java community, by and large, strongly prefers the second form. For the same reason it is a bad idea to name_variables_like_this (the community decided that the convention is to nameThemLikeThis). Breaking the mold will mean your code is harder to read by others and code written by others is harder to read for you. Also, you'll probably get friction when you try to interact with other code.
For example, right now (and for the foreseeable future), 'lambdas' (those things with the :: and the ->) are NOT exception transparent, NOT control flow transparent, and NOT mutable local variable transparent.
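As a concrete illustration of the exception-transparency point, a minimal sketch (using java.nio.file.Files purely as an example of a method that throws a checked exception):
void copyAll(List<Path> files, Path targetDir) throws IOException {
    // The checked IOException simply propagates out of the loop:
    for (Path p : files) {
        Files.copy(p, targetDir.resolve(p.getFileName()));
    }

    // The lambda version does not compile, because Consumer.accept()
    // is not declared to throw IOException:
    // files.forEach(p -> Files.copy(p, targetDir.resolve(p.getFileName())));

    // Likewise, you cannot 'break' out of forEach, and you cannot assign
    // to a non-final local variable from inside the lambda.
}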
There are only 3 feasible options here:
Somehow write all code such that these 3 transparencies are never relevant, regardless of what you're writing. That sounds impossible to me. Even if you somehow manage, there are other libraries out there. Starting with java.*, which isn't designed for that kind of code style.
Mix code styles, going with lambda style when you don't immediately foresee the transparencies being relevant, otherwise going with the more imperative style if it is or you think it might be. This sounds silly to me; why mix 2 styles when a single style would have covered all the use cases?
Stick with lambda style, bending over backwards to account for the lack of these 3 transparencies where it bothers you: 'downgrading' variables to AtomicX variants, using such constructs to transmit exceptions, and using boolean flags to carry break and continue control flow outside, et cetera. This is just writing ugly code because you are particularly enamoured of your fancy new shiny hammer and are insisting on treating all problems as a nail, no?
That's... trying to guess at what's going to happen when you interact with other code and other programmers. This snippet, in a vacuum, with just you? Eh, both work fine. Whatever you prefer, if the community, friction with other code, and having a consistent style don't matter.
I have used Java 8 streams in live code and the biggest drawback for me is the stacktrace you get when an exception goes unhandled in the pipeline.
Sure they are nice to write and give you a sense of writing code in a functional style, but the truth is that streams are just a facade because underneath the fancy API, you are dealing with a monstrous abstraction layer over plain, ugly Java iterators, and this becomes painfully obvious when something goes awry such as an exception not being handled.
So the answer to your question is yes you might get in trouble, but it depends on how good you are at reading stacktraces, where 70% of the trace has nothing to do with code you've written but rather with the magic stuff used to turn iterators into streams.
As much as possible, prefer using if-else, for-loops, etc, unless you are confident that streams will be more efficient or easier to read. On that note, readability is quite important and part of the reason the Stream api exists is to improve readability, but moderation and good judgement are virtues worth exercising when making use of the full potential of the Streams API.

Scala Enumeration ValueSet.isEmpty slow

I am using Scala Enumeration ValueSets in a fairly high-throughput setting - creating, testing, union'ing and intersecting about 10M sets/second/core. I didn't expect this to be a big deal, because I had read somewhere that they were backed by BitSets, but surprisingly ValueSet.isEmpty showed up as a hot spot in a profiling session with YourKit.
To verify, I decided to try and reimplement what I needed using the Java BitSet, while trying to retain some of the type-safety of using Scala Enumerations. (code review moved to https://codereview.stackexchange.com/questions/74795/scala-bitset-implemented-with-java-bitset-for-use-in-scala-enumerations-to-repl ) The good news is, changing just my ValueSets to these BitSets did indeed lop off 25% of my run-time, so I don't know what ValueSet is really doing under the hood but it could be improved...
EDIT: Reviewing the ValueSet source seems to indicate that isEmpty is definitely O(N), implemented using the general SetLike.isEmpty. Considering ValueSet is implemented with a BitSet, is this a bug?
EDIT: The backtrace from the profiler (not reproduced here) shows what seems like a crazy way to implement isEmpty on a bitset.
For the record, I'm all for looking under the hood, but this design asks too much of any mortal coder.
The immortals, of course, have infinite time at their disposal.
Enumeration.ValueSet is backed by a BitSet but is not one itself. Something about favoring composition.
[Did you hear about the heir to a fortune who gave it all up to pursue his love of music? He favored composition over inheritance. Did I just make that up or did I hear it at Java One?]
No doubt, ValueSet should delegate more methods to the BitSet, including isEmpty.
I was going to suggest trying values.iterator.isEmpty, but that just tests hasNext which loops through all possible values checking for contains.
https://github.com/scala/scala/blob/v2.11.4/src/library/scala/collection/BitSetLike.scala#L109
If I'm reading that correctly.
The best option is e.values.toBitMask forall (_ == 0), which is the moral equivalent of BitSet.isEmpty.

Problems using interactive debuggers with Java 8 streams

I love Java 8 streams. They are intuitive, powerful and elegant. But they do have one major drawback IMO: they make debugging much harder (unless you can solve your problem by just debugging lambda expressions, which is answered here).
Consider the following two equivalent fragments:
int smallElementBitCount = intList.stream()
        .filter(n -> n < 50)
        .mapToInt(Integer::bitCount)
        .sum();
and
int smallElementBitCount = 0;
for (int n : intList) {
    if (n < 50) {
        smallElementBitCount += Integer.bitCount(n);
    }
}
I find the first one much clearer and more succinct. However consider the situation in which the result is not what you were expecting. What do you do?
In the traditional iterative style, you put a breakpoint on the smallElementBitCount += Integer.bitCount(n); line and step through each value in the list. You can see what the current list element is (watch n), the current total (watch smallElementBitCount) and, depending on the debugger, what the return value of Integer.bitCount is.
In the new stream style all of this is impossible. You can put a breakpoint on the entire statement and step through to the sum method. But in general this is close to useless. In this situation, in my test, the call stack was 11 deep, of which 10 frames were java.util methods that I had no interest in. It is impossible to step through the code testing predicates or performing the mapping.
It is noted in the answers to the Debugging streams question that interactive debuggers work fairly well for breaking inside lambda expressions (such as the n < 50 predicate). But in many situations the most appropriate breakpoint is not within a lambda.
Clearly this is a simple piece of code to debug. But once custom reductions and collections are added, or more complex chains of filters and maps, it can become a nightmare to debug.
I have tried this on NetBeans and Eclipse and both seem to have the same issues.
Over the last few months I've got used to debugging using .peek calls to log interim values or moving interim steps into their own named methods or, in extreme cases, refactoring as iteration until any bugs are sorted out. This works but it reminds me a lot of the bad old days before modern IDEs with integrated interactive debuggers when you had to scatter printf statements through code.
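For reference, a sketch of that .peek-based logging applied to the earlier bit-count pipeline (the log messages are illustrative only):
int smallElementBitCount = intList.stream()
        .peek(n -> System.out.println("input: " + n))
        .filter(n -> n < 50)
        .peek(n -> System.out.println("passed filter: " + n))
        .mapToInt(Integer::bitCount)
        .peek(bits -> System.out.println("bit count: " + bits))
        .sum();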
Surely there's a better way.
Specifically I would like to know:
have others experienced this same issue?
are there any 'stream aware' interactive debuggers available?
are there better techniques for debugging this style of code?
is this a reason to restrict the use of streams to simple cases?
Any techniques that you have found successful would be much appreciated.
I'm not entirely certain there is a viable work around for this problem. By using streams you are effectively delegating iteration (and the associated code) to the VM as far as I understand it, thus shoving the process into a black box that is the stream itself.
At least from what I've read about them. This is sort of what's happened around lambda code for me (if they're complex enough, it's very difficult to track what's happening around them). I'd be very interested in any debugging options out there, but I haven't personally found any.
have others experienced this same issue?
Yes.
is this a reason to restrict the use of streams to simple cases?
Yes. I'm basically not using streams for this reason. Even simple cases sometimes need debugging. We first need a good way to debug this before we can use it in real code.

What conditions can be used to derive as which for loop in java is most efficient?

I have been trying to evaluate the performance of for loop.
I had a look at this and this.
But I have not yet understood what is the correct way to measure the performance of for loops.
Should we also consider inserting some elements in the data structure like ArrayList?
There is another link which also says something about it.
Use the syntax that most clearly expresses what you're trying to do. The actual for loop conditions probably won't be a significant performance factor (you can test if you think they really are), and it's more important that the code be readable.
One guideline is to avoid known expensive methods inside the condition; collection.size() is a notable one here. When iterating over a collection, using an Iterator (either explicitly or via the enhanced for loop) usually makes for clearer code anyway.
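A small sketch of that guideline; fetchItems() and process() are hypothetical placeholders:
List<String> items = fetchItems();   // hypothetical source of data
int size = items.size();             // hoisted: evaluated once, not on every iteration
for (int i = 0; i < size; i++) {
    process(items.get(i));           // hypothetical consumer
}

// Usually clearer:
for (String item : items) {
    process(item);
}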

Java performance vs. code-style: Making multiple method calls from the same line of code

I am curious whether packing multiple and/or nested method calls within the same line of code is better for performance and that is why some developers do it, at the cost of making their code less readable.
E.g.
//like
Set<String> jobParamKeySet = jobParams.keySet();
Iterator<String> jobParamItrtr = jobParamKeySet.iterator();
Could be also written as
//dislike
Iterator<String> jobParamItrtr = jobParams.keySet().iterator();
Personally, I hate the latter because it does multiple evaluations in the same line and is hard for me to read the code. That is why I try to avoid by all means to have more than one evaluation per line of code. I also don't know that jobParams.keySet() returns a Set and that bugs me.
Another example would be:
//dislike
Bar.processParameter(Foo.getParameter());
vs
//like
Parameter param = Foo.getParameter();
Bar.processParameter(param);
The former makes me nauseous and dizzy, as I like to consume simple and clean evaluations in every line of code, and I just hate it when I see other people's code written like that.
But are there any (performance) benefits to packing multiple method calls in the same line?
EDIT: Single liners are also more difficult to debug, thanks to #stemm for reminding
Micro optimization is a killer. If the references in the code you are showing are either instance scope or method scope, I would go with the second approach.
Method scope variables will be eligible for GC as soon as method execution is done, so even if you declare another variable, it's ok because the scope is limited, and the advantage you get will be readable and maintainable code.
I tend to disagree with most others on this list. I actually find the first way cleaner and easier to read.
In your example:
//like
Set<String> jobParamKeySet = jobParams.keySet();
Iterator<String> jobParamItrtr = jobParamKeySet.iterator();
Could be also written as
//dislike
Iterator<String> jobParamItrtr = jobParams.keySet().iterator();
the first method (the one you like) has a lot of irrelevant information. The whole point of the iterator interface, for example, is to give you a standard interface that you can use to loop over whatever backing implementation there is. So the fact that it is a keyset has no bearing on the code itself. All you are looking for is the iterator to loop over the implemented object.
Secondly, the second implementation actually gives you more information. It tells you that the code will be ignoring the implementation of jobParams and that it will only be looping through the keys. In the first code, you must first trace back what jobParamKeySet is (as a variable) to figure out what you are iterating over. Additionally, you do not know if/where jobParamKeySet is used elsewhere in the scope.
Finally, as a last comment, the second way makes it easier to switch implementations if necessary; in the first case, you might need to recode two lines (the first variable assignment if it changes from a set to something else), whereas in the second case you only need to change one line.
That being said, there are limits to everything. Chaining 10 calls within a single line can be complicated to read and debug. However 3 or 4 levels is usually clear. Sometimes, especially if an intermediary variable is required several times, it makes more sense to declare it explicitly.
In your second example:
//dislike
Bar.processParameter(Foo.getParameter());
vs
//like
Parameter param = Foo.getParameter();
Bar.processParameter(param);
I find it actually more difficult to understand exactly which parameters are being processed by Bar.processParameter(param). It will take me longer to match param to the variable instantiation to see that it is Foo.getParameter(). Whereas in the first case, the information is very clear and presented very well - you are processing Foo.getParameter() params. Personally, I find the first method is less prone to error as well - it is unlikely that you accidentally use Foo2.getParameter() when it is within the same call as opposed to a separate line.
There is one less variable assignment, but even the compiler can optimize it in some cases.
I would not do it for performance, it is kind of an early optimization. Write the code that is easier to maintain.
In my case, I find:
Iterator<String> jobParamItrtr = jobParams.keySet().iterator();
easier to be read than:
Set<String> jobParamKeySet = jobParams.keySet();
Iterator<String> jobParamItrtr = jobParamKeySet.iterator();
But I guess it is a matter of personal taste.
Code is never developed by the same person alone. I would choose the second way. It is also easier to understand and maintain.
This is also beneficial when two different teams are working on the code at different locations.
Many times we take an hour or more to understand what another developer has done if he used the first option. Personally, I have been in this situation many times.
But are there any (performance) benefits to packing multiple method calls in the same line?
I seriously doubt the difference is measurable, but even if it were, I would consider
is hard for me to read the code.
to be so much more important that it cannot be overstated.
Even if it were half the speed, I would still write the simplest, cleanest and easiest-to-understand code, and only when you have profiled the application and identified that you have an issue would I consider optimising it.
BTW: I prefer the more dense, chained code, but I would suggest you use what you prefer.
The omission of an extra local variable probably has a negligible performance advantage (although the JIT may be able to optimize this).
Personally I don't mind call chaining when it's pretty clear what's done and the intermediate object is very unlikely to be null (like your first 'dislike' example). When it gets complex (multiple .'s in the expression), I prefer explicit local variables, because it's so much simpler to debug.
So I decide case by case what I prefer :)
I don't see where a().b().c().d is that much harder to read than a.b.c.d which people don't seem to mind too much. (Though I would break it up.)
If you don't like that it's all on one line, you could say
a()
.b()
.c()
.d
(I don't like that either.)
I prefer to break it up, using a couple extra variables.
It makes it easier to debug.
If performance is your concern (as it should be), the first thing to understand is not to sweat the small stuff.
If adding extra local variables costs anything at all, the rest of the code has to be rippin' fat-free before it even begins to matter.
