Response execute(String para1, String para2) {
    String xmlRequest = buildRequest(para1, para2);
    String xmlResponse = getXmlResponse(xmlRequest);
    Response response = parseResponse(xmlResponse);
    return response;
}
Or the concatenated version? Why is that?
Response execute(String para1, String para2) {
    return parseResponse(getXmlResponse(buildRequest(para1, para2)));
}
Thanks,
Sarah
In contrast to what others have written, I find the second version much easier to read.
First, there are about half as many symbols for me to parse (12 vs. 21). There's no way for me to glance at the first one and understand what it's doing, even though what it's doing is just as simple as in the second method.
Second, in the second example, the data flow is obvious: you can just read down the line. In the first one, I had to look carefully to make sure the variable assigned on one line is the one used on the next, for each pair of lines (especially since two of the temp variables start with the exact same five characters). This is exactly the sort of code that somebody is going to update later and leave an extra variable assignment around by mistake -- I've seen it a million times.
Third, the second one is written in a more functional style, and reasoning about functional code tends to be easier. In this case the benefit is minimal, since the intermediate values are immutable Strings, but declaring three variables shifts me into "something complex is happening here" mode, which really isn't the case here.
Of course, this method is so small you can't go far wrong either way, but I find it useful to apply good habits at any scale.
I like the first version better because it's easier to read: there aren't so many nested calls. Other than readability, there shouldn't be a difference in performance.
The first one is better, because it is more readable.
Any programmer can write code that a computer can understand.
Good programmers write code that humans can understand.
They are the same, but as mentioned, the first one is more readable, so that would tip the scales in favor of the first one.
When you nest methods like in the second one, the compiler simply does for you what you do in the first one. So there is no win in terms of speed or memory use.
I would say it depends. If it's a static utility method that is extremely unlikely to change, I might go with the second one just to keep the number of lines smaller. But it totally depends.
The first one has an advantage while debugging: when you step through this code in a debugger, you have a chance to view the return values of the different methods easily (i.e., with the debugger you can easily see what is in xmlRequest, xmlResponse and response).
However, I'd prefer the second notation, because the first version is overly verbose and I don't find it more readable than the second. But of course it's a matter of opinion.
I just came across a question while using a List and its stream() method. While I know how to use them, I'm not quite sure about when to use them.
For example, I have a list, containing various paths to different locations. Now, I'd like to check whether a single, given path contains any of the paths specified in the list. I'd like to return a boolean based on whether or not the condition was met.
This, of course, is not a hard task per se. But I wonder whether I should use streams or a for(-each) loop.
The List
private static final List<String> EXCLUDE_PATHS = Arrays.asList(
    "my/path/one",
    "my/path/two"
);
Example using Stream:
private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream()
        .map(String::toLowerCase)
        .filter(path::contains)
        .collect(Collectors.toList())
        .size() > 0;
}
Example using for-each loop:
private boolean isExcluded(String path) {
    for (String excludePath : EXCLUDE_PATHS) {
        if (path.contains(excludePath.toLowerCase())) {
            return true;
        }
    }
    return false;
}
Note that the path parameter is always lowercase.
My first guess is that the for-each approach is faster, because the loop would return immediately, if the condition is met. Whereas the stream would still loop over all list entries in order to complete filtering.
Is my assumption correct? If so, why (or rather when) would I use stream() then?
Your assumption is correct. Your stream implementation is slower than the for-loop.
This stream usage should be as fast as the for-loop though:
EXCLUDE_PATHS.stream()
    .map(String::toLowerCase)
    .anyMatch(path::contains);
This iterates through the items, applying String::toLowerCase and the path::contains check to them one by one, and terminates at the first item that matches.
Both collect() and anyMatch() are terminal operations, but anyMatch() exits at the first matching item, while collect() requires all items to be processed.
The decision whether to use Streams should not be driven by performance considerations, but rather by readability. When it really comes to performance, there are other considerations.
With your .filter(path::contains).collect(Collectors.toList()).size() > 0 approach, you are processing all elements and collecting them into a temporary List before comparing the size. Still, this hardly ever matters for a Stream consisting of two elements.
Using .map(String::toLowerCase).anyMatch(path::contains) can save CPU cycles and memory, if you have a substantially larger number of elements. Still, this converts each String to its lowercase representation, until a match is found. Obviously, there is a point in using
private static final List<String> EXCLUDE_PATHS =
    Stream.of("my/path/one", "my/path/two")
        .map(String::toLowerCase)
        .collect(Collectors.toList());

private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream().anyMatch(path::contains);
}
instead. So you don't have to repeat the conversion to lowercase in every invocation of isExcluded. If the number of elements in EXCLUDE_PATHS or the lengths of the strings becomes really large, you may consider using
private static final List<Predicate<String>> EXCLUDE_PATHS =
    Stream.of("my/path/one", "my/path/two")
        .map(String::toLowerCase)
        .map(s -> Pattern.compile(s, Pattern.LITERAL).asPredicate())
        .collect(Collectors.toList());

private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream().anyMatch(p -> p.test(path));
}
Compiling a string as a regex pattern with the LITERAL flag makes it behave just like ordinary string operations, but allows the engine to spend some time in preparation, e.g. using the Boyer-Moore algorithm, to be more efficient when it comes to the actual comparison.
Of course, this only pays off if there are enough subsequent tests to compensate for the time spent in preparation. Determining whether this will be the case is one of the actual performance considerations, besides the first question of whether this operation will ever be performance-critical at all; it is not the question of whether to use Streams or for loops.
By the way, the code examples above keep the logic of your original code, which looks questionable to me. Your isExcluded method returns true if the specified path contains any of the elements in the list, so it returns true for /some/prefix/to/my/path/one, as well as my/path/one/and/some/suffix or even /some/prefix/to/my/path/one/and/some/suffix.
Even dummy/path/onerous is considered to fulfill the criteria, as it contains the string my/path/one…
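If matching on whole path segments is what you actually intended, one possible variant (a sketch of my own, not part of the original code, with a made-up PathExclusion class) is to compare java.nio.file.Path objects, whose startsWith compares complete name elements rather than raw characters. Note that this deliberately narrows the semantics to segment-aligned prefix matching:

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

class PathExclusion {
    private static final List<Path> EXCLUDE_PATHS = Arrays.asList(
        Paths.get("my/path/one"),
        Paths.get("my/path/two"));

    static boolean isExcluded(String path) {
        Path candidate = Paths.get(path.toLowerCase());
        // Path.startsWith compares whole name elements, not substrings.
        return EXCLUDE_PATHS.stream().anyMatch(candidate::startsWith);
    }

    public static void main(String[] args) {
        System.out.println(isExcluded("my/path/one/and/some/suffix")); // true
        System.out.println(isExcluded("dummy/path/onerous"));          // false
    }
}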
Yeah, you are right: your stream approach will have some overhead. But you may use a construction like this:
private boolean isExcluded(String path) {
    return EXCLUDE_PATHS.stream()
        .map(String::toLowerCase)
        .anyMatch(path::contains);
}
The main reason to use streams is that they make your code simpler and easier to read.
The goal of streams in Java is to simplify the complexity of writing parallel code; they are inspired by functional programming. The serial stream is just there to make the code cleaner.
If we want performance we should use parallelStream, which was designed for that. The serial one, in general, is slower.
There is a good article to read about ForLoop, Stream and ParallelStream Performance.
In your code we can use a short-circuiting terminal operation to stop the search at the first match (anyMatch…).
Radical answer:
Never. Ever. Ever.
I almost never iterate a list for anything, especially to find something, yet stream users and systems seem filled with that way of coding.
I find it difficult to refactor and organize such code and I see redundancy and over iteration everywhere in stream heavy systems. In the same method you might see it 5 times. Same list, finding different things.
It is also not really shorter. It rarely is. It is definitely not more readable, but that is a subjective opinion; some people will say it is, I don't. People might like it due to autocompletion, but in my editor, IntelliJ, I can just type iter or itar and have the for loop auto-created for me, with types and everything.
Often misused and overused, and I think it is better to avoid it completely. Java is not a true functional language, and Java generics suck and are not expressive enough, which makes such code certainly more difficult to read, parse and refactor. Just try to visit any of the native Java stream libraries. Do you find that easy to parse?
Also, stream code is not easily extractable or refactorable unless you want to start adding weird methods that return Optionals, Predicates, Consumers and what not, and you end up with methods returning and taking all kinds of weird generic constraints whose order and meaning only God knows.
Too much is inferred where you need to visit methods to figure out the types of various things.
Trying to make Java behave like a functional language like Haskell or Lisp is a fool's errand. A heavily stream-based Java system is always going to be more complex than a non-stream one, way less performant, and more complex to refactor and maintain.
Thus it is also more buggy and filled with patchwork: glue work everywhere, due to the redundancy such systems are often filled with. Some people just don't have an issue with redundancy. I am not one of them. Nor should you be.
When OpenJDK got involved they started adding things to the language without really thinking them through. It is now not just Java Streams that are an issue: systems are inherently more complex because they require more base knowledge of these APIs. You might have it, but your colleagues don't. They sure as hell know what a for loop is and what an if block is.
Furthermore, since you cannot assign to a captured local variable inside a lambda (it has to be effectively final), you can rarely do two things in the same pass while looping, so you end up iterating twice, or thrice.
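To make that concrete, here is a small contrived sketch (the class, names and numbers are made up): computing a total and a maximum over the same list takes one for loop, but the straightforward stream version traverses the list twice, unless you write a custom reduction or collector.

import java.util.List;

class TwoResultsInOnePass {
    public static void main(String[] args) {
        List<Integer> prices = List.of(12, 7, 31, 5);

        // One for loop can accumulate two results in a single pass,
        // because plain locals may be reassigned freely.
        int total = 0;
        int max = Integer.MIN_VALUE;
        for (int p : prices) {
            total += p;              // first result
            max = Math.max(max, p);  // second result, same pass
        }

        // Locals captured by a lambda must be effectively final, so the
        // straightforward stream version iterates the list twice.
        int streamTotal = prices.stream().mapToInt(Integer::intValue).sum();
        int streamMax = prices.stream().mapToInt(Integer::intValue).max().orElse(Integer.MIN_VALUE);

        System.out.println(total + " " + max + " " + streamTotal + " " + streamMax);
    }
}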
Most who like and prefer the stream approach over a for loop most likely started learning Java post-Java 8; those from before hate it. The thing is that it is far more complex to use and refactor, and more difficult to use the right way. It requires skills not to fuck up, and then even more skills and energy to repair fuck-ups.
And when I say it performs worse, it is not in comparison to a for loop, which is also a very real thing, but more due to the tendency such code has to over-iterate a wide range of things. It is deemed so easy to iterate a list to find an item that it tends to be done over and over again.
I've not seen a single system that has benefitted from it. All of the systems I have seen are horribly implemented, mostly because of it, and I've worked in some of the biggest companies in the world.
Code is definitely not more readable than a for loop, and a for loop is definitely more flexible and refactorable. The reason we see so many complex, shitty systems and bugs everywhere today is, I promise you, due to the heavy reliance on streams to filter, not to mention the accompanying overuse of Lombok and Jackson. Those three are the hallmark of a badly implemented system. Keyword: overuse. A patchwork approach.
Again, I consider it really bad to iterate a list to find anything. Yet with stream-based systems, this is what people do all the time. It is also not rare, and difficult to detect, that an iteration ends up being O(N²), while with a for loop you would see it immediately.
Where it is customary to ask the database to filter things for you, it is now not rare that a base query instead returns a big list of things, with all kinds of iterative logic and methods to filter out the undesirables, and of course streams are used to do this. All kinds of methods arise around that big list to filter various things out of it.
Often redundant filtering and thus logic too. Over and over again.
Of course, I do not mean you. But your colleagues. Right?
Personally, I rarely ever iterate anything. I use the right data sets and rely on the database to filter them for me. Once. However, in a streams-heavy system you will see iteration everywhere.
In the deepest method, in the caller, caller of caller, caller of the caller of the caller. Streams everywhere. It is ugly. And good luck refactoring that code that lives in tiny lambdas. And good luck reusing them. Nobody will look to reuse your nice Predicates.
And if they want to use them, guess what? They need to use more Streams. You just got yourself addicted and cornered yourself further. Now, are you proposing I start splitting all of my code into tiny Predicates, Consumers, Functions and BiFunctions? Just so I can reuse that logic for Streams?
Of course I hate it just as much in JavaScript, where over-iteration by noob frontend developers is everywhere.
You might say the cost of iterating a list is nothing, but system complexity grows, redundancy increases, and therefore maintenance costs and the number of bugs increase. It becomes a patch-and-glue approach to various things: just add another filter and remove this, rather than code things the right way.
Furthermore, where you need three servers to host all of your users, I can manage with just one. So scaling such a system will become necessary way earlier than for a non-streams-heavy system. For small projects that is a very important metric. Where you can have, say, 5000 concurrent users, my system can handle twice or thrice that.
I have no need for it in my code, and when I am in charge of new projects, the first rule is that streams are totally forbidden to use.
That is not to say there are no use cases for it, or that it might not be useful at times, but the risks associated with allowing it far outweigh the benefits.
When you start using Streams you are essentially adopting a whole new programming paradigm. The entire programming style of the system will change and that is what I am concerned about.
You do not want that style. It is not superior to the old style. Especially on Java.
Take the Futures API as an example.
Sure, you could start coding everything to return a Promise or a Future, but do you really want to? Is that going to resolve anything? Can your entire system really commit to being that, everywhere?
Will it be better for you, or are you just experimenting and hoping you will benefit at some point?
There are people that overdo RxJava and overdo promises in JavaScript as well. There are really very few cases where you truly want things to be futures-based, and you will hit very many corner cases where those APIs have certain limitations and you just got burned.
You can build really really complex and far far more maintainable systems without all that crap.
This is what it is about. It is not about your hobby project expanding and becoming a horrible code base.
It is about the best approach to building large and complex enterprise systems and ensuring they remain coherent, consistent, refactorable, and easily maintainable.
Furthermore, rarely are you ever working on such systems on your own.
You are very likely working with at least 10 people, all experimenting with and overdoing Streams.
So while you might know how to use them properly, you can rest assured the other nine really don't. They just love experimenting and learning by doing.
I will leave you with these wonderful examples of real code, with thousands more similar to them (the code screenshots are not reproduced here):
Try refactoring any of the above. I challenge you. Give it a try. Everything is a Stream, everywhere. This is what Stream developers do: they overdo it, and there is no easy way to grasp what the code is actually doing. What is this method returning, what is this transformation doing, what do I end up with? Everything is inferred. Much more difficult to read, for sure.
If you understand this, then you must be an Einstein, but you should know not everyone is like you, and this could be your system in the very near future.
Do note, this is not isolated to this one project; I've seen many projects with very similar structures.
One thing is for sure, horrible coders love streams.
Others have mentioned many good points, but I just want to mention lazy evaluation in stream processing. When we call map() to create a stream of lower-case paths, the whole stream is not created immediately; instead, it is lazily constructed, which is why the performance should be equivalent to the traditional for loop. There is no full scan: map() and anyMatch() are executed together, and once anyMatch() returns true, the pipeline short-circuits.
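A quick way to see this laziness for yourself (a small demo of my own, not part of the original question) is to drop a peek() into the pipeline and watch how many elements are actually touched before anyMatch() short-circuits:

import java.util.List;

class LazyStreamDemo {
    public static void main(String[] args) {
        List<String> excludePaths = List.of("my/path/one", "my/path/two", "my/path/three");
        String path = "prefix/my/path/one/suffix";

        boolean excluded = excludePaths.stream()
            .peek(p -> System.out.println("processing: " + p)) // shows which elements are visited
            .map(String::toLowerCase)
            .anyMatch(path::contains);

        // Prints only "processing: my/path/one" before the match is found;
        // the remaining elements are never mapped at all.
        System.out.println("excluded = " + excluded);
    }
}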
I am curious whether packing multiple and/or nested method calls within the same line of code is better for performance and that is why some developers do it, at the cost of making their code less readable.
E.g.
//like
Set<String> jobParamKeySet = jobParams.keySet();
Iterator<String> jobParamItrtr = jobParamKeySet.iterator();
Could also be written as
//dislike
Iterator<String> jobParamItrtr = jobParams.keySet().iterator();
Personally, I hate the latter because it does multiple evaluations in the same line and makes the code hard for me to read. That is why I try to avoid, by all means, having more than one evaluation per line of code. With the chained form I also don't see that jobParams.keySet() returns a Set, and that bugs me.
Another example would be:
//dislike
Bar.processParameter(Foo.getParameter());
vs
//like
Parameter param = Foo.getParameter();
Bar.processParameter(param);
The former makes me nauseous and dizzy, as I like to consume simple and clean evaluations in every line of code, and I just hate it when I see other people's code written like that.
But are there any (performance) benefits to packing multiple method calls in the same line?
EDIT: Single-liners are also more difficult to debug; thanks to #stemm for the reminder.
Micro-optimization is a killer. If the code references you are showing are either instance scope or method scope, I would go with the second approach.
Method-scope variables are eligible for GC as soon as method execution is done, so even if you declare another variable it's OK, because the scope is limited, and the advantage you get is readable and maintainable code.
I tend to disagree with most others on this list. I actually find the chained way cleaner and easier to read.
In your example:
//like
Set<String> jobParamKeySet = jobParams.keySet();
Iterator<String> jobParamItrtr = jobParamKeySet.iterator();
Could also be written as
//dislike
Iterator<String> jobParamItrtr = jobParams.keySet().iterator();
the first method (the one you like) has a lot of irrelevant information. The whole point of the Iterator interface, for example, is to give you a standard interface that you can use to loop over whatever backing implementation there is. So the fact that it is a key set has no bearing on the code itself. All you are looking for is the iterator to loop over the underlying object.
Secondly, the second implementation actually gives you more information. It tells you that the code will be ignoring the implementation of jobParams and that it will only be looping through the keys. In the first code, you must first trace back what jobParamKeySet is (as a variable) to figure out what you are iterating over. Additionally, you do not know if/where jobParamKeySet is used elsewhere in the scope.
Finally, the second way makes it easier to switch implementations if necessary; in the first case, you might need to recode two lines (the first variable assignment, if it changes from a set to something else), whereas in the second case you only need to change one line.
That being said, there are limits to everything. Chaining 10 calls within a single line can be complicated to read and debug. However, 3 or 4 levels are usually clear. Sometimes, especially if an intermediate variable is required several times, it makes more sense to declare it explicitly.
In your second example:
//dislike
Bar.processParameter(Foo.getParameter());
vs
//like
Parameter param = Foo.getParameter();
Bar.processParameter(param);
I actually find it more difficult to understand exactly which parameters are being processed by Bar.processParameter(param). It takes me longer to match param to the variable instantiation and see that it is Foo.getParameter(). Whereas in the first case, the information is very clear and presented very well: you are processing Foo.getParameter() params. Personally, I find the first method less prone to error as well; it is unlikely that you accidentally use Foo2.getParameter() when it is within the same call, as opposed to on a separate line.
There is one fewer variable assignment, but even that the compiler can optimize away in some cases.
I would not do it for performance; it is a kind of premature optimization. Write the code that is easier to maintain.
In my case, I find:
Iterator<String> jobParamItrtr = jobParams.keySet().iterator();
easier to be read than:
Set<String> jobParamKeySet = jobParams.keySet();
Iterator<String> jobParamItrtr = jobParamKeySet.iterator();
But I guess it is a matter of personal taste.
Code is never developed by only one person. I would choose the second way; it is also easier to understand and maintain.
This is also beneficial when two different teams are working on the code at different locations.
Many times we take an hour or more to understand what another developer has done if he uses the first option. Personally, I have been in this situation many times.
But are there any (performance) benefits to packing multiple method calls in the same line?
I seriously doubt the difference is measurable, but even if it were, I would consider
is hard for me to read the code.
to be so much more important that it cannot be overstated.
Even if it were half the speed, I would still write the simplest, cleanest and easiest-to-understand code, and only when you have profiled the application and identified that you have an issue would I consider optimising it.
BTW: I prefer the denser, chained code, but I would suggest you use whichever you prefer.
The omission of an extra local variable probably has a negligible performance advantage (although the JIT may be able to optimize it away).
Personally I don't mind call chaining when it's pretty clear what's done and the intermediate object is very unlikely to be null (like your first 'dislike' example). When it gets complex (multiple .'s in the expression), I prefer explicit local variables, because it's so much simpler to debug.
So I decide case by case what I prefer :)
I don't see where a().b().c().d is that much harder to read than a.b.c.d which people don't seem to mind too much. (Though I would break it up.)
If you don't like that it's all on one line, you could say
a()
.b()
.c()
.d
(I don't like that either.)
I prefer to break it up, using a couple extra variables.
It makes it easier to debug.
If performance is your concern (as it should be), the first thing to understand is not to sweat the small stuff.
If adding extra local variables costs anything at all, the rest of the code has to be rippin' fat-free before it even begins to matter.
Is there any difference between:
String x = getString();
doSomething(x);
vs.
doSomething(getString());
Resource- and performance-wise, especially if it's done within a loop tens, hundreds or thousands of times?
It has the same overhead. Local variables are just there to make your life easier; at the VM level they don't necessarily exist, and they certainly no longer do once machine code is run.
So what you need to worry about here is getString() and whether it is potentially expensive. x very likely has no effect at all.
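If getString() does turn out to be expensive and returns the same value every time, the usual move is simply to hoist it out of the loop. A small hypothetical sketch (the method bodies are stand-ins for the ones in the question):

class HoistExample {
    // Stand-ins for the methods from the question.
    static String getString() { return "value"; }
    static void doSomething(String s) { /* ... */ }

    public static void main(String[] args) {
        // The local variable itself costs nothing; what can matter is hoisting
        // an expensive call out of the loop when its result does not change.
        String x = getString();            // called once
        for (int i = 0; i < 10_000; i++) {
            doSomething(x);                // reuses the cached result
        }

        // as opposed to calling it on every iteration:
        for (int i = 0; i < 10_000; i++) {
            doSomething(getString());      // called 10,000 times
        }
    }
}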
Let me first begin by saying that your overriding goal should almost always be to maintain code readability. Your compiler is almost always better at trivial optimizations than you are. Trust it!
In response to your specific example: the bytecode generated for each example IS different. It didn't appear to make much difference though, because there wasn't a statistically significant or even consistent difference between the two approaches in a loop over Integer.MAX_VALUE iterations.
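If you want to see that bytecode difference for yourself, one way (my suggestion, not from the original answer) is to compile two tiny variants and compare the output of javap -c; the version with the local typically shows an extra astore/aload pair, which the JIT removes at runtime:

// Compile with `javac BytecodeDemo.java`, then inspect with `javap -c BytecodeDemo`.
class BytecodeDemo {
    static String getString() { return "hello"; }
    static void doSomething(String s) { System.out.println(s); }

    static void withLocal() {
        String x = getString();   // bytecode shows an extra astore/aload pair
        doSomething(x);
    }

    static void inlined() {
        doSomething(getString()); // the result goes straight onto the operand stack
    }

    public static void main(String[] args) {
        withLocal();
        inlined();
    }
}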
I believe both would be the same at compile time; the first may be more readable in some cases, though.
Both statements are the same; the only difference is that in the first approach you use a local variable x, which can be avoided with the second syntax.
That largely depends on the use-case. Are you going to make repeated calls to doSomething using that exact String? Then using the local variable is a bit more efficient. However, if it's a single call or multiple calls with different Strings, it makes no difference.
I have always written my boolean expressions like this:
if (!isValid) {
    // code
}
But my new employer insists on the following style:
if (false == isValid) {
    // code
}
Is one style preferred, or standard?
I prefer the first style because it is more natural for me to read. It's very unusual to see the second style.
One reason why some people might prefer the second over another alternative:
if (isValid == false) { ... }
is that if, with the latter, you accidentally write a single = instead of ==, you are assigning to isValid instead of testing it, whereas with the constant first you will get a compile error.
But with your first suggestion this issue isn't even a problem, so this is another reason to prefer the first.
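To make the pitfall concrete (an illustration of my own, not from the original answers): because a boolean assignment expression is itself a valid condition, the typo still compiles when the variable comes first, while the constant-first order turns it into a compile error:

static void example(boolean isValid) {
    // Typo: a single '=' instead of '=='. This still compiles, because the
    // assignment expression has type boolean; the branch is decided by the
    // assigned value and isValid is silently overwritten.
    if (isValid = false) {
        // never reached
    }

    // With the constant first, the same typo is a compile error:
    // if (false = isValid) { ... }   // error: the left-hand side must be a variable

    // Neither problem exists with the plain form:
    if (!isValid) {
        // ...
    }
}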
Absolutely the first. The second betrays a lack of understanding of the nature of expressions and values, and as part of the coding standard, it implies that the employer expects to hire very incompetent programmers - not a good omen.
Everybody recognizes this snippet:
if (isValid.toString().length() > 4) {
    // code
}
I think your second example points in the same direction.
It was discussed for C# several hours ago.
The false == isValid construct is a leftover from the C world, where the compiler would allow you to do an assignment inside an if condition. In Java this is less of a problem: a non-boolean assignment in a condition does not even compile, and IDEs typically warn about an accidental boolean assignment there.
Overall, the second option is too verbose.
IMO the first one is much more readable, while the second one is more verbose.
I would surely go for the first one.
You are evaluating the variable, not false, so the latter is not right from a readability perspective. I would personally stick with the first option.
I'm going to attempt a comprehensive answer here that incorporates all the above answers.
The first style is definitely to be preferred for the following reasons:
it's shorter
it is more readable, and hence easier to understand
it is more widely used, which means that readers will recognize the pattern more quickly
"false==..." rather than "...==false" is yet another violation of natural order, which makes the reader think "is there something strange going on that I need to pay attention to?", when there isn't.
The only exception to this is when the variable is a Boolean rather than a boolean. Even then, false == isValid is not a null-safe alternative: both forms unbox and throw a NullPointerException when isValid is null. If null has to be handled, a genuinely different expression such as Boolean.FALSE.equals(isValid), which evaluates to false for null, is what you want, and there are good arguments for spelling the comparison out in that case (see the sketch below).
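For the Boolean case, a short illustration of my own (not part of the original answer):

static void check(Boolean isValid) {
    // Both of these unbox and would throw a NullPointerException if isValid is null:
    //   if (!isValid) { ... }
    //   if (false == isValid) { ... }

    // This is a genuinely different, null-safe expression: it is true only
    // when isValid is exactly Boolean.FALSE, and false when isValid is null.
    if (Boolean.FALSE.equals(isValid)) {
        // handle the "explicitly false" case
    }
}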
The second style doesn't require you to negate the expression yourself (which might be far more complicated than just "isValid"). But writing "isValid == false" may lead to an unintended assignment if you forget to type two ='s, hence the idiom of putting on the left-hand side something that cannot be assigned to.
The first style seems to be preferred among people who know what they're doing.
I just want to say I learned C twenty years ago in school and have since moved on to Perl and Java and now C#, which all have the same syntax, and...
I think (!myvar) is the most popular
I think (myvar==false) is just fine too
in 20 years I have NEVER EVEN SEEN
(false==myvar)
I think your boss is smoking something-- I'm sorry but I'd take this as a sign your boss is some kind of control freak or numbskull.
Recently I got into a discussion with my team lead about using temp variables vs. calling getter methods. I was of the opinion for a long time that, if I know I am going to have to call a simple getter method quite a number of times, I should put its result into a temp variable and then use that variable instead. I thought this would be better both in terms of style and performance. However, my lead pointed out that in Java 4 and newer editions this is not really true any more. He is a believer in using a smaller variable space, so he told me that calling getter methods has a very negligible performance hit compared to using a temp variable, and hence using getters is better. However, I am not totally convinced by his argument. What do you guys think?
Never code for performance, always code for readability. Let the compiler do the work.
They can improve the compiler/runtime to run good code faster and suddenly your "Fast" code is actually slowing the system down.
Java compiler & runtime optimizations seem to address more common/readable code first, so your "Optimized" code is more likely to be de-optimized at a later time than code that was just written cleanly.
Note:
This answer is referring to Java code "tricks" like the ones the question referenced, not bad programming that might raise a loop from O(N) to O(N^2). Generally, write clean, DRY code and wait for an operation to take noticeably too long before fixing it. You will almost never reach this point unless you are a game designer.
Your lead is correct. In modern versions of the VM, simple getters that return a private field are inlined, meaning the performance overhead of a method call doesn't exist.
Don't forget that by assigning the value of getSomething() to a variable rather than calling it twice, you are assuming that getSomething() would have returned the same thing the second time you called it. Perhaps that's a valid assumption in the scenario you are talking about, but there are times when it isn't.
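A contrived sketch of when that assumption breaks (the classes and names here are made up for illustration):

import java.util.concurrent.atomic.AtomicInteger;

// Contrived example: a "getter" whose result can change between calls,
// e.g. because another thread or earlier call updates the underlying state.
class Ticket {
    private final AtomicInteger counter = new AtomicInteger();

    int getNumber() {
        return counter.incrementAndGet(); // different value on every call
    }
}

class CachingMatters {
    public static void main(String[] args) {
        Ticket ticket = new Ticket();

        // Calling the getter twice observes two different values...
        System.out.println(ticket.getNumber() + " vs " + ticket.getNumber()); // 1 vs 2

        // ...while caching it in a local freezes one snapshot, which may or
        // may not be what the surrounding code intends.
        int snapshot = ticket.getNumber();
        System.out.println(snapshot + " vs " + snapshot); // 3 vs 3
    }
}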
It depends. If you would like to make it clear that you use the same value again and again, I'd assign it to a temp variable. I'd do so if the call of the getter is somewhat lengthy, like myCustomObject.getASpecificValue().
You will get far fewer errors in your code if it is readable. So this is the main point.
The performance differences are very small or nonexistent.
If you keep code evolution in mind, simple getters in v1.0 tend to become not-so-simple getters in v2.0.
The coder who changes a simple getter into a not-so-simple getter usually has no clue that there is a function that calls this getter 10 times instead of once, and never corrects it there, etc.
That's why, from the point of view of the DRY principle, it makes sense to cache the value for repeated use.
I will not sacrifice code readability for a few microseconds.
Perhaps it is true that the getter performs better and can save you several microseconds at runtime. But I believe variables can save you several hours or perhaps days when bug-fixing time comes.
Sorry for the non-technical answer.
I think that recent versions of the JVM are often sufficiently clever to cache the result of a function call automatically, if some conditions are met. I think the function must have no side effects and reliably return the same result every time it is called. Note that this may or may not be the case for simple getters, depending on what other code in your class is doing to the field values.
If this is not the case and the called function does significant processing, then you would indeed be better off caching its result in a temporary variable. While the overhead of a call may be insignificant, a busy method will eat your lunch if you call it more often than necessary.
I also practice your style; even if not for performance reasons, I find my code more legible when it isn't full of cascades of function calls.
It is not worth it if it is just getFoo(). By caching it in a temp variable you are not making it much faster, and you are maybe asking for trouble because getFoo() may return a different value later. But if it is something like getFoo().getBar().getBaz().getSomething() and you know the value will not change within the block of code, then there may be a reason to use a temp variable for better readability.
A general comment: in any modern system, except for I/O, do not worry about performance issues. Blazing-fast CPUs and heaps of memory mean that all other issues are, most of the time, completely immaterial to the actual performance of your system. [Of course, there are exceptions, like caching solutions, but they are few and far between.]
Now, coming to this specific problem: yes, the compiler will inline all the getters. Yet even that is not the actual consideration; what should really matter is the overall readability and flow of your code. Replacing indirections with a local variable is better if the call is used multiple times; something like customer.getOrder().getAddress() is better captured in a local variable.
The virtual machine can handle the first four local variables more efficiently than any local variable declared after that (see lload and lload_<n> instructions). So caching the result of the (inlined) getter may actually hurt your performance.
Of course, on its own either performance influence is almost negligible, so if you want to optimize your code, make sure that you are really tackling an actual bottleneck!
Another reason not to use a temporary variable to hold the result of a method call is that by calling the method you always get the most up-to-date value. This may not be a problem with the current code, but it could become a problem when the code is changed.
I am in favour of using a temp variable if you are sure the getter will return the same value throughout the scope, because if the getter expression is long (ten characters or more), repeating it hurts readability.
I've tested it with a very simple piece of code:
I created a class with a simple getter for an int (I tried both a final and a non-final field for Num and didn't see any difference; mind that in this case num never changes either...!):
Num num = new Num(100_000_000);
and compared two different for loops:
// 1
for (int i = 0; i < num.getNumber(); ++i) { (...) }
// 2
int number = num.getNumber();
for (int i = 0; i < number; ++i) { (...) }
The results were around 3 ms for the first one and around 2 ms for the second one. So there is a tiny difference; nothing to worry about for small loops, but it may be more of a problem for big iterations or if you call getters constantly and need them a lot. For instance, in image processing, if you want to be quick, I would advise not calling getters repeatedly...
I'm +1 for saving the variable.
1) Readability over performance - your code is not just for you.
2) Performance might be negligible, but not all the time. I think it is important to be consistent and set a precedent. So, while it might not matter for one local variable, it could matter in a larger class using the same value multiple times, or in the case of looping.
3) Ease of changing the implementation / keeping the code DRY. For now you get the value from this one place with a getter, and theoretically you use the getter 100 times in one class. But in the future, if you want to change where/how you get the value, you now have to change it 100 times instead of just once where you saved it into a variable.