Sometimes it would be convenient to have an easy way of doing the following:
Foo a = dosomething();
if (a != null) {
    if (a.isValid()) {
        ...
    }
}
My idea was to have some kind of static “default” methods for uninitialized variables, like this:
class Foo {
    public boolean isValid() {
        return true;
    }
    public static boolean isValid() {
        return false;
    }
}
And now I could do this…
Foo a = dosomething();
if (a.isValid()) {
    // In our example case -> variable is initialized and the "normal" method gets called
} else {
    // In our example case -> variable is null
}
So, if a == null, the static “default” method from our class gets called; otherwise the method of our object gets called.
Is there either some keyword I’m missing to do exactly this, or is there a reason why this is not already implemented in programming languages like Java/C#?
Note: this example is not very breathtaking even if it worked, however there are examples where this would be - indeed - very nice.
It's very slightly odd; ordinarily, x.foo() runs the foo() method as defined by the object that the x reference is pointing to. What you propose is a fallback mechanism where, if x is null (is referencing nothing), then we don't look at the object that x is pointing to (there's nothing it's pointing at; hence, that is impossible), but instead we look at the type of x, the variable itself, and ask this type: Hey, can you give me the default impl of foo()?
The core problem is that you're assigning a definition to null that it just doesn't have. Your idea requires a redefinition of what null means, which means the entire community needs to go back to school. I think the current definition of null in the java community is some nebulous, ill-defined cloud of confusion, so this is probably a good idea, but it is a huge commitment, and it is extremely easy for the OpenJDK team to dictate a direction and for the community to just ignore it. The OpenJDK team should be very hesitant in trying to 'solve' this problem by introducing a language feature, and they are.
Let's talk about the definitions of null that make sense, which definition of null your idea specifically is catering to (to the detriment of the other interpretations!), and how catering to that specific idea is already easy to do in current java, i.e. - what you propose sounds outright daft to me, in that it's just unnecessary and forces an opinion of what null means down everybody's throats for no reason.
Not applicable / undefined / unset
This definition of null is exactly how SQL defines it, and it has the following properties:
There is no default implementation available. By definition! How can one define the size of, say, an unset list? You can't say 0. You have no idea what the list is supposed to be. The very point is that interaction with an unset/not-applicable/unknown value should immediately lead to a result that represents either [A] the programmer messed up - the fact that they think they can interact with this value means they programmed a bug; they made an assumption about the state of the system which does not hold - or [B] that the unset nature is infectious: the operation returns the notion 'unknown / unset / not applicable' as its result.
SQL chose the B route: any interaction with NULL in SQL land is infectious. For example, even NULL = NULL in SQL is NULL, not FALSE. It also means that all booleans in SQL are tri-state, but this actually 'works', in that one can honestly fathom this notion. If I ask you: Hey, are the lights on?, then there are 3 reasonable answers: Yes, No, and I can't tell you right now; I don't know.
In my opinion, java as a language is meant for this definition as well, but has mostly chosen the [A] route: throw an NPE to let everybody know there is a bug, and to let the programmer get to the relevant line extremely quickly. NPEs are easy to solve, which is why I don't get why everybody hates NPEs. I love NPEs. So much better than some default behaviour that is usually but not always what I intended (objectively speaking, it is better to have 50 bugs that each take 3 minutes to solve than one bug that takes an entire working day, by a large margin!) – this definition 'works' with the language:
Uninitialized fields and uninitialized values in an array begin as null, and in the absence of further information, treating them as unset is correct.
They are, in fact, infectiously erroneous: virtually all attempts to interact with them result in an exception, except ==, but that is intentional, for the same reason that in SQL IS NULL will return TRUE or FALSE and not NULL: now we're actually talking about the pointer nature of the reference itself ("foo" == "foo" can be false if the 2 strings aren't the same ref: clearly == in java between objects is about the references themselves and not about the objects referenced).
A key aspect to this is that null has absolutely no semantic meaning, at all. Its lack of semantic meaning is the point. In other words, null doesn't mean that a value is short or long or blank or indicative of anything in particular. The only thing it does mean is that it means nothing. You can't derive any information from it. Hence, foo.size() is not 0 when foo is unset/unknown - the question 'what is the size of the object foo is pointing at' is unanswerable, in this definition, and thus NPE is exactly right.
Your idea would hurt this interpretation - it would confound matters by giving answers to unanswerable questions.
Sentinel / 'empty'
null is sometimes used as a value that does have semantic meaning. Something specific. For example, if you ever wrote this, you're using this interpretation:
if (x == null || x.isEmpty()) return false;
Here you've assigned a semantic meaning to null - the same meaning you assigned to an empty string. This is common in java and presumably stems from some bass ackwards notion of performance. For example, in the eclipse ecj java parser system, all empty arrays are done with null pointers. For example, the definition of a method has a field Argument[] arguments (for the method parameters; using argument is the slightly wrong word, but it is used to store the param definitions); however, for methods with zero parameters, the semantically correct choice is obviously new Argument[0]. However, that is NOT what ecj fills the Abstract Syntax Tree with, and if you are hacking around on the ecj code and assign new Argument[0] to this, other code will mess up as it just wasn't written to deal with this.
This is in my opinion bad use of null, but is quite common. And, in ecj's defense, it is about 4 times faster than javac, so I don't think it's fair to cast aspersions at their seemingly deplorably outdated code practices. If it's stupid and it works it isn't stupid, right? ecj also has a better track record than javac (going mostly by personal experience; I've found 3 bugs in ecj over the years and 12 in javac).
This kind of null does get a lot better if we implement your idea.
The better solution
What ecj should have done to get the best of both worlds: make a public constant for it! new Argument[0], the object, is entirely immutable. You need to make a single instance, once, ever, for an entire JVM run. The JDK itself does this; try it: List.of() returns the 'singleton empty list'. So does Collections.emptyList() for the old timers in the crowd. All lists 'made' with Collections.emptyList() are actually just refs to the same singleton 'empty list' object. This works because the lists these methods make are entirely immutable.
The same can and generally should apply to you!
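As a rough sketch of that 'public constant' idea, assuming an Argument class along the lines of ecj's (the names here are illustrative, not ecj's actual API):
public class Argument {
    // One shared, effectively immutable empty array for the entire JVM run:
    public static final Argument[] NO_ARGUMENTS = new Argument[0];
    // ... rest of the class ...
}
Call sites then assign Argument.NO_ARGUMENTS instead of null, and every zero-parameter method shares the same instance.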
If you ever write this:
if (x == null || x.isEmpty())
then you messed up if we go by the first definition of null, and you're simply writing needlessly wordy, but correct, code if we go by the second definition. You've come up with a solution to address this, but there's a much, much better one!
Find the place where x got its value, and address the boneheaded code that decided to return null instead of "". You should in fact emphatically NOT be adding null checks to your code, because it's far too easy to get into this mode where you almost always do it, and therefore you rarely actually have null refs, but it's just swiss cheese laid on top of each other: There may still be holes, and then you get NPEs. Better to never check so you get NPEs very quickly in the development process - somebody returned null where they should be returning "" instead.
Sometimes the code that made the bad null ref is out of your control. In that case, do the same thing you should always do when working with badly designed APIs: Fix it ASAP. Write a wrapper if you have to. But if you can commit a fix, do that instead. This may require making such an object.
Sentinels are awesome
Sometimes sentinel objects (objects that 'stand in' for this default / blank take, such as "" for strings, List.of() for lists, etc.) can be a bit more fancy than this. For example, one can imagine using LocalDate.of(1800, 1, 1) as a sentinel for a missing birthdate, but do note that this particular instance is not a great idea: it does crazy stuff. If you write code to determine the age of a person, it will suddenly say someone is 212 years old, and it starts giving completely wrong answers is significantly worse than throwing an exception. With the exception you know you have a bug sooner, and you get a stacktrace that lets you find it in literally 500 milliseconds (just click the line, voila - that is the exact line you need to look at right now to fix the problem).
But you could make a LocalDate object that does some things (such as: It CAN print itself; sentinel.toString() doesn't throw NPE but prints something like 'unset date'), but for other things it will throw an exception. For example, .getYear() would throw.
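A minimal sketch of that idea; since java.time.LocalDate is final and can't be subclassed, this uses a hypothetical wrapper type (all names here are made up):
import java.time.LocalDate;

public final class MaybeDate {
    private static final MaybeDate MISSING = new MaybeDate(null);
    private final LocalDate value; // null here means "unset"

    private MaybeDate(LocalDate value) {
        this.value = value;
    }

    public static MaybeDate of(LocalDate value) {
        return value == null ? MISSING : new MaybeDate(value);
    }

    public static MaybeDate missing() {
        return MISSING;
    }

    // The sentinel can print itself...
    @Override
    public String toString() {
        return value == null ? "unset date" : value.toString();
    }

    // ...but refuses to answer questions it cannot answer.
    public int getYear() {
        if (value == null) {
            throw new IllegalStateException("date is unset");
        }
        return value.getYear();
    }
}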
You can also make more than one sentinel. If you want a sentinel that means 'far future', that's trivially made (LocalDate.of(9999, 12, 31) is pretty good already), and you can also have one as 'for as long as anyone remembers', e.g. 'distant past'. That's cool, and not something your proposal could ever do!
You will have to deal with the consequences though. In some small ways the java ecosystem's definitions don't mesh with this, and null would perhaps have been a better standin. For example, the equals contract clearly states that a.equals(a) must always hold, and yet, just like in SQL NULL = NULL isn't TRUE, you probably don't want missingDate.equals(missingDate) to be true; that's conflating the meta with the value: You can't actually tell me that 2 missing dates are equal. By definition: The dates are missing. You do not know if they are equal or not. It is not an answerable question. And yet we can't implement the equals method of missingDate as return false; (or, better yet, as you also can't really know they aren't equal either, throw an exception) as that breaks contract (equals methods must have the identity property and must not throw, as per its own javadoc, so we can't do either of those things).
Dealing with null better
There are a few things that make dealing with null a lot easier:
Annotations: APIs can and should be very clear in communicating when their methods can return null and what that means. Annotations that turn that documentation into compiler-checked documentation are awesome. Your IDE can start warning you, as you type, that null may occur and what that means, and will say so in auto-complete dialogs too. And it's all entirely backwards compatible in all senses of the word: no need to start considering giant swaths of the java ecosystem as 'obsolete' (unlike Optional, which mostly sucks).
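A small sketch of what that looks like; the exact annotation package depends on which library you pick (JSpecify is assumed here, but Checker Framework, JetBrains, or Eclipse annotations work the same way), and the interface is made up for illustration:
import org.jspecify.annotations.Nullable;

interface UserDirectory {
    // Documented and tool-checkable: this lookup may return null,
    // meaning "no phone number on record" for that user.
    @Nullable String phoneNumberOf(String userId);
}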
Optional, except this is a non-solution. The type isn't orthogonal: you can't write a method that takes a List<MaybeOptionalorNot<String>> and works on both List<String> and List<Optional<String>>, even though a method that checks the 'is it some or is it none?' state of all list members and doesn't add anything (except maybe shuffle things around) would work equally well on both, and yet you just can't write it. This is bad, and it means all usages of Optional must be 'unrolled' on the spot, and e.g. Optional<X> should show up pretty much never ever as a parameter type or field type. Only as a return type, and even that is dubious - I'd just stick to what Optional was made for: as the return type of Stream terminal operations.
Adopting it also isn't backwards compatible. For example, hashMap.get(key) should, in all possible interpretations of what Optional is for, obviously return an Optional<V>, but it doesn't, and it never will, because java doesn't break backwards compatibility lightly and breaking that is obviously far too heavy an impact. The only real solution is to introduce java.util2 and a complete incompatible redesign of the collections API, which is splitting the java ecosystem in twain. Ask the python community (python2 vs. python3) how well that goes.
Use sentinels, use them heavily, make them available. If I were designing LocalDate, I'd have created LocalDate.FAR_FUTURE and LocalDate.DISTANT_PAST (but let it be clear that I think Stephen Colebourne, who designed JSR-310, is perhaps the best API designer out there. But nothing is so perfect that it can't be complained about, right?)
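Nothing stops you from defining such sentinels yourself today; a minimal sketch (the constant names are the answer's suggestion, not part of java.time):
import java.time.LocalDate;

public final class Dates {
    public static final LocalDate FAR_FUTURE = LocalDate.of(9999, 12, 31);
    public static final LocalDate DISTANT_PAST = LocalDate.of(-9999, 1, 1);

    private Dates() {}
}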
Use API calls that allow defaulting. Map has this.
Do NOT write this code:
String phoneNr = phoneNumbers.get(userId);
if (phoneNr == null) return "Unknown phone number";
return phoneNr;
But DO write this:
return phoneNumbers.getOrDefault(userId, "Unknown phone number");
Don't write:
Map<Course, List<Student>> participants;

void enrollStudent(Student student) {
    List<Student> participating = participants.get(econ101);
    if (participating == null) {
        participating = new ArrayList<Student>();
        participants.put(econ101, participating);
    }
    participating.add(student);
}
instead write:
Map<Course, List<Student>> participants;

void enrollStudent(Student student) {
    participants.computeIfAbsent(econ101, k -> new ArrayList<Student>())
        .add(student);
}
and, crucially, if you are writing APIs, ensure things like getOrDefault, computeIfAbsent, etc. are available so that the users of your API don't have to deal with null nearly as much.
You can write a static test() method like this:
static <T> boolean test(T object, Predicate<T> validation) {
    return object != null && validation.test(object);
}
and
static class Foo {
    public boolean isValid() {
        return true;
    }
}

static Foo dosomething() {
    return new Foo();
}

public static void main(String[] args) {
    Foo a = dosomething();
    if (test(a, Foo::isValid))
        System.out.println("OK");
    else
        System.out.println("NG");
}
output:
OK
If dosomething() returns null, it prints NG
Not exactly, but take a look at Optional:
Optional.ofNullable(dosomething())
.filter(Foo::isValid)
.ifPresent(a -> ...);
I just randomly tried seeing if new String(); would compile and it did (because, according to Oracle's Java documentation on "Expressions, Statements, and Blocks", one of the valid statement types is "object creation").
However, new int[0]; is giving me a "not a statement" error.
What's wrong with this? Aren't I creating an array object with new int[0]?
EDIT:
To clarify this question, the following code:
class Test {
    void foo() {
        new int[0];
        new String();
    }
}
causes a compiler error on new int[0];, whereas new String(); on its own is fine. Why is one not acceptable and the other one is fine?
The reason is a somewhat overengineered spec.
The idea behind expressions not being valid statements is that they accomplish nothing whatsoever. 5 + 2; does nothing on its own. You must assign it to something, or pass it to something, otherwise why write it?
There are exceptions, however: Expressions which, on their own, will (or possibly will) have side effects. For example, whilst this is illegal:
void foo(int a) {
    a + 1;
}
This is not:
void foo(int a) {
    a++;
}
That is because, on its own, a++ is not completely useless, it actually changes things (a is modified by doing this). Effectively, 'ignoring the value' (you do nothing with a + 1 in that first snippet) is acceptable if the act of producing the value on its own causes other stuff to happen: After all, maybe that is what you were after all along.
For that reason, invoking methods is also a legit expression statement, and in fact it is quite common that you invoke methods (even ones that don't return void) while ignoring the return value. For void methods it's the only legal way to invoke them, even.
Constructors are technically methods and can have side effects. It is extremely unlikely, and quite bad code style, if this method:
void doStuff() {
new Something();
}
is 'sensible' code, but it could in theory be written, bad as it may be: The constructor of the Something class may do something useful and perhaps that's all you want to do here: Make that constructor run, do the useful thing, and then take the created object and immediately toss it in the garbage. Weird, but, okay. You're the programmer.
Contrast with:
new Something[10];
This is different: The compiler knows what the array 'constructor' does. And what it does is nothing useful - it creates an object and returns a reference to the object, and that is all that happens. If you then instantly toss the reference in the garbage, then the entire operation was a complete waste of time, and surely you did not intend to do nothing useful with such a bizarre statement, so the compiler designers thought it best to just straight up disallow you from writing it.
This 'oh dear that code makes no sense therefore I shall not compile it' is very limited and mostly an obsolete aspect of the original compiler spec; it's never been updated and this is not a good way to trust that code is sensible; there's all sorts of linter tools out there that go vastly further in finding you code that just cannot be right, so if you care about that sort of thing, invest in learning those.
Nevertheless, the java 1.0 spec had this stuff baked in and there is no particularly good reason to drop this aspect of the java spec, therefore, it remains, and constructing a new array is not a valid ExpressionStatement.
As JLS §14.8 states, specifically, a ClassInstanceCreationExpression is in the list of valid expression statements. Follow that production to the definition of ClassInstanceCreationExpression and you'll find that it specifically refers to invoking constructors, and not to array construction.
Thus, the JLS is specific and requires this behaviour. javac is simply following the spec.
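To make the distinction concrete, here is a small sketch of what javac accepts and rejects under these rules (class and method names are made up):
class ExpressionStatements {
    void demo(int a) {
        new String();          // compiles: class instance creation is an ExpressionStatement
        a++;                   // compiles: increment expression
        System.out.println();  // compiles: method invocation
        // a + 1;              // does not compile: not a statement
        // new int[0];         // does not compile: array creation is not an ExpressionStatement
    }
}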
In the following piece of code we make a call listType.getDescription() twice:
for (ListType listType : this.listTypeManager.getSelectableListTypes())
{
    if (listType.getDescription() != null)
    {
        children.add(new SelectItem(listType.getId(), listType.getDescription()));
    }
}
I would tend to refactor the code to use a single variable:
for (ListType listType : this.listTypeManager.getSelectableListTypes())
{
    String description = listType.getDescription();
    if (description != null)
    {
        children.add(new SelectItem(listType.getId(), description));
    }
}
My understanding is that the JVM is somehow optimized for the original code, especially for nested calls like children.add(new SelectItem(listType.getId(), listType.getDescription()));.
Comparing the two options, which one is the preferred method and why? That is in terms of memory footprint, performance, readability/ease, and others that don't come to my mind right now.
When does the latter code snippet become more advantageous than the former? That is, is there any (approximate) number of listType.getDescription() calls at which using a temporary local variable becomes more desirable, given that listType.getDescription() always requires some stack operations to load the this reference?
I'd nearly always prefer the local variable solution.
Memory footprint
A single local variable costs 4 or 8 bytes. It's a reference and there's no recursion, so let's ignore it.
Performance
If this is a simple getter, the JVM can memoize it itself, so there's no difference. If it's an expensive call which can't be optimized, memoizing it manually makes it faster.
Readability
Follow the DRY principle. In your case it hardly matters, as the local variable name is about as long as the method call, but for anything more complicated it's about readability: you don't have to spot the 10 differences between the two expressions. If you know they're the same, make it clear by using the local variable.
Correctness
Imagine your SelectItem does not accept nulls and your program is multithreaded. The value of listType.getDescription() can change in the meantime and you're toast.
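To spell that out with the question's own loop body (purely illustrative): getDescription() is evaluated twice, and nothing guarantees the second evaluation sees the value the null check saw:
if (listType.getDescription() != null) {
    // Another thread may have changed the description by now,
    // so this second call can still return null:
    children.add(new SelectItem(listType.getId(), listType.getDescription()));
}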
Debugging
Having a local variable containing an interesting value is an advantage.
The only thing to win by omitting the local variable is saving one line. So I'd do it only in cases when it really doesn't matter:
very short expression
no possible concurrent modification
simple private final getter
I think way number two is definitely better because it improves readability and maintainability of your code, which is the most important thing here. This kind of micro-optimization won't really help you with anything unless you're writing an application where every millisecond is important.
I'm not sure either is preferred. What I would prefer is clearly readable code over performant code, especially when that performance gain is negligible. In this case I suspect there's next to no noticeable difference (especially given the JVM's optimisations and code-rewriting capabilities)
In the context of imperative languages, the value returned by a function call cannot be memoized (See http://en.m.wikipedia.org/wiki/Memoization) because there is no guarantee that the function has no side effect. Accordingly, your strategy does indeed avoid a function call at the expense of allocating a temporary variable to store a reference to the value returned by the function call.
In addition to being slightly more efficient (which does not really matter unless the function is called many times in a loop), I would opt for your style due to better code readability.
I agree on everything. About the readability I'd like to add something:
I see lots of programmers doing things like:
if (item.getFirst().getSecond().getThird().getForth() == 1 ||
item.getFirst().getSecond().getThird().getForth() == 2 ||
item.getFirst().getSecond().getThird().getForth() == 3)
Or even worse:
item.getFirst().getSecond().getThird().setForth(item2.getFirst().getSecond().getThird().getForth())
If you are calling the same chain of 10 getters several times, please use an intermediate variable. It's just much easier to read and debug.
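For example, the first chain above collapses to something like this (names taken from the snippet, purely illustrative):
int forth = item.getFirst().getSecond().getThird().getForth();
if (forth == 1 || forth == 2 || forth == 3) {
    // ...
}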
I would agree with the local variable approach for readability only if the local variable's name is self-documenting. Calling it "description" wouldn't be enough (which description?). Calling it "selectableListTypeDescription" would make it clear. I would add that the loop variable in the for loop should be named "selectableListType" (especially if the "listTypeManager" has accessors for other ListTypes).
The other reason would be if there's no guarantee this is single-threaded or your list is immutable.
I received a remark about a piece of code in this style:
Iterable<String> upperCaseNames = Iterables.transform(
    lowerCaseNames, new Function<String, String>() {
        public String apply(String input) {
            return input.toUpperCase();
        }
    });
The person said that every time I go through this code, I instantiate this anonymous Function class, and that I should rather have a single instance in, say, a static variable:
static Function<String, String> toUpperCaseFn =
    new Function<String, String>() {
        public String apply(String input) {
            return input.toUpperCase();
        }
    };
...
Iterable<String> upperCaseNames =
    Iterables.transform(lowerCaseNames, toUpperCaseFn);
On a very superficial level, this somehow makes sense; instantiating a class multiple times has to waste memory or something, right?
On the other hand, people instantiate anonymous classes in middle of the code like there's no tomorrow, and it would be trivial for the compiler to optimize this away.
Is this a valid concern?
Fun fact about HotSpot JVM optimizations: if you instantiate an object that isn't passed outside of the current method, the JVM will perform optimizations at the bytecode level.
Usually, stack allocation is associated with languages that expose the memory model, like C++. You don't have to delete stack variables in C++ because they're automatically deallocated when the scope is exited. This is contrary to heap allocation, which requires you to delete the pointer when you're done with it.
In the Hot Spot JVM, the bytecode is analyzed to decide if an object can "escape" the thread. There are three levels of escape:
No escape - the object is only used within the method/scope it is created, and the object can't be accessed outside the thread.
Local/Arg escape - the object is returned by the method that creates it or passed to a method that it calls, but none of those methods will put that object somewhere that it can be accessed outside of the thread.
Global escape - the object is put somewhere that it can be accessed in another thread.
This basically is analogous to the questions, 1) do I pass it to another method or return it, and 2) do I associate it with something attached to a GC root like a ClassLoader or something stored in a static field?
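A rough illustration of the three levels (class, method, and field names are all made up):
import java.util.ArrayList;
import java.util.List;

class EscapeDemo {
    static final List<int[]> GLOBAL = new ArrayList<>();

    int noEscape() {
        int[] pair = {1, 2};           // used only inside this method: "no escape"
        return pair[0] + pair[1];
    }

    int[] argEscape() {
        return new int[] {1, 2};       // handed back to the caller: "local/arg escape"
    }

    void globalEscape() {
        GLOBAL.add(new int[] {1, 2});  // reachable via a static field: "global escape"
    }
}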
In your particular case, the anonymous object will be tagged as "local escape", which only means that any locks (read: use of synchronized) on the object will be optimized away. (Why synchronize on something that won't ever be used in another thread?) This is different from "no escape", which will do allocation on the stack. It's important to note that this "allocation" isn't the same as heap allocation. What it really does is allocate space on the stack for all the variables inside the non-escaping object. If you have 3 fields, int, String, and MyObject, inside the no-escape object, then three stack variables will be allocated: an int, a String reference, and a MyObject reference – the MyObject instance itself will still be stored in the heap unless it is also analyzed to have "no escape". The object allocation is then optimized away and constructors/methods will run using the local stack variables instead of heap variables.
That being said, it sounds like premature optimization to me. Unless the code is later proven to be slow and is causing performance problems, you shouldn't do anything to reduce its readability. To me, this code is pretty readable, I'd leave it alone. This is totally subjective, of course, but "performance" is not a good reason to change code unless it has something to do with its actual running time. Usually, premature optimization results in code that's harder to maintain with minimal performance benefits.
Java 8+ and Lambdas
If allocating anonymous instances still bothers you, I recommend switching to using Lambdas for single abstract method (SAM) types. Lambda evaluation is performed using invokedynamic, and the implementation ends up creating only a single instance of a Lambda on the first invocation. More details can be found in my answer here and this answer here. For non-SAM types, you will still need to allocate an anonymous instance. The performance impact here will be negligible in most use cases, but IMO, it's more readable this way.
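For the question's snippet that would look roughly like this (assuming Guava's Iterables and a Guava version in which Function is a single-abstract-method type, so a method reference can stand in for the anonymous class):
Iterable<String> upperCaseNames =
    Iterables.transform(lowerCaseNames, String::toUpperCase);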
References
Escape analysis (wikipedia.org)
HotSpot escape analysis 14 | 11 | 8 (oracle.com)
What is a 'SAM type' in Java? (stackoverflow.com)
Why are Java 8 lambdas invoked using invokedynamic? (stackoverflow.com)
Short answer: No - don't worry.
Long answer: it depends on how frequently you're instantiating it. If it's in a frequently-called tight loop, maybe - though note that when the function is applied it calls String.toUpperCase() once for every item in the Iterable; each call presumably creates a new String, which will create far more GC churn.
"Premature optimization is the root of all evil" - Knuth
Found this thread: Java anonymous class efficiency implications - you may find it interesting.
Did some micro-benchmarking. The micro-benchmark compared: instantiating a (static inner) class per loop iteration, instantiating a (static inner) class once and using it in the loop, and the two similar variants but with anonymous classes. In the micro-benchmark the compiler seemed to extract the anonymous class out of the loop and, as predicted, promoted the anonymous class to an inner class of the caller. This meant all four methods were indistinguishable in speed. I also compared it to an outside class and again, same speed. The one with anonymous classes probably took ~128 bits of space more.
You can check out my micro-benchmark at http://jdmaguire.ca/Code/Comparing.java and http://jdmaguire.ca/Code/OutsideComp.java. I ran this with various values for wordLen, sortTimes, and listLen. As well, the JVM is slow to warm up, so I shuffled the method calls around. Please don't judge me for the awful non-commented code; I program better than that in real life. And microbenchmarking is almost as evil and useless as premature optimization.
I have a basic doubt regarding object creation in Java. Suppose I have two classes as follows:
class B {
    public int value = 100;
}
class A {
    public B getB() {
        return new B();
    }

    public void accessValue() {
        // accessing the value without storing object B
        System.out.println("value is :" + getB().value);

        // accessing the value by storing object B in variable b
        B b = getB();
        System.out.println("value is :" + b.value);
    }
}
My question is: does storing the object and then accessing the value make any difference in terms of memory, or are both the same?
They are both equivalent, since you are instantiating B both times. The first way is just a shorter version of the second.
The following piece of code uses an anonymous object, which can't be reused later in the code:
//accessing the value without storing object B
System.out.println("value is :"+getB().value);
The code below uses the object by assigning it to a reference:
//accessing the value by storing object B in variable b
B b=getB();
System.out.println("value is :"+b.value);
Memory- and performance-wise there's NOT much difference, except that in the latter version the stack frame holds an extra reference.
It is the same. This way, B b = getB(); just keeps your code more readable. Keep in mind that the object must be stored somewhere in memory anyway.
If you never reuse the B-object after this part, the first option with an anonymous object is probably neater:
the second option would need an additional store/load command (as Hot Licks mentioned) if it isn't optimized by the compiler
possibly storing the object in a variable first creates slight overhead for the garbage collector as opposed to an anonymous object, but that's more of a "look into that" than a definitive statement from me
If you do want to access a B a second time, storing one in its own variable is faster.
EDIT: ah, both points already mentioned above while I was typing.
You will not be able to tell the difference without looking at the generated machine code. It could be that the JIT puts the local variable b onto the stack. More likely, however, the JIT will optimize b away. It depends on the JRE and JIT you are using. In any case, the difference is minor and only significant in extremely special cases.
Actually there is no difference; in the second case you are just assigning the new object's reference to b.
So code-wise you cannot reuse the object beyond that single println if you use version 1, as you don't have a reference the way you do in the second case, unless you keep creating a new object for every method call.
In that case the difference, if any, would not be worth mentioning. In the second case an extra bytecode or two would probably be generated, if the compiler didn't optimize them away, but any decent JIT would almost certainly optimize the two cases to the identical machine code.
And, in any event, the cost of an extra store/load would be inconsequential for 99.9% of applications (and swamped in this case by the new operation).
Details: If you look at the bytecodes, in the first case the getB method is called and returns a value on the top of the stack. Then a getfield references value and places that on the top of the stack. Then a StringBuilder append is done to begin building the println parameter list.
In the second case there is an extra astore and aload (pointer store/load) after the getB invocation, and the setup for the StringBuilder is stuck between the two, so that side effects occur in the order specified in the source. If there were no side effects to worry about, the compiler might have chosen to do the very slightly more efficient dup and astore sequence. In any case, a decent JIT would recognize that b is never used again and optimize away the store.