In the following piece of code we call listType.getDescription() twice:
for (ListType listType : this.listTypeManager.getSelectableListTypes())
{
    if (listType.getDescription() != null)
    {
        children.add(new SelectItem(listType.getId(), listType.getDescription()));
    }
}
I would tend to refactor the code to use a single variable:
for (ListType listType : this.listTypeManager.getSelectableListTypes())
{
    String description = listType.getDescription();
    if (description != null)
    {
        children.add(new SelectItem(listType.getId(), description));
    }
}
My understanding is that the JVM is somehow optimized for the original code, and especially for nested calls like children.add(new SelectItem(listType.getId(), listType.getDescription()));.
Comparing the two options, which one is the preferred method and why? That is, in terms of memory footprint, performance, readability/ease of maintenance, and anything else that doesn't come to my mind right now.
When does the latter code snippet become more advantageous than the former? That is, is there an (approximate) number of listType.getDescription() calls at which using a temp local variable becomes more desirable, given that listType.getDescription() always requires some stack operations to store the this object?
I'd nearly always prefer the local variable solution.
Memory footprint
A single local variable costs 4 or 8 bytes. It's a reference and there's no recursion, so let's ignore it.
Performance
If this is a simple getter, the JVM can memoize it itself, so there's no difference. If it's an expensive call which can't be optimized, memoizing it manually makes the code faster.
Readability
Follow the DRY principle. In your case it hardly matters, as the local variable name is about as long as the method call, but for anything more complicated it improves readability: you don't have to find the ten differences between the two expressions. You know they're the same, so make that clear by using the local variable.
Correctness
Imagine your SelectItem does not accept nulls and your program is multithreaded. The value of listType.getDescription() can change between the check and the use, and you're toast.
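A minimal sketch of that failure mode, reusing the code from the question (it assumes SelectItem rejects nulls, which isn't shown above):

// Repeated call: the value is re-read after the check, so another thread
// can null out the description in between and SelectItem receives null.
if (listType.getDescription() != null)
{
    children.add(new SelectItem(listType.getId(), listType.getDescription())); // may now be null
}

// Local variable: the value that was checked is exactly the value that is used.
String description = listType.getDescription();
if (description != null)
{
    children.add(new SelectItem(listType.getId(), description)); // guaranteed non-null
}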
Debugging
Having a local variable containing an interesting value is an advantage.
The only thing you win by omitting the local variable is saving one line. So I'd do it only in cases where it really doesn't matter:
very short expression
no possible concurrent modification
simple private final getter
I think the second way is definitely better because it improves readability and maintainability of your code, which is the most important thing here. This kind of micro-optimization won't really help you unless you're writing an application where every millisecond is important.
I'm not sure either is preferred. What I would prefer is clearly readable code over performant code, especially when that performance gain is negligible. In this case I suspect there's next to no noticeable difference (especially given the JVM's optimisations and code-rewriting capabilities).
In the context of imperative languages, the value returned by a function call cannot be memoized (see http://en.m.wikipedia.org/wiki/Memoization) because there is no guarantee that the function has no side effects. Accordingly, your strategy does indeed avoid a function call at the expense of allocating a temporary variable to store a reference to the value returned by the function call.
In addition to being slightly more efficient (which does not really matter unless the function is called many times in a loop), I would opt for your style due to better code readability.
I agree with everything. About readability I'd like to add something:
I see lots of programmers doing things like:
if (item.getFirst().getSecond().getThird().getForth() == 1 ||
    item.getFirst().getSecond().getThird().getForth() == 2 ||
    item.getFirst().getSecond().getThird().getForth() == 3)
Or even worse:
item.getFirst().getSecond().getThird().setForth(item2.getFirst().getSecond().getThird().getForth())
If you are calling the same chain of ten getters several times, please use an intermediate variable. It's just much easier to read and debug.
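For instance, the condition above could be rewritten with an intermediate variable (the type name Third and the int return type are just assumptions for illustration):

Third third = item.getFirst().getSecond().getThird();
int forth = third.getForth();
if (forth == 1 || forth == 2 || forth == 3)
{
    ...
}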
I would agree with the local variable approach for readability only if the local variable's name is self-documenting. Calling it "description" wouldn't be enough (which description?). Calling it "selectableListTypeDescription" would make it clear. I would throw in that the incremented variable in the for loop should be named "selectableListType" (especially if the "listTypeManager" has accessors for other ListTypes).
The other reason would be if there's no guarantee this is single-threaded or your list is immutable.
Quick question: is this line atomic in C++ and Java?
class foo {
    bool test() {
        // Is this line atomic?
        return a == 1 ? 1 : 0;
    }
    int a;
};
If there are multiple threads accessing that line, we could end up doing the check a == 1 first, then a is updated, then the return, right?
Added: I didn't complete the class and of course, there are other parts which update a...
No, for both C++ and Java.
In Java, you need to make your method synchronized and protect other uses of a in the same way. Make sure you're synchronizing on the same object in all cases.
In C++, you need to use std::mutex to protect a, probably using std::lock_guard to make sure you properly unlock the mutex at the end of your function.
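A minimal Java sketch of what that could look like (the setter is made up here, since the code that updates a wasn't posted):

class Foo {
    private int a;

    synchronized boolean test() {        // all access goes through the same lock: this
        return a == 1;
    }

    synchronized void setA(int value) {  // hypothetical updater, protected the same way
        a = value;
    }
}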
return a == 1 ? 1 : 0;

is a simple way of writing

if (a == 1)
    return 1;
else
    return 0;
I don't see any code for updating a. But I think you could figure it out.
Regardless of whether there is a write, reading the value of a non-atomic type in C++ is not an atomic operation. If there are no writes then you might not care whether it's atomic; if some other thread might be modifying the value then you certainly do care.
The correct way of putting it is simply: No! (both for Java and C++)
A less correct but more practical answer is: technically this is not atomic, but on most mainstream architectures it is, at least for C++.
Nothing is being modified in the code you posted; the variable is only tested. The code will thus usually result in a single TEST (or similar) instruction accessing that memory location, and that is, incidentally, atomic. The instruction will read a cache line, and there will be one well-defined value in the respective location, whatever it may be.
However, this is incidental/accidental, not something you can rely on.
It will usually even work -- again, incidentally/accidentally -- when a single other thread writes to the value. For this, the CPU fetches a cache line, overwrites the location for the respective address within the cache line, and writes back the entire cache line to RAM. When you test the variable, you fetch a cache line which contains either the old or the new value (nothing in between). There are no happens-before guarantees of any kind, but you can still consider this "atomic".
It is much more complicated when several threads modify that variable concurrently (not part of the question). For this to work properly, you need to use something from C++11 <atomic>, or use an atomic intrinsic, or something similar. Otherwise it is very much unclear what happens, and what the result of an operation may be -- one thread might read the value, increment it and write it back, but another one might read the original value before the modified value is written back.
This is more or less guaranteed to end badly, on all current platforms.
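For the Java half of the question, the analogue of C++'s <atomic> (a sketch, not from the answer above) is java.util.concurrent.atomic, which gives you atomic read-modify-write operations without an explicit lock:

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    private final AtomicInteger a = new AtomicInteger(0);

    void increment() {
        a.incrementAndGet();     // atomic read-modify-write: no lost updates
    }

    boolean test() {
        return a.get() == 1;     // reads one well-defined value
    }
}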
No, it is not atomic (in general), although it can be on some architectures (in C++, for example, on Intel if the integer is aligned, which it will be unless you force it not to be).
Consider these three threads:
// thread one:            // thread two:            // thread three:
while (true)              while (true)              while (a)
    a = 0xFFFF0000;           a = 0x0000FFFF;           ;
Suppose the write to a is not atomic (for example, on Intel, if a is unaligned and, for the sake of discussion, has 16 bits in each of two consecutive cache lines). While it seems that the third thread can never come out of its loop (the two possible values of a are both non-zero), the fact is that the assignments are not atomic: thread two could update the higher 16 bits to 0, and thread three could read the lower 16 bits as 0 before thread two gets the time to complete the update, and so come out of the loop.
The whole conditional is irrelevant to the question, since the returned value is local to the thread.
No, it's still a test followed by a set and then a return.
Yes, multithreadedness will be a problem.
It's just syntactic sugar.
Your question can be rephrased as: is the statement
a == 1
atomic or not? No, it is not atomic; you should use std::atomic for a or check that condition under a lock of some sort. Whether the whole ternary operator is atomic or not does not matter in this context, as it does not change anything. If what you mean to ask is whether, in this code:
bool flag = somefoo.test();
flag stays consistent with a == 1: it definitely would not, and it is irrelevant whether the whole ternary operator in your question is atomic.
There are a lot of good answers here, but none of them mention the need in Java to mark a as volatile.
This is especially important if no other synchronization method is employed, but other threads could be updating a. Otherwise, you could be reading an old value of a.
Consider the following code:
bool done = false;

void Thread1() {
    while (!done) {
        do_something_useful_in_a_loop_1();
    }
    do_thread1_cleanup();
}

void Thread2() {
    do_something_useful_2();
    done = true;
    do_thread2_cleanup();
}
The synchronization between these two threads is done using a boolean variable done. This is a wrong way to synchronize two threads.
On x86, the biggest issue is the compile-time optimizations.
Part of the code of do_something_useful_2() can be moved below "done = true" by the compiler.
Part of the code of do_thread2_cleanup() can be moved above "done = true" by the compiler.
If do_something_useful_in_a_loop_1() doesn't modify "done", the compiler may re-write Thread1 in the following way:
if (!done) {
    while (true) {
        do_something_useful_in_a_loop_1();
    }
}
do_thread1_cleanup();
so Thread1 will never exit.
On architectures other than x86, the cache effects or out-of-order instruction execution may lead to other subtle problems.
Most race detectors will detect such a race. Also, most dynamic race detectors will report data races on the memory accesses that were intended to be synchronized with this bool (i.e. between do_something_useful_2() and do_thread1_cleanup()).
To fix such race you need to use compiler and/or memory barriers (if you are not an expert -- simply use locks).
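For the Java side of the original question, a minimal sketch of the same example with the flag marked volatile (the method names just mirror the pseudocode above):

class Worker {
    private volatile boolean done = false;    // volatile: writes become visible to other threads

    void thread1() {
        while (!done) {                       // re-reads the field on every iteration
            doSomethingUsefulInALoop1();
        }
        doThread1Cleanup();
    }

    void thread2() {
        doSomethingUseful2();
        done = true;                          // release: the work above cannot be reordered after this write
        doThread2Cleanup();
    }

    private void doSomethingUsefulInALoop1() { }
    private void doSomethingUseful2() { }
    private void doThread1Cleanup() { }
    private void doThread2Cleanup() { }
}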
We were having a discussion with my colleagues about inner assignments such as:
return result = myObject.doSomething();
or
if ( null == (point = field.getPoint()) )
Are these acceptable or should they be replaced by the following and why?
int result = myObject.doSomething();
return result;
or
Point point = field.getPoint();
if ( null == point)
The inner assignment is harder to read and easier to miss. In a complex condition it can even be missed entirely, which can cause errors.
E.g. this will be a hard-to-find error if the short-circuit evaluation of the condition prevents the assignment:
if (i == 2 && null == (point = field.getPoint())) ...
If i == 2 is false, the point variable is never assigned and will not have a value later on.
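A sketch of how that plays out (field and usePoint are just placeholders):

Point point = null;
if (i == 2 && null == (point = field.getPoint())) {
    // handle the missing point
}
usePoint(point);   // if i != 2, the assignment never ran and point is still null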
if ( null == (point = field.getPoint()) )
Pros:
One less line of code
Cons:
Less readable.
Doesn't restrict point's scope to the statement and its code block.
Doesn't offer any performance improvements as far as I am aware
Might not always be executed (when there is a condition preceding it that evaluates to false).
Cons outweigh pros 4 to 1, so I would avoid it.
This is mainly a matter of readability of the code. Avoid inner assignments to make your code readable, since you will not get any improvements from them anyway.
Functionally? Not necessarily.
For readability? Definitely yes.
They should be avoided. Reducing the number of identifiers/operations per line will increase readability and improve internal code quality. Here's an interesting study on the topic: http://dl.acm.org/citation.cfm?id=1390647
So bottom line, splitting up
return result = myObject.doSomething();
into
result = myObject.doSomething();
return result;
will make it easier for others to understand and work with your code. At the same time, it wouldn't be the end of the world if there were a couple inner assignments sprinkled throughout your code base, so long as they're easily understandable within their context.
Well, the first one is not exactly an inner assignment, but in the second case it does reduce readability... though in some cases, like the one below,
while ( null == (point = field.getPoint()) );
it's good to write it this way
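The classic example is the read loop, where the inner assignment is the idiomatic form (reader here is assumed to be a BufferedReader, and process is a placeholder):

String line;
while ((line = reader.readLine()) != null) {
    process(line);    // readLine() returns null at end of stream, ending the loop
}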
In both cases the first form is harder to read, and will make you want to change it whenever you want to inspect the value in a debugger. I don't know how often I've cursed "concise" code when step-debugging.
There are a very few cases where inner assignments reduce program complexity, for example in if (x != null && y != null && (c = f(x, y)) > 0) {...}, where you really only need the assignment in the case when it is executed within the complex condition.
But in most cases inner assignments reduce readability and they easily can be missed.
I think inner assignments are a relic of the first versions of the C programming language in the seventies, when compilers didn't do any optimizations and the work of optimizing the code was left to the programmers. In those days inner assignments were faster, because it was not necessary to read the value again from the variable, but today, with fast computers and optimizing compilers, this point doesn't count any more. Nevertheless some C programmers were used to them. I think Sun introduced inner assignments to Java only because they wanted to be similar to C and make it easy for C programmers to switch to Java.
Always work and aim for code readability, not writability. The same goes for stuff like a > b ? x : y;
There are probably many developers out there who have no issues reading your first code snippet, but most of them are used to the second snippet.
The more verbose form also makes it easier to follow in a Debugger such as Eclipse. I often split up single line assignments so the intermediate values are more easily visible.
Although not directly requested by the OP, a similar case is function calls as method arguments: they may save lines but are harder to debug:
myFunction(funcA(), funcB());
does not show the return types and is harder to step through. It's also more error-prone if the two values are of the same type.
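Splitting the arguments into locals makes the intermediate values visible in the debugger and makes an accidental swap much harder (the type names here are just placeholders):

ResultA a = funcA();
ResultB b = funcB();
myFunction(a, b);   // each value can be inspected before the call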
I don't see any harm in using inner assignments. They save a few lines of code (though I'm sure they don't improve compile or execution time, or memory). The only drawback is that to someone else they might appear cumbersome.
I'm cleaning up Java code for someone who starts their functions by declaring all variables up top, and initializing them to null/0/whatever, as opposed to declaring them as they're needed later on.
What are the specific guidelines for this? Are there optimization reasons for one way or the other, or is one way just good practice? Are there any cases where it's acceptable to deviate from whatever the proper way of doing it is?
Declare variables as close to the first spot that you use them as possible. It's not really anything to do with efficiency, but makes your code much more readable. The closer a variable is declared to where it is used, the less scrolling/searching you have to do when reading the code later. Declaring variables closer to the first spot they're used will also naturally narrow their scope.
The proper way is to declare variables exactly when they are first used and minimize their scope in order to make the code easier to understand.
Declaring variables at the top of functions is a holdover from C (where it was required), and has absolutely no advantages (variable scope exists only in the source code, in the byte code all local variables exist in sequence on the stack anyway). Just don't do it, ever.
Some people may try to defend the practice by claiming that it is "neater", but any need to "organize" code within a method is usually a strong indication that the method is simply too long.
From the Java Code Conventions, Chapter 6 on Declarations:
6.3 Placement
Put declarations only at the beginning of blocks. (A block is any code surrounded by curly braces "{" and "}".) Don't wait to declare variables until their first use; it can confuse the unwary programmer and hamper code portability within the scope.
void myMethod() {
    int int1 = 0;         // beginning of method block

    if (condition) {
        int int2 = 0;     // beginning of "if" block
        ...
    }
}
The one exception to the rule is indexes of for loops, which in Java can be declared in the for statement:
for (int i = 0; i < maxLoops; i++) { ... }
Avoid local declarations that hide declarations at higher levels. For example, do not declare the same variable name in an inner block:
int count;
...
myMethod() {
    if (condition) {
        int count = 0;    // AVOID!
        ...
    }
    ...
}
If you have a kabillion variables used in various isolated places down inside the body of a function, your function is too big.
If your function is a comfortably understandable size, there's no difference between "all up front" and "just as needed".
The only not-up-front variable would be in the body of a for statement.
for (Iterator i = someObject.iterator(); i.hasNext(); )
From Google Java Style Guide:
4.8.2.2 Declared when needed
Local variables are not habitually declared at the start of their containing block or block-like construct. Instead, local variables are declared close to the point they are first used (within reason), to minimize their scope. Local variable declarations typically have initializers, or are initialized immediately after declaration.
Well, I'd follow what Google does. On a superficial level it might seem "neater" to declare all variables at the top of the method/function, but it's quite apparent that declaring variables as needed is beneficial. It's subjective though; do whatever feels intuitive to you.
I've found that declaring them as-needed results in fewer mistakes than declaring them at the beginning. I've also found that declaring them at the minimum scope possible also prevents mistakes.
When I looked at the bytecode generated for the different declaration locations a few years ago, I found it was more or less identical. There were occasionally differences depending on when variables were assigned. Even something like:
for (Object o : list) {
    Object temp = ...;   // was not "redeclared" every loop iteration
}
vs
Object temp;
for (Object o : list) {
    temp = ...;          // nearly identical bytecode, if not exactly identical
}
Came out more or less identical
I am doing this very same thing at the moment. All of the variables in the code that I am reworking are declared at the top of the function. As I've been looking through it, I've seen that several variables are declared but NEVER used, or they are declared and operations are done with them (i.e. parsing a String and then setting a Calendar object with the date/time values from the string) but then the resulting Calendar object is NEVER used.
I am going through and cleaning these up by taking the declarations from the top and moving them down in the function to a spot closer to where they are used.
Defining a variable in a wider scope than needed hinders understandability quite a bit. Limited scope signals that the variable has meaning only for this small block of code, and you can stop thinking about it when reading further. This is a pretty important issue because of the tiny short-term working memory the brain has (it is said that on average you can keep track of only 7 things). One less thing to keep track of is significant.
Similarly, you really should try to avoid variables in the literal sense: try to assign everything only once, and declare it final so this is known to the reader. Not having to keep track of whether something changes or not really cuts the cognitive load.
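For example (a sketch; the names person and MAX_LENGTH are just illustrative):

final String name = person.getName();   // assigned exactly once, never reassigned
final int length = name.length();       // declared right where it is first needed
if (length > MAX_LENGTH) {
    throw new IllegalArgumentException("name too long: " + name);
}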
Principle: Place local variable declarations as close to their first use as possible, and NOT simply at the top of a method. Consider this example:
/** Return true iff s is a blah or a blub. */
public boolean checkB(String s) {
    // Return true if s is a blah
    ... code to return true if s is a blah ...

    // Return true if s is a blub
    int helpblub = s.length() + 1;
    ... rest of code to return true if s is a blub ...

    return false;
}
Here, the local variable helpblub is placed where it is needed, in the code that tests whether s is a blub. It is part of the code that implements "return true if s is a blub".
It makes absolutely no logical sense to put the declaration of helpblub as the first statement of the method. The poor reader would wonder: why is that variable there? What is it for?
I think it is actually objectively provable that the declare-at-the-top style is more error-prone.
If you mutate-test code in either style by moving lines around at random (to simulate a merge gone bad or someone unthinkingly cut+pasting), then the declare-at-the-top style has a greater chance of compiling while functionally wrong.
I don't think declare-at-the-top has any corresponding advantage that doesn't come down to personal preference.
So assuming you want to write reliable code, learn to prefer doing just-in-time declaration.
It's a matter of readability and personal preference rather than performance. The compiler does not care and will generate the same code anyway.
I've seen people declare at the top and at the bottom of functions. I prefer the top, where I can see them quickly. It's a matter of choice and preference.
Is there any performance penalty for the following code snippet?
for (int i = 0; i < someValue; i++)
{
    Object o = someList.get(i);
    o.doSomething();
}
Or does this code actually make more sense?
Object o;
for (int i = 0; i < someValue; i++)
{
    o = someList.get(i);
    o.doSomething();
}
If in byte code these two are totally equivalent then obviously the first method looks better in terms of style, but I want to make sure this is the case.
In today's compilers, no. I declare objects in the smallest scope I can, because it's a lot more readable for the next guy.
To quote Knuth, who may be quoting Hoare:
Premature optimization is the root of all evil.
Whether the compiler will produce marginally faster code by defining the variable outside the loop is debatable, and I imagine it won't. I would guess it'll produce identical bytecode.
Compare this with the number of errors you'll likely prevent by correctly-scoping your variable using in-loop declaration...
There's no performance penalty for declaring the Object o within the loop.
The compiler generates very similar bytecode and makes the correct optimizations.
See the article Myth - Defining loop variables inside the loop is bad for performance for a similar example.
You can disassemble the code with javap -c and check what the compiler actually emits. On my setup (java 1.5/mac compiled with eclipse), the bytecode for the loop is identical.
The first code is better, as it restricts the scope of the o variable to the for block. From a performance perspective, it might not have any effect in Java, but it might in lower-level compilers: they might put the variable in a register if you use the first form.
In fact, some people might think that if the compiler is dumb, the second snippet is better in terms of performance. This is what an instructor told me at college, and I laughed at him for the suggestion! Basically, compilers allocate memory on the stack for the local variables of a method just once at the start of the method (by adjusting the stack pointer) and release it at the end of the method (again by adjusting the stack pointer, assuming it's not C++ and there are no destructors to call). So all stack-based local variables in a method are allocated at once, no matter where they are declared and how much memory they require. If the compiler is dumb, there is actually no difference in terms of performance, and if it's smart enough, the first code can be better, as it helps the compiler understand the scope and lifetime of the variable! By the way, if it's really smart, there should be absolutely no difference in performance, as it infers the actual scope.
Construction of an object using new is totally different from just declaring it, of course.
I think readability is more important than performance, and from a readability standpoint, the first code is definitely better.
I've got to admit I don't know Java. But are these two equivalent? Are the object lifetimes the same? In the first example, I assume (not knowing Java) that o will be eligible for garbage collection as soon as the loop terminates.
But in the second example, surely o won't be eligible for garbage collection until the outer scope (not shown) is exited?
Don't prematurely optimize. Better than either of these is:
for (Object o : someList) {
    o.doSomething();
}
because it eliminates boilerplate and clarifies intent.
Unless you are working on embedded systems, in which case all bets are off. Otherwise, don't try to outsmart the JVM.
I've always thought that most compilers these days are smart enough to do the latter option. Assuming that's the case, I would say the first one does look nicer as well. If the loop gets very large, there's no need to look all around for where o is declared.
These have different semantics. Which is more meaningful?
Reusing an object for "performance reasons" is often wrong.
The question is what does the object "mean"? Why are you creating it? What does it represent? Objects must parallel real-world things. Things are created, undergo state changes, and report their states for reasons.
What are those reasons? How does your object model and reflect those reasons?
To get at the heart of this question... [Note that non-JVM implementations may do things differently if allowed by the JLS...]
First, keep in mind that the local variable "o" in the example is a pointer, not an actual object.
All local variables are allocated on the runtime stack in 4-byte slots. doubles and longs require two slots; other primitives and pointers take one. (Even booleans take a full slot)
A fixed runtime-stack size must be created for each method invocation. This size is determined by the maximum local variable "slots" needed at any given spot in the method.
In the above example, both versions of the code require the same maximum number of local variables for the method.
In both cases, the same bytecode will be generated, updating the same slot in the runtime stack.
In other words, no performance penalty at all.
HOWEVER, depending on the rest of the code in the method, the "declaration outside the loop" version might actually require a larger runtime stack allocation. For example, compare
for (...) { Object o = ... }
for (...) { Object o = ... }
with
Object o;
for (...) { /* loop 1 */ }
for (...) { Object x =...; }
In the first example, both loops require the same runtime stack allocation.
In the second example, because "o" lives past the loop, "x" requires an additional runtime stack slot.
Hope this helps,
-- Scott
In both cases, the type info for the object o is determined at compile time. In the second instance, o is seen as being global to the for loop; in the first instance, the clever Java compiler knows that o will have to be available for as long as the loop lasts and hence will optimise the code in such a way that there won't be any respecification of o's type on each iteration.
Hence, in both cases, specification of o's type will be done once, which means the only performance difference would be in the scope of o. Obviously, a narrower scope always enhances performance; therefore, to answer your question: no, there is no performance penalty for the first code snippet; actually, this snippet is more optimised than the second.
In the second snippet, o is being given unnecessary scope which, besides being a performance issue, can also be a security issue.
The first makes far more sense. It keeps the variable in the scope in which it is used, and prevents values assigned in one iteration being used in a later iteration, which is more defensive.
The former is sometimes said to be more efficient, but any reasonable compiler should be able to optimise it to be exactly the same as the latter.
As someone who maintains more code than writes code: version 1 is much preferred. Keeping scope as local as possible is essential for understanding, and it's also easier to refactor this sort of code.
As discussed above, I doubt this would make any difference in efficiency. In fact, I would argue that if the scope is more local, a compiler may be able to do more with it!
When using multiple threads (if you're doing 50+), I found this to be a very effective way of handling ghost thread problems:
Object one;
Object two;
Object three;
Object four;
Object five;
Object o;                 // needed for the loop below to compile
try {
    for (int i = 0; i < someValue; i++)
    {
        o = someList.get(i);
        o.doSomething();
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    one = null;
    two = null;
    three = null;
    four = null;
    five = null;
    System.gc();
}
The answer depends partly on what the constructor does and what happens with the object after the loop, since that determines to a large extent how the code is optimized.
If the object is large or complex, absolutely declare it outside the loop. Otherwise, the people telling you not to prematurely optimize are right.
I actually have in front of me code which looks like this:
for (int i = offset; i < offset + length; i++) {
    char append = (char) (data[i] & 0xFF);
    buffer.append(append);
}
...
for (int i = offset; i < offset + length; i++) {
    char append = (char) (data[i] & 0xFF);
    buffer.append(append);
}
...
for (int i = offset; i < offset + length; i++) {
    char append = (char) (data[i] & 0xFF);
    buffer.append(append);
}
So, relying on compiler abilities, I can assume there would be only one stack slot for i and one for append. Then everything would be fine, except for the duplicated code.
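One way to get rid of the duplication would be to pull the loop into a helper method (a sketch; the name appendBytes and the StringBuilder type are assumptions, since the buffer's type isn't shown):

private static void appendBytes(StringBuilder buffer, byte[] data, int offset, int length) {
    for (int i = offset; i < offset + length; i++) {
        buffer.append((char) (data[i] & 0xFF));   // same masking and append as the original loops
    }
}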
As a side note, Java applications are known to be slow. I never tried profiling in Java, but I guess the performance hit comes mostly from memory allocation management.