Supposing I need to iterate over something and set a variable in every cycle, like this:
for (int i = 0; i < list.size(); i++) {
    final Object cur = list.get(i);
}
This redefines the variable every time, so I'm concerned that it might pollute memory.
The alternate option is to define the variable once and then reassign it every iteration, like this:
Object cur;
for (int i = 0; i < list.size(); i++) {
    cur = list.get(i);
}
Would this be better in terms of memory? Would it make any difference? What if cur is a primitive type instead of an Object?
Don't tell me to use foreach; I need the counter, and this is just a simplified example.
From a performance point of view: if you need to improve this (i.e. you have measured that a hot spot is here), choose whichever version measures fastest.
That said, I would recommend declaring the variable inside the loop so that it cannot be used outside the loop by mistake.
By the way, you'll also ease debugging and readability by not having a variable hanging around whose use outside the loop is unclear.
It has no effect on performance; both are fine. They compile to similar bytecode, as pointed out in this blog post: http://livingtao.blogspot.in/2007/05/myth-defining-loop-variables-inside.html
I would recommend your alternate approach: declaring the variable, whether primitive or object, outside the loop. However, I notice that your first approach declares the object with the final keyword. That may hint to the compiler that it can optimize the code so that memory utilization at run time is as low as possible, but you shouldn't rely on it. Just use your alternate approach.
Seriously, use whichever you want; it will make no difference. The only difference is the scope of the variable. For example, it is much simpler to reason about when the variable becomes eligible for garbage collection if it is declared inside the loop, because it basically cannot "escape" the loop.
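The scoping difference is easy to see in a small sketch (the class and method names here are made up purely for illustration):

```java
import java.util.List;

public class ScopeDemo {
    // Declared inside the loop: 'cur' gets a fresh binding each iteration
    // and cannot be referenced past the loop body.
    static Object lastInside(List<String> list) {
        Object result = null;
        for (int i = 0; i < list.size(); i++) {
            final Object cur = list.get(i); // fresh 'final' binding per iteration
            result = cur;
        }
        // 'cur' is out of scope here -- referring to it would be a compile error
        return result;
    }

    // Declared outside: 'cur' keeps its last value after the loop ends,
    // so it can "escape" the loop and be used (or misused) afterwards.
    static Object lastOutside(List<String> list) {
        Object cur = null;
        for (int i = 0; i < list.size(); i++) {
            cur = list.get(i);
        }
        return cur; // still in scope, holds the last element (or null)
    }

    public static void main(String[] args) {
        List<String> names = List.of("a", "b", "c");
        System.out.println(lastInside(names));  // c
        System.out.println(lastOutside(names)); // c
    }
}
```

Both compile to essentially the same loop body; only the visibility of `cur` after the loop differs.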
The performance of a program depends on how much time your code takes to produce its output; statements/loops/conditions that take more time to execute will naturally consume more resources than code that produces the output faster.
As far as your code is concerned, the first loop, which declares a final Object on every iteration, takes more time than the second, which reassigns the same variable every time.
On my local Linux (Ubuntu) box, the first statement took 18 milliseconds while the second took only 15 milliseconds, so I guess the second statement is more appropriate.
Code I tried locally:
import java.util.ArrayList;
import java.util.List;

public class Solution8 {
    public static void main(String[] args) {
        List<String> lis = new ArrayList<String>();
        for (int i = 0; i <= Math.pow(10, 5); i++) {
            lis.add(i + "");
        }

        long startTime = System.currentTimeMillis();
        for (int i = 0; i <= Math.pow(10, 5); i++) {
            final Object cur1 = lis.get(i);
        }
        long endTime = System.currentTimeMillis();
        System.out.println("1st : " + (endTime - startTime)); // 18 milliseconds

        Object cur;
        long startTime2 = System.currentTimeMillis();
        for (int i = 0; i <= Math.pow(10, 5); i++) {
            cur = lis.get(i);
        }
        long endTime2 = System.currentTimeMillis();
        System.out.println("2nd : " + (endTime2 - startTime2)); // 15 milliseconds
    }
}
Hope this helps.
Related
I have two pieces of code; the first part is here:
int count = myArrayList.size();
for (int a = 0; a < count; a++) {
    // any calculation
}
The second part of the code is:
for (int a = 0; a < myArrayList.size(); a++) {
    // any calculation
}
In both pieces I am iterating over myArrayList (an ArrayList). In the first part I compute the size once and then iterate, so the size() method is called only once; in the second part, every iteration calls size() first and then compares the index against it. Isn't that a longer process? Yet I have seen many examples in many places that call size() on every iteration.
My questions:
Isn't it a long process? (talking about the second part)
Which is the best practice, the first or the second?
Which is the more efficient way to perform the iteration?
How does myArrayList.size() work, i.e. how does it calculate the size?
EDIT:
To test this I wrote programs and measured the time. The code is:
ArrayList<Integer> myArrayList = new ArrayList<>();
for (int a = 0; a < 1000; a++) {
    myArrayList.add(a);
}
long startTime = System.nanoTime();
for (int a = 0; a < myArrayList.size(); a++) {
    // any calculation
}
long lastTime = System.nanoTime();
long result = lastTime - startTime;
and the result is 34490 nanoseconds
on the other hand
ArrayList<Integer> myArrayList = new ArrayList<>();
for (int a = 0; a < 1000; a++) {
    myArrayList.add(a);
}
long startTime = System.nanoTime();
int count = myArrayList.size();
for (int a = 0; a < count; a++) {
}
long endTime = System.nanoTime();
long result = endTime - startTime;
and the result is 11394 nanoseconds
So calling the size() method on every iteration takes much more time than not calling it every iteration. Is this the right way to measure the time?
No. The call is not a "long running" process, the JVM can make function calls quickly.
Either is acceptable. Prefer the one that's easier to read. Adding a local reference with a meaningful name can make something easier to read.
You might prefer the for-each loop, but only for readability1. There is no appreciable efficiency difference between your options (or the for-each).
The ArrayList implementation keeps an internal count (in the OpenJDK implementation, and probably others, that is size) and manages the internal array that backs the List.
1See also The Developer Insight Series, Part 1: Write Dumb Code
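To illustrate the last point, here is a hypothetical, heavily stripped-down sketch (MiniArrayList is not the real implementation, just a model of the same idea) of how a list can answer size() in constant time simply by maintaining a field:

```java
import java.util.Arrays;

// Sketch of an ArrayList-like class: size() is O(1) because it only
// reads a field that is updated on every structural change.
public class MiniArrayList {
    private Object[] elements = new Object[10];
    private int size = 0; // maintained on every add/remove

    public void add(Object e) {
        if (size == elements.length) {
            // grow the backing array when full (real ArrayList grows by ~1.5x)
            elements = Arrays.copyOf(elements, elements.length * 2);
        }
        elements[size++] = e;
    }

    public Object get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("index " + index);
        }
        return elements[index];
    }

    public int size() {
        return size; // constant time: no counting happens here
    }

    public static void main(String[] args) {
        MiniArrayList list = new MiniArrayList();
        for (int i = 0; i < 25; i++) list.add(i);
        System.out.println(list.size()); // 25
    }
}
```

So calling size() in the loop condition costs one trivially cheap method call per iteration, which the JIT will typically inline anyway.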
In the following code:
long startingTime = System.nanoTime();
int max = (int) Math.pow(2, 19);
for (int i = 0; i < max; ) {
    i++;
}
long timePass = System.nanoTime() - startingTime;
System.out.println("Time pass " + timePass / 1000000F);
I am trying to measure how much time it takes to perform simple actions on my machine.
Every increase of the exponent up to 19 increases the time it takes to run this code, but when I went above 19 (up to the maximum exponent for an int, 31), I was amazed to discover that it has no effect on the time taken.
It always shows 5 milliseconds on my machine!
How can this be?
You have just witnessed HotSpot optimizing your entire loop to oblivion. It's smart. You need to do some real action inside the loop. I recommend introducing an int accumulator var and doing some bitwise operations on it, and finally printing the result to ensure it's needed after the loop.
On the HotSpot JVM, -XX:CompileThreshold=10000 by default. This means a loop which iterates 10K times can trigger the whole method to be optimised. In your case you are timing how long it takes to detect and compile (in the background) your method.
Use another System.nanoTime() inside the loop; no one can optimize that away:
long dummy = 0;
for (int i = 0; i < max; i++) {
    dummy += System.nanoTime();
}
Don't forget to do:
System.out.println(dummy);
after the loop; using the result ensures the loop is not optimized away.
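Putting the advice from these answers together, a rough self-contained sketch could look like the following (class and method names are mine, and the bitwise "work" is an arbitrary placeholder that the JIT cannot prove dead):

```java
// Timing a loop that HotSpot cannot eliminate: the accumulator feeds back
// into itself each iteration and is printed afterwards, so the loop body
// is observable work rather than dead code.
public class LoopTiming {
    static long timeLoop(int max) {
        long dummy = 0;
        long start = System.nanoTime();
        for (int i = 0; i < max; i++) {
            dummy += i ^ (dummy >>> 7); // cheap bitwise work the JIT must keep
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("checksum: " + dummy); // consume the result
        return elapsed;
    }

    public static void main(String[] args) {
        // With the dead-code elimination defeated, larger exponents
        // should now actually take longer.
        for (int exp = 16; exp <= 22; exp++) {
            long ns = timeLoop(1 << exp);
            System.out.printf("2^%d iterations: %.3f ms%n", exp, ns / 1e6);
        }
    }
}
```

Running the same size several times (as main does implicitly by stepping through exponents) also lets the JIT warm up before the larger measurements.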
I want to test the time of adding and getting items in a simple (raw) and a generic HashMap:
public void TestHashGeneric() {
    Map hashsimple = new HashMap();
    long startTime = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        hashsimple.put("key" + i, "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    }
    for (int i = 0; i < 100000; i++) {
        String ret = (String) hashsimple.get("key" + i);
    }
    long endTime = System.currentTimeMillis();
    System.out.println("Hash Time " + (endTime - startTime) + " millisec");

    Map<String, String> hm = new HashMap<String, String>();
    startTime = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        hm.put("key" + i, "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    }
    for (int i = 0; i < 100000; i++) {
        String ret = hm.get("key" + i);
    }
    endTime = System.currentTimeMillis();
    System.out.println("Hash generic Time " + (endTime - startTime) + " millisec");
}
The problem is that I get different times if I swap the two HashMap code sections!
If I put the generic loops (with the time print, of course) below the simple ones, I get a better time for the generic version, and if I put the simple loops below the generic ones, I get a better time for the simple version!
The same happens if I use separate methods for the two tests.
The JIT will compile and optimise your program while it is running, so the 2nd run will always be faster.
You should make the following modifications:
Run both tests untimed first, then re-run them timed so that you don't get affected by the JIT.
You should use System.nanoTime() as this is more accurate for timing (you should never get a diff of 0).
You should also test against some empty methods, as you are also timing the string concatenation operation in each loop.
Also note that in Java generic types are erased, so there should be no runtime difference at all.
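The erasure point can be checked directly: at runtime a raw HashMap and a HashMap&lt;String,String&gt; are the very same class, so there is nothing for the generic version to pay for. A minimal demonstration:

```java
import java.util.HashMap;
import java.util.Map;

public class ErasureDemo {
    public static void main(String[] args) {
        Map raw = new HashMap();                       // raw type
        Map<String, String> generic = new HashMap<>(); // generic type

        // After type erasure both are plain java.util.HashMap:
        // identical runtime class, identical bytecode for put/get.
        System.out.println(raw.getClass() == generic.getClass()); // true
        System.out.println(generic.getClass().getName());         // java.util.HashMap
    }
}
```

Any timing difference you observe between the two loops is therefore JIT warm-up and measurement noise, not generics.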
The Java Runtime is pretty sophisticated - it does some learning/optimisation while it's running. You might be able to get the test you want by "warming up" the JVM first. Try calling TestHashGeneric() twice and see what the second set of results gives you.
You've also got twice as much stuff in memory on the second run. There's all sorts of variables that can affect this test.
This is not the correct way to perform micro-benchmarks, as doing it properly is very involved (Why?). I suggest you use the Caliper framework for this.
When a single loop reaches the compile threshold (about 10K iterations), the whole method is compiled. This can make either the first loop appear faster (as it is optimised correctly while the second loop has no counter information yet) or the second loop appear faster (as it is compiled before it even starts).
The simplest way to fix this is to place each test in its own method, so they are compiled and optimised independently. (The order can still matter, but it is less important.) I would still suggest running the test a number of times to see how the results vary.
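A rough sketch of that structure, assuming the question's workload (all names here are hypothetical; each test lives in its own method and a few warm-up passes run before the measured ones):

```java
import java.util.HashMap;
import java.util.Map;

// Each timed loop sits in its own method so the JIT compiles and
// optimises them independently; warm-up runs precede the measured runs.
public class WarmupBench {
    static long timeSimplePut(int n) {
        Map map = new HashMap(); // raw map, as in the question
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) map.put("key" + i, "value");
        return System.nanoTime() - start;
    }

    static long timeGenericPut(int n) {
        Map<String, String> map = new HashMap<>();
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) map.put("key" + i, "value");
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // warm-up: run both several times, discarding the results,
        // so the JIT has compiled both methods before we measure
        for (int i = 0; i < 5; i++) {
            timeSimplePut(100_000);
            timeGenericPut(100_000);
        }
        // measured runs: order no longer dominates the outcome
        System.out.println("simple : " + timeSimplePut(100_000) + " ns");
        System.out.println("generic: " + timeGenericPut(100_000) + " ns");
    }
}
```

Even structured this way, results will still jitter between runs; for anything serious, a harness such as Caliper (or JMH) handles warm-up and statistics for you.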
I came across a simple Java program with two for loops. The question was whether these for loops will take the same time to execute, or whether the first will execute faster than the second.
Below is the program:
public static void main(String[] args) {
    Long t1 = System.currentTimeMillis();
    for (int i = 999; i > 0; i--) {
        System.out.println(i);
    }
    t1 = System.currentTimeMillis() - t1;

    Long t2 = System.currentTimeMillis();
    for (int j = 0; j < 999; j++) {
        System.out.println(j);
    }
    t2 = System.currentTimeMillis() - t2;

    System.out.println("for loop1 time : " + t1);
    System.out.println("for loop2 time : " + t2);
}
After executing this I found that the first for loop takes more time than the second. But after swapping their locations the result was the same: whichever for loop comes first always takes more time than the other. I was quite surprised by this result. Could anybody tell me how the above program works?
The time taken by either loop will be dominated by I/O (i.e. printing to screen), which is highly variable. I don't think you can learn much from your example.
The first loop will allocate 1000 Strings in memory, while the second loop, regardless of whether it counts forwards or backwards, can use the already pre-allocated objects.
Although, since both loops go through System.out.println, any allocation should be negligible in comparison.
Long (and the other primitive wrappers) has a cache (look here for the LongCache class) for values -128...127. It is populated on the first loop run.
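Whether or not the cache explains the timing here, the cache behaviour itself is easy to observe (note that only the -128..127 range is guaranteed to be cached by Long.valueOf; values outside it are typically, though not necessarily, distinct objects):

```java
public class LongCacheDemo {
    public static void main(String[] args) {
        Long a = 127L, b = 127L; // inside the guaranteed -128..127 cache range
        Long c = 128L, d = 128L; // outside the cache range

        System.out.println(a == b);      // true: both refer to the cached instance
        System.out.println(c == d);      // false on a default JVM: two boxed objects
        System.out.println(c.equals(d)); // true: the values are equal either way
    }
}
```

This is exactly why `==` on boxed wrappers is a classic trap and `equals` should be used for value comparison.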
I think if you are going to do a real benchmark, you should run the loops in different threads, use a higher value (not just 1000), avoid I/O (no printing during the timed run), and not run them sequentially in the same program but one at a time.
In my experience, executing the same code a few times can take different execution times.
And in my opinion, the two tests won't differ.
I'm trying to alter some code so it can work with multithreading. I stumbled upon a performance loss when putting a Runnable around some code.
For clarification: The original code, let's call it
//doSomething
got a Runnable around it like this:
Runnable r = new Runnable()
{
    public void run()
    {
        //doSomething
    }
};
Then I submit the Runnable to a CachedThreadPool ExecutorService. This is my first step towards multithreading this code: to see if the code runs as fast with one thread as the original code.
However, this is not the case. Where //doSomething executes in about 2 seconds, the Runnable executes in about 2.5 seconds. I need to mention that some other code, say, //doSomethingElse, inside a Runnable had no performance loss compared to the original //doSomethingElse.
My guess is that //doSomething has some operations that are not as fast when working in a Thread, but I don't know what it could be or what, in that aspect is the difference with //doSomethingElse.
Could it be the use of final int[]/float[] arrays that makes a Runnable so much slower? The //doSomethingElse code also used some finals, but //doSomething uses more. This is the only thing I could think of.
Unfortunately, the //doSomething code is quite long and out of context, but I will post it here anyway. For those who know the mean shift segmentation algorithm, this is a part of the code where the mean shift vector is calculated for each pixel. The for-loop
for(int i=0; i<L; i++)
runs through each pixel.
timer.start(); // this is where I start the timer

// Initialize mode table used for basin of attraction
char[] modeTable = new char[L]; // (L is a class property and is about 100,000)
Arrays.fill(modeTable, (char) 0);
int[] pointList = new int[L];

// Allocate memory for yk (current vector)
double[] yk = new double[lN]; // (lN is a final int, defined earlier)
// Allocate memory for Mh (mean shift vector)
double[] Mh = new double[lN];

int idxs2 = 0;
int idxd2 = 0;
for (int i = 0; i < L; i++) {
    // if a mode was already assigned to this data point
    // then skip this point, otherwise proceed to
    // find its mode by applying mean shift...
    if (modeTable[i] == 1) {
        continue;
    }

    // initialize point list...
    int pointCount = 0;

    // Assign window center (window centers are
    // initialized by createLattice to be the point data[i])
    idxs2 = i * lN;
    for (int j = 0; j < lN; j++)
        yk[j] = sdata[idxs2 + j]; // (sdata is an earlier defined final float[] of about 100,000 items)

    // Calculate the mean shift vector using the lattice
    /*****************************************************/
    // Initialize mean shift vector
    for (int j = 0; j < lN; j++) {
        Mh[j] = 0;
    }
    double wsuml = 0;
    double weight;

    // find bucket of yk
    int cBucket1 = (int) yk[0] + 1;
    int cBucket2 = (int) yk[1] + 1;
    int cBucket3 = (int) (yk[2] - sMinsFinal) + 1;
    int cBucket = cBucket1 + nBuck1 * (cBucket2 + nBuck2 * cBucket3);
    for (int j = 0; j < 27; j++) {
        idxd2 = buckets[cBucket + bucNeigh[j]]; // (buckets is a final int[] of about 75,000 items)
        // list parse, crt point is cHeadList
        while (idxd2 >= 0) {
            idxs2 = lN * idxd2;
            // determine if inside search window
            double el = sdata[idxs2 + 0] - yk[0];
            double diff = el * el;
            el = sdata[idxs2 + 1] - yk[1];
            diff += el * el;
            //...
            idxd2 = slist[idxd2]; // (slist is a final int[] of about 100,000 items)
        }
    }
    //...
}
timer.end(); // this is where I stop the timer.
There is more code, but the last while loop was where I first noticed the difference in performance.
Could anyone think of a reason why this code runs slower inside a Runnable than original?
Thanks.
Edit: The measured time is inside the code, so excluding startup of the thread.
All code always runs "inside a thread".
The slowdown you see is most likely caused by the overhead that multithreading adds. Try parallelizing different parts of your code - the tasks should neither be too large, nor too small. For example, you'd probably be better off running each of the outer loops as a separate task, rather than the innermost loops.
There is no single correct way to split up tasks, though, it all depends on how the data looks and what the target machine looks like (2 cores, 8 cores, 512 cores?).
Edit: What happens if you run the test repeatedly? E.g., if you do it like this:
Executor executor = ...;
for (int i = 0; i < 10; i++) {
    final int lap = i;
    Runnable r = new Runnable() {
        public void run() {
            long start = System.currentTimeMillis();
            //doSomething
            long duration = System.currentTimeMillis() - start;
            System.out.printf("Lap %d: %d ms%n", lap, duration);
        }
    };
    executor.execute(r);
}
Do you notice any difference in the results?
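The task-splitting advice can be sketched roughly as follows. All names here are hypothetical stand-ins: processRange plays the role of //doSomething's per-pixel work, and the outer loop is divided into medium-sized index chunks rather than one task per pixel:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedParallel {
    // Stand-in for the per-index work of the outer pixel loop.
    static long processRange(int from, int to) {
        long acc = 0;
        for (int i = from; i < to; i++) acc += (long) i * i;
        return acc;
    }

    // Split [0, total) into roughly 'chunks' contiguous ranges and run
    // each range as one task, so per-task overhead stays small relative
    // to the work done inside the task.
    static long processParallel(int total, int chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        int chunkSize = (total + chunks - 1) / chunks;
        List<Future<Long>> futures = new ArrayList<>();
        for (int start = 0; start < total; start += chunkSize) {
            final int from = start;
            final int to = Math.min(start + chunkSize, total);
            futures.add(pool.submit(() -> processRange(from, to)));
        }
        long sum = 0;
        try {
            for (Future<Long> f : futures) sum += f.get(); // combine partial results
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return sum;
    }

    public static void main(String[] args) {
        // Chunking must not change the answer, only how it is computed.
        long parallel = processParallel(100_000, 8);
        long serial = processRange(0, 100_000);
        System.out.println(parallel == serial); // true
    }
}
```

The right chunk count depends on the machine; a common starting point is a small multiple of the available cores.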
I personally do not see any reason for this. Any program has at least one thread, and all threads are equal: they are created by default with medium priority (5). So the code should show the same performance in both the main application thread and any other thread that you open.
Are you sure you are measuring the time of "do something" and not the overall time that your program runs? I believe that you are measuring the time of operation together with the time that is required to create and start the thread.
When you create a new thread you always have an overhead. If you have a small piece of code, you may experience performance loss.
Once you have more code (bigger tasks) you may get a performance improvement from your parallelization (the code on the thread will not necessarily run faster, but you are doing two things at once).
Just a detail: deciding how small a task can be while parallelizing it remains worthwhile is a well-known topic in parallel computation :)
You haven't explained exactly how you are measuring the time taken. Clearly there are thread start-up costs but I infer that you are using some mechanism that ensures that these costs don't distort your picture.
Generally speaking when measuring performance it's easy to get mislead when measuring small pieces of work. I would be looking to get a run of at least 1,000 times longer, putting the whole thing in a loop or whatever.
Here the one difference between the "No Thread" and "Threaded" cases is that you have gone from having one thread (as has been pointed out, you always have a thread) to two threads, so now the JVM has to mediate between two threads. For this kind of work I can't see why that should make a difference, but it is a difference.
I would want to be using a good profiling tool to really dig into this.