Why second loop is faster than first - java

Why is the second loop faster than the first here?

public class Test2 {
    public static void main(String[] s) {
        long start, end;
        int[] a = new int[2500000];
        int length = a.length;

        start = System.nanoTime();
        for (int i = 0; i < length; i++) {
            a[i] += i;
        }
        end = System.nanoTime();
        System.out.println(end - start + " nano with i < a.length");

        int[] b = new int[2500000];
        start = System.nanoTime();
        for (int i = b.length - 1; i >= 0; i--) {
            b[i] += i;
        }
        end = System.nanoTime();
        System.out.println(end - start + " nano with i >= 0");
    }
}
Output is:

6776766 nano with i < a.length
5525033 nano with i >= 0
Update: I have updated the question according to the suggestion, but I still see the difference in time; the first loop takes more time than the second.

Most likely it's because you're fetching the value of a.length on each iteration in the first case, as opposed to once in the second case.
Try doing something like

    int len = a.length;

and using len as the termination bound for the loop.
This could potentially reduce the time of the first loop.

If I modify your first for loop slightly, you'll get a similar time:

int alength = a.length; // pre-compute a.length
start = System.currentTimeMillis();
for (int i = 0; i < alength; i++) {
    a[i] += i;
}

$ java Test
8 millis with i<a.length
6 millis with i>=0

The main reason for the difference in times is:
"... Never use System.currentTimeMillis() unless you are OK with + or - 15 ms accuracy, which is typical on most OS + JVM combinations. Use System.nanoTime() instead." – Scott Carey, found here
Update:
I believe someone mentioned in the comments section of your question that you should also warm up the kernel you're testing, before timing micro-benchmarks.
Rule 1: Always include a warmup phase which runs your test kernel all the way through, enough to trigger all initializations and compilations before timing phase(s). (Fewer iterations is OK on the warmup phase. The rule of thumb is several tens of thousands of inner loop iterations.)
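Taken together, the advice above (hoist the bound, use nanoTime, warm up before timing) can be sketched as a small harness. The class and method names here are my own, not from the question:

```java
public class LoopBench {
    static final int SIZE = 2_500_000;

    // Forward loop with the bound hoisted into a local variable.
    public static long forward(int[] a) {
        int len = a.length;
        for (int i = 0; i < len; i++) {
            a[i] += i;
        }
        return a[len - 1];
    }

    // Backward loop; the bound is evaluated once in the init expression.
    public static long backward(int[] b) {
        for (int i = b.length - 1; i >= 0; i--) {
            b[i] += i;
        }
        return b[b.length - 1];
    }

    public static void main(String[] args) {
        // Warmup: run both kernels enough times to trigger JIT compilation
        // before anything is timed.
        for (int w = 0; w < 20; w++) {
            forward(new int[SIZE]);
            backward(new int[SIZE]);
        }
        long t0 = System.nanoTime();
        forward(new int[SIZE]);
        long t1 = System.nanoTime();
        backward(new int[SIZE]);
        long t2 = System.nanoTime();
        System.out.println("forward:  " + (t1 - t0) + " ns");
        System.out.println("backward: " + (t2 - t1) + " ns");
    }
}
```

After warmup, the two loops typically time much closer together than in the original run.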


I wrote two pieces of code, almost exactly the same, but one runs much faster than the other (Java)

I ran this segment of code: (outer loop runs 100 times, inner loop runs 1 billion times.)
long l = 0;
for (int i = 0; i < 100; i++)
    for (int j = 0; j < 1000000000; j++)
        l++;
System.out.println(l);
This took around 11-12 seconds when I ran it.
Then I ran this segment of code:
long l = 0;
int i = 0, j = 0;
for (; i < 100; i++)
    for (; j < 1000000000; j++)
        l++;
System.out.println(l);
and this took about 100 ms (0.1 seconds) whenever I ran it.
Does anyone have any idea why there's a big difference? My theory is that for every value of 'i', the inner for loop has to initialize j again, which gives it more operations to do, so it makes sense that it takes longer. However, the difference is huge (by about 100 times), and with other similar tests, the same thing doesn't happen.
If you want to see it yourself, this is how I timed it:
class Main {
    static long start, end;

    public static void main(String[] args) {
        start();
        long l = 0;
        int i = 0, j = 0;
        for (; i < 100; i++)
            for (; j < 1000000000; j++)
                l++;
        System.out.println(l);
        end();
        print();
    }

    public static void start() {
        start = System.currentTimeMillis();
    }

    public static void end() {
        end = System.currentTimeMillis();
    }

    public static void print() {
        System.out.println((end - start) + " ms.");
    }
}
The second version only iterates through j during the first iteration of i. At that point j exceeds the limit of the inner for loop and the inner body never runs again, because j is not reset on the next iteration of i.
In your first example, the inner loop runs from 0 to 1000000000 for each value of i, because j is initialized to 0 for each value of i.
In your second example, the inner loop runs from 0 to 1000000000 only for i = 0, because j is initialized to 0 only before the first iteration of the outer loop (i.e. i = 0).
The real reason is that in the second case the loop does not run the same way as in the first.
In the first version, j starts at 0 on every pass of the outer loop.
But in the second version, j reaches 1 billion during the first outer iteration and stays there, which means the inner loop condition fails every time afterwards. The inner loop never runs more than once.
The two versions are not "almost exactly the same". In fact, they are completely different.
The clue is that they print different values for l:
/tmp$ java Main1.java
1000000000
12 ms.
/tmp$ java Main2.java
100000000000
857 ms.
Clearly one version is doing 100 times more iterations than the other. @Oli's answer explains why.
My theory is that for every value of i, the inner for loop has to initialize j again, which gives it more operations to do, so it makes sense that it takes longer.
Nope. That would not explain a 100 times performance difference. It is not plausible that 100 initializations of an int variable would take (on my machine) 800+ milliseconds.
The real explanation is that you are comparing computations that are NOT comparable.
j is set to 0 outside its for loop. It is never reset back to 0 on i's next iteration.
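The non-reset behavior is easy to verify with small bounds (numbers shrunk here so it runs instantly; the method names are mine):

```java
public class LoopReset {
    // j is re-initialized for every i: the inner body runs outer * inner times.
    public static long withReset(int outer, int inner) {
        long l = 0;
        for (int i = 0; i < outer; i++)
            for (int j = 0; j < inner; j++)
                l++;
        return l;
    }

    // j is initialized once: the inner body only runs during the first
    // pass of the outer loop, then the inner condition fails forever.
    public static long withoutReset(int outer, int inner) {
        long l = 0;
        int i = 0, j = 0;
        for (; i < outer; i++)
            for (; j < inner; j++)
                l++;
        return l;
    }

    public static void main(String[] args) {
        System.out.println(withReset(100, 1000));    // 100 * 1000 increments
        System.out.println(withoutReset(100, 1000)); // only 1000 increments
    }
}
```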

Java For-Loop - Termination Expression speed

In my java program I have a for-loop looking roughly like this:
ArrayList<MyObject> myList = new ArrayList<MyObject>();
putThingsInList(myList);
for (int i = 0; i < myList.size(); i++) {
    doWhatsoever();
}
Since the size of the list isn't changing, I tried to accelerate the loop by replacing the termination expression of the loop with a variable.
My idea was: Since the size of an ArrayList can possibly change while iterating it, the termination expression has to be executed each loop cycle. If I know (but the JVM doesn't), that its size will stay constant, the usage of a variable might speed things up.
ArrayList<MyObject> myList = new ArrayList<MyObject>();
putThingsInList(myList);
int myListSize = myList.size();
for (int i = 0; i < myListSize; i++) {
    doWhatsoever();
}
However, this solution is slower, way slower; making myListSize final doesn't change that either. I could understand it if the speed didn't change at all, because maybe the JVM just found out that the size doesn't change and optimized the code. But why is it slower?
However, I rewrote the program; now the size of the list changes with each cycle: if i%2==0, I remove the last element of the list, else I add one element to the end of the list. So now the myList.size() operation has to be called within each iteration, I guessed.
I don't know if that's actually correct, but still the myList.size() termination expression is faster than using just a variable that remains constant all the time as termination expression...
Any ideas why?
Edit (I'm new here, I hope this is the way, how to do it)
My whole test program looks like this:
ArrayList<Integer> myList = new ArrayList<Integer>();
for (int i = 0; i < 1000000; i++) {
    myList.add(i);
}
final long myListSize = myList.size();
long sum = 0;
long timeStarted = System.nanoTime();
for (int i = 0; i < 500; i++) {
    for (int j = 0; j < myList.size(); j++) {
        sum += j;
        if (j % 2 == 0) {
            myList.add(999999);
        } else {
            myList.remove(999999);
        }
    }
}
long timeNeeded = (System.nanoTime() - timeStarted) / 1000000;
System.out.println(timeNeeded);
System.out.println(sum);
Performance of the posted code (average of 10 executions):
4102ms for myList.size()
4230ms for myListSize
Without the if-then-else statements (so with constant myList size)
172ms for myList.size()
329ms for myListSize
So the speed difference between both versions is still there. In the version with the if-then-else parts the percentage differences are of course smaller, because a lot of the time is spent in the add and remove operations on the list.
The problem is with this line:
final long myListSize = myList.size();
Change this to an int and lo and behold, running times will be identical. Why? Because comparing an int to a long for every iteration requires a widening conversion of the int, and that takes time.
Note that the difference also largely (but probably not completely) disappears when the code is compiled and optimised, as can be seen from the following JMH benchmark results:
# JMH 1.11.2 (released 7 days ago)
# VM version: JDK 1.8.0_51, VM 25.51-b03
# VM options: <none>
# Warmup: 20 iterations, 1 s each
# Measurement: 20 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
...
# Run complete. Total time: 00:02:01
Benchmark Mode Cnt Score Error Units
MyBenchmark.testIntLocalVariable thrpt 20 81892.018 ± 734.621 ops/s
MyBenchmark.testLongLocalVariable thrpt 20 74180.774 ± 1289.338 ops/s
MyBenchmark.testMethodInvocation thrpt 20 82732.317 ± 749.430 ops/s
And here's the benchmark code for it:
public class MyBenchmark {
    @State(Scope.Benchmark)
    public static class Values {
        private final ArrayList<Double> values;

        public Values() {
            this.values = new ArrayList<Double>(10000);
            for (int i = 0; i < 10000; i++) {
                this.values.add(Math.random());
            }
        }
    }

    @Benchmark
    public double testMethodInvocation(Values v) {
        double sum = 0;
        for (int i = 0; i < v.values.size(); i++) {
            sum += v.values.get(i);
        }
        return sum;
    }

    @Benchmark
    public double testIntLocalVariable(Values v) {
        double sum = 0;
        int max = v.values.size();
        for (int i = 0; i < max; i++) {
            sum += v.values.get(i);
        }
        return sum;
    }

    @Benchmark
    public double testLongLocalVariable(Values v) {
        double sum = 0;
        long max = v.values.size();
        for (int i = 0; i < max; i++) {
            sum += v.values.get(i);
        }
        return sum;
    }
}
P.s.:
My idea was: Since the size of an ArrayList can possibly change while
iterating it, the termination expression has to be executed each loop
cycle. If I know (but the JVM doesn't), that its size will stay
constant, the usage of a variable might speed things up.
Your assumption is wrong for two reasons: first of all, the VM can easily determine via escape analysis that the list stored in myList doesn't escape the method (so it's free to allocate it on the stack for example).
More importantly, even if the list was shared between multiple threads, and therefore could potentially be modified from the outside while we run our loop, in the absence of any synchronization it is perfectly valid for the thread running our loop to pretend those changes haven't happened at all.
As always, things are not always what they seem...
First things first, ArrayList.size() doesn't get recomputed on every invocation, only when the proper mutator is invoked. So calling it frequently is quite cheap.
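That claim is easy to picture with a simplified sketch (this is my own illustration, not the real ArrayList source): the size is a plain field that the mutators maintain, so size() is just a field read.

```java
import java.util.Arrays;

// Simplified sketch of how a list can track its size: a plain field,
// updated only by mutators, so size() never has to count elements.
class SketchList {
    private Object[] elements = new Object[10];
    private int size = 0; // maintained incrementally by add()

    public void add(Object o) {
        if (size == elements.length) {
            elements = Arrays.copyOf(elements, size * 2); // grow when full
        }
        elements[size++] = o;
    }

    public int size() {
        return size; // O(1): no counting happens here
    }
}
```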
Which of these loops is the fastest?
// array1 and array2 are the same size.
int sum = 0;
for (int i = 0; i < array1.length; i++) {
    sum += array1[i];
}
for (int i = 0; i < array2.length; i++) {
    sum += array2[i];
}

or

int sum = 0;
for (int i = 0; i < array1.length; i++) {
    sum += array1[i];
    sum += array2[i];
}
Instinctively, you would say that the second loop is the fastest since it doesn't iterate twice. However, some optimizations actually cause the first loop to be the fastest depending, for instance, on memory walking strides that cause a lot of memory cache misses.
Side-note: this compiler optimization technique is called loop jamming.
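As a sanity check (the method names are mine), the separate and jammed forms compute the same sum; only the memory access pattern differs:

```java
public class LoopJam {
    // Two separate passes, one array at a time.
    public static int separate(int[] array1, int[] array2) {
        int sum = 0;
        for (int i = 0; i < array1.length; i++) sum += array1[i];
        for (int i = 0; i < array2.length; i++) sum += array2[i];
        return sum;
    }

    // One fused ("jammed") pass that touches both arrays per iteration.
    public static int jammed(int[] array1, int[] array2) {
        int sum = 0;
        for (int i = 0; i < array1.length; i++) {
            sum += array1[i];
            sum += array2[i];
        }
        return sum;
    }
}
```

Which one wins on time depends on cache behavior, but the results are identical.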
This loop:

int sum = 0;
for (int i = 0; i < 1000000; i++) {
    sum += list.get(i);
}

is not the same as:

// Assume that list.size() == 1000000
int sum = 0;
for (int i = 0; i < list.size(); i++) {
    sum += list.get(i);
}

In the first case, the compiler knows absolutely that it must iterate a million times, and it puts the constant in the constant pool, so certain optimizations can take place.
A closer equivalent would be:

int sum = 0;
final int listSize = list.size();
for (int i = 0; i < listSize; i++) {
    sum += list.get(i);
}

but only after the JVM has figured out what the value of listSize is. The final keyword gives the compiler/run-time certain guarantees that can be exploited. If the loop runs long enough, JIT-compiling will kick in, making execution faster.
Because this sparked interest in me I decided to do a quick test:
public class fortest {
    public static void main(String[] args) {
        long mean = 0;
        for (int cnt = 0; cnt < 100000; cnt++) {
            if (mean > 0)
                mean /= 2;
            ArrayList<String> myList = new ArrayList<String>();
            putThingsInList(myList);
            long start = System.nanoTime();
            int myListSize = myList.size();
            for (int i = 0; i < myListSize; i++) doWhatsoever(i, myList);
            long end = System.nanoTime();
            mean += end - start;
        }
        System.out.println("Mean exec: " + mean / 2);
    }

    private static void doWhatsoever(int i, ArrayList<String> myList) {
        if (i % 2 == 0)
            myList.set(i, "0");
    }

    private static void putThingsInList(ArrayList<String> myList) {
        for (int i = 0; i < 1000; i++) myList.add(String.valueOf(i));
    }
}
I do not see the kind of behavior you are seeing.
2500ns mean execution time over 100000 iterations with myList.size()
1800ns mean execution time over 100000 iterations with myListSize
I therefore suspect that the code executed inside your functions is at fault. In the above example you can sometimes see faster execution if you only fill the ArrayList once, because doWhatsoever() will only do something on the first loop. I suspect the rest is being optimized away, which significantly drops execution time. You might have a similar case, but without seeing your code it is close to impossible to figure that out.
There is another way to speed up the code, using a for-each loop:

ArrayList<MyObject> myList = new ArrayList<MyObject>();
putThingsInList(myList);
for (MyObject ob : myList) {
    doWhatsoever();
}

But I agree with @showp1984 that some other part is slowing the code.

Does Java JIT compiler sacrifice performance to favor Collections?

Consider the two following code samples. All benchmarking is done outside of the container being used to calculate an average of the sampled execution times. On my machine, running Windows 7 and JDK 1.6, I am seeing the average execution time in example 2 close to 1,000 times slower than that of example 1. The only explanation I can surmise is that the compiler is optimizing some code used by LinkedList to the detriment of everything else. Can someone help me understand this?
Example 1: Using Arrays
public class TimingTest
{
    static long startNanos, endNanos;
    static long[] samples = new long[1000];

    public static void main(String[] args)
    {
        for (int a = 0; a < 100; a++)
        {
            for (int numRuns = 0; numRuns < 1000; numRuns++)
            {
                startNanos = System.nanoTime();
                long sum = 0;
                for (long i = 1; i <= 500000; i++)
                {
                    sum += i % 13;
                }
                endNanos = System.nanoTime() - startNanos;
                samples[numRuns] = endNanos;
            }
            long avgPrim = 0L;
            for (long sample : samples)
            {
                avgPrim += sample;
            }
            System.out.println("Avg: " + (avgPrim / samples.length));
        }
    }
}
Example 2: Using a LinkedList
public class TimingTest
{
    static long startNanos, endNanos;
    static List<Long> samples = new LinkedList<Long>();

    public static void main(String[] args)
    {
        for (int a = 0; a < 100; a++)
        {
            for (int numRuns = 0; numRuns < 1000; numRuns++)
            {
                startNanos = System.nanoTime();
                long sum = 0;
                int index = 0;
                for (long i = 1; i <= 500000; i++)
                {
                    sum += i % 13;
                }
                endNanos = System.nanoTime() - startNanos;
                samples.add(endNanos);
            }
            long avgPrim = 0L;
            for (long sample : samples)
            {
                avgPrim += sample;
            }
            System.out.println("Avg: " + (avgPrim / samples.size()));
        }
    }
}
Something is very wrong here: when I run the array version, I get an average execution time of 20000 nanoseconds. It is downright impossible for my 2 GHz CPU to execute 500000 loop iterations in that time, as that would imply the average loop iteration takes 20000/500000 = 0.04 ns, or 0.08 CPU cycles ...
The main reason is a bug in your timing logic: in the array version, you do

    int index = 0;

for every timing, hence

    samples[index++] = endNanos;

will always assign to the first array element, leaving all others at their default value of 0. Hence when you take the average of the array, you get 1/1000 of the last sample, not the average of all samples.
Indeed, if you move the declaration of index outside the loop, no significant difference is reported between the two variants.
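The bug can be seen in isolation with made-up sample values (the helper names here are mine):

```java
public class IndexBug {
    // Buggy: index is re-declared (and reset to 0) on every pass,
    // so only samples[0] is ever written.
    public static long[] buggy(int runs) {
        long[] samples = new long[runs];
        for (int numRuns = 0; numRuns < runs; numRuns++) {
            int index = 0;                  // reset each iteration
            samples[index++] = numRuns + 1; // always writes samples[0]
        }
        return samples;
    }

    // Fixed: index is declared once, so every slot receives a value.
    public static long[] fixed(int runs) {
        long[] samples = new long[runs];
        int index = 0;
        for (int numRuns = 0; numRuns < runs; numRuns++) {
            samples[index++] = numRuns + 1;
        }
        return samples;
    }
}
```

Averaging the buggy array therefore yields (last sample) / runs instead of the true mean.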
Here's a real run of your code (renamed classes for clarity, and cut the outside for loop in each to a < 1 for time's sake):
$ for f in *.class
do
    class=$(echo $f | sed 's`\(.*\)\.class`\1`')
    echo Running $class
    java $class
done
Running OriginalArrayTimingTest
Avg: 18528
Running UpdatedArrayTimingTest
Avg: 41111273
Running LinkedListTimingTest
Avg: 41340483
Obviously, your original concern was caused by the typo @meriton pointed out, which you corrected in your question. We can see that, for your test case, both an array and a LinkedList behave almost identically. Generally speaking, insertions on a LinkedList are very fast. Since you updated your question with meriton's changes but didn't update your claim that the former is dramatically faster than the latter, it's no longer clear what you're asking; however, I hope you can see now that in this case both data structures behave reasonably similarly.

Measuring the sort times in Java

I wrote a program to test and verify the running time of "insertion sort" which should be O(n^2). The output doesn't look right to me and it doesn't seem to vary much between different runs. The other odd thing is that the second time through is always the smallest. I expect there to be greater variance every time I run the program but the run times don't seem to fluctuate as much as I would expect. I'm just wondering if there are some kind of optimizations or something being done by the JVM or compiler. I have similar code in C# and it seems to vary more and the output is as expected. I am not expecting the running times to square every time but I am expecting them to increase more than they are and I certainly expect a much greater variance at the last iteration.
Sample Output (it doesn't vary enough for me to include multiple outputs):
47
20 (this one is ALWAYS the lowest... it makes no sense!)
44
90
133
175
233
298
379
490
public class SortBench {
    public static void main(String args[]) {
        Random rand = new Random(System.currentTimeMillis());
        for (int k = 100; k <= 1000; k += 100) {
            // Keep track of time
            long time = 0;
            // Create new arrays each time
            int[] a = new int[k];
            int[] b = new int[k];
            int[] c = new int[k];
            int[] d = new int[k];
            int[] e = new int[k];
            // Insert random integers into the arrays
            for (int i = 0; i < a.length; i++) {
                int range = Integer.MAX_VALUE;
                a[i] = rand.nextInt(range);
                b[i] = rand.nextInt(range);
                c[i] = rand.nextInt(range);
                d[i] = rand.nextInt(range);
                e[i] = rand.nextInt(range);
            }
            long start = System.nanoTime();
            insertionSort(a);
            long end = System.nanoTime();
            time += end - start;
            start = System.nanoTime();
            insertionSort(b);
            end = System.nanoTime();
            time += end - start;
            start = System.nanoTime();
            insertionSort(c);
            end = System.nanoTime();
            time += end - start;
            start = System.nanoTime();
            insertionSort(d);
            end = System.nanoTime();
            time += end - start;
            start = System.nanoTime();
            insertionSort(e);
            end = System.nanoTime();
            time += end - start;
            System.out.println((time / 5) / 1000);
        }
    }

    static void insertionSort(int[] a) {
        int key;
        int i;
        for (int j = 1; j < a.length; j++) {
            key = a[j];
            i = j - 1;
            while (i >= 0 && a[i] > key) {
                a[i + 1] = a[i];
                i = i - 1;
            }
            a[i + 1] = key;
        }
    }
}
On your first iteration, you're also measuring the JIT time (or at least some JIT time - HotSpot will progressively optimize further). Run it several times first, and then start measuring. I suspect you're seeing the benefits of HotSpot as time goes on - the earlier tests are slowed down by both the time taken to JIT and the fact that it's not running as optimal code. (Compare this with .NET, where the JIT only runs once - there's no progressive optimization.)
If you can, allocate all the memory first too - and make sure nothing is garbage collected until the end. Otherwise you're including allocation and GC in your timing.
You should also consider taking more samples, with n going up another order of magnitude, to get a better idea of how the time increases. (I haven't looked at what you've done carefully enough to work out whether it really should be O(n^2).)
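The pre-allocation advice can be sketched like this (the class name and the use of Arrays.sort as a stand-in for insertionSort are my own): all inputs are created and filled before the timed region, so allocation and GC stay out of the measurement.

```java
import java.util.Random;

public class PreAlloc {
    // Build every input array up front, outside any timed region.
    public static int[][] makeInputs(int count, int size, long seed) {
        Random rand = new Random(seed); // fixed seed: reproducible inputs
        int[][] inputs = new int[count][size];
        for (int[] arr : inputs)
            for (int i = 0; i < arr.length; i++)
                arr[i] = rand.nextInt();
        return inputs;
    }

    // Time only the sorting; nothing is allocated inside the loop.
    public static long timeSorted(int[][] inputs) {
        long total = 0;
        for (int[] arr : inputs) {
            long start = System.nanoTime();
            java.util.Arrays.sort(arr); // stand-in for insertionSort
            total += System.nanoTime() - start;
        }
        return total;
    }
}
```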
Warm up the JVM's JIT optimization of your function, memory allocators, TLB, CPU frequency, and so on before the timed region.
Add some untimed calls right after seeding the RNG, before your existing timing loop.
Random rand = new Random(System.currentTimeMillis());

// warmup
for (int k = 100; k <= 10000; k += 100) {
    int[] w = new int[1000];
    for (int i = 0; i < w.length; i++) {
        int range = Integer.MAX_VALUE;
        w[i] = rand.nextInt(range);
        insertionSort(w);
    }
}
Results with warming:
4
16
27
47
68
97
126
167
201
250
Results without warming:
62
244
514
206
42
59
80
98
122
148

Java for loop vs. while loop. Performance difference?

Assume I have the following code; there are three nested for loops doing some work. Would it run faster if I changed the outermost for loop to a while loop? Thanks.
int length = 200;
int test = 0;
int[] input = new int[2 * length]; // sized so input[j + k] stays in bounds
for (int i = 1; i <= length; i++) {
    for (int j = 0; j <= length - i; j++) {
        for (int k = 0; k < length - 1; k++) {
            test = test + input[j + k];
        }
    }
}
No, changing the type of loop wouldn't matter.
The only thing that can make it faster would be to have less nesting of loops, and looping over less values.
The only difference between a for loop and a while loop is the syntax for defining them. There is no performance difference at all.
int i = 0;
while (i < 20) {
    // do stuff
    i++;
}
Is the same as:
for (int i = 0; i < 20; i++) {
    // do stuff
}

(Actually the for loop is a little better, because its i goes out of scope after the loop, while the while loop's i sticks around.)
A for loop is just a syntactically prettier way of looping.
This kind of micro-optimization is pointless.
A while-loop won’t be faster.
The loop structure is not your bottleneck.
Optimize your algorithm first.
Better yet, don’t optimize first. Only optimize after you have found out that you really have a bottleneck in your algorithm that is not I/O-dependant.
Someone suggested testing while vs. for loops, so I created some code to test whether while loops or for loops were faster; on average, over 100,000 trials, the while loop was faster ~95% of the time. I may have coded it incorrectly (I'm quite new to coding), and when I only ran 10,000 trials the two ended up quite even in run duration.
Edit: I didn't shift all the array values when I went to test more trials. Fixed it so that it's easier to change how many trials you run.
import java.util.Arrays;

class WhilevsForLoops {
    public static void main(String[] args) {
        final int trials = 100; // change number of trials
        final int trialsrun = trials - 1;
        boolean[] fscount = new boolean[trials]; // faster / slower boolean
        int p = 0; // while counter variable for for/while timers
        while (p <= trialsrun) {
            long[] forloop = new long[trials];
            long[] whileloop = new long[trials];
            long systimeaverage;
            long systimenow = System.nanoTime();
            long systimethen = System.nanoTime();
            System.out.println("For loop time array : ");
            for (int counter = 0; counter <= trialsrun; counter++) {
                systimenow = System.nanoTime();
                System.out.print(" #" + counter + " #");
                systimethen = System.nanoTime();
                systimeaverage = (systimethen - systimenow);
                System.out.print(systimeaverage + "ns |");
                forloop[counter] = systimeaverage;
            }
            int count = 0;
            System.out.println(" ");
            System.out.println("While loop time array: ");
            while (count <= trialsrun) {
                systimenow = System.nanoTime();
                System.out.print(" #" + count + " #");
                systimethen = System.nanoTime();
                systimeaverage = (systimethen - systimenow);
                System.out.print(systimeaverage + "ns |");
                whileloop[count] = systimeaverage;
                count++;
            }
            System.out.println("===============================================");
            int sum = 0;
            for (int i = 0; i <= trialsrun; i++) {
                sum += forloop[i];
            }
            System.out.println("for loop time average: " + (sum / trials) + "ns");
            int sum1 = 0;
            for (int i = 0; i <= trialsrun; i++) {
                sum1 += whileloop[i];
            }
            System.out.println("while loop time average: " + (sum1 / trials) + "ns");
            int longer = 0;
            int shorter = 0;
            int gap = 0;
            sum = sum / trials;
            sum1 = sum1 / trials;
            if (sum1 > sum) {
                longer = sum1;
                shorter = sum;
            } else {
                longer = sum;
                shorter = sum1;
            }
            String longa;
            if (sum1 > sum) {
                longa = "~while loop~";
            } else {
                longa = "~for loop~";
            }
            gap = longer - shorter;
            System.out.println("The " + longa + " is the slower loop by: " + gap + "ns");
            if (sum1 > sum) {
                fscount[p] = true;
            } else {
                fscount[p] = false;
            }
            p++;
        }
        int forloopfc = 0;
        int whileloopfc = 0;
        System.out.println(Arrays.toString(fscount));
        for (int k = 0; k <= trialsrun; k++) {
            if (fscount[k] == true) {
                forloopfc++;
            } else {
                whileloopfc++;
            }
        }
        System.out.println("--------------------------------------------------");
        System.out.println("The FOR loop was faster: " + forloopfc + " times.");
        System.out.println("The WHILE loop was faster: " + whileloopfc + " times.");
    }
}
You can't optimize it by changing the for to a while.
You can speed it up very, very slightly by changing the line

    for (int k = 0; k < length - 1; k++) {

to

    for (int k = 0; k < lengthMinusOne; k++) {

where lengthMinusOne is calculated beforehand.
Otherwise this subtraction is evaluated almost (200 x 201 / 2) x (200 - 1) times, and that is still a very small amount of work for a computer :)
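A quick check that the hoisted version computes the same thing (the method names are mine; the bounds are shrunk so it runs instantly):

```java
public class HoistBound {
    // Original shape: length - 1 is re-evaluated on every inner-loop test.
    public static int original(int[] input, int length) {
        int test = 0;
        for (int i = 1; i <= length; i++)
            for (int j = 0; j <= length - i; j++)
                for (int k = 0; k < length - 1; k++)
                    test = test + input[j + k];
        return test;
    }

    // Hoisted: the subtraction happens once, before the loops.
    public static int hoisted(int[] input, int length) {
        int test = 0;
        int lengthMinusOne = length - 1;
        for (int i = 1; i <= length; i++)
            for (int j = 0; j <= length - i; j++)
                for (int k = 0; k < lengthMinusOne; k++)
                    test = test + input[j + k];
        return test;
    }
}
```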
Here's a helpful link to an article on the matter.
According to it, for and while were almost twice as fast as the iterator, and the same as each other.
BUT that article was written in 2009, so I tried it on my machine, and here are the results:
using Java 1.7: the iterator was about 20%-30% faster than for and while (which were still the same as each other)
using Java 1.6: the iterator was about 5% faster than for and while (which were still the same as each other)
So I guess the best thing is to just time it on your own version and machine and conclude from that.
Even if the hypothesis that the while loop is faster than the for loop were true (and it's not), the loops you'd have to change/optimize wouldn't be the outer ones but the inner ones, because those are executed more times.
The difference between for and while is semantic:
In a while loop, you loop as long as the condition is true. The condition can vary a lot, because you might modify, inside the loop, the variables used in evaluating the while condition.
Usually, in a for loop, you loop N times. This N can be variable, but doesn't change until the end of your N loops, as developers usually don't modify the variables evaluated in the loop condition.
It is a way to help others understand your code. You are not obliged not to modify for loop variables, but it is a common (and good) practice.
No, you're still looping the exact same number of times. Wouldn't matter at all.
Look at your algorithm! Do you know beforehand which values from your array are added more than one time?
If you know that you could reduce the number of loops and that would result in better performance.
There would be no performance difference. Try it out!
The JVM, and further down, the compiler, would turn both loops into something like:

label:
    ; code inside your loop
    LOOP label
It would only matter if you are using multi-thread or multiple processor programming. Then it would also depends on how you assign the loops to the various processors/threads.
No, it's not going to make a big difference. The only thing is that if you're nesting loops, you might want to switch them up for organizational purposes; for example, you may want to use a while loop on the outside and for statements inside it. This wouldn't affect performance, but it would make your code look cleaner/more organized.
You can measure it yourself. (The original while version never reset j and k, so its inner loops only ran once; the j and k initializations are moved inside the outer loops below to make it equivalent to the for version, and the input array is sized so that input[j + k] stays in bounds.)

int length = 200;
int test = 0;
int[] input = new int[2 * length];

long startTime = new Date().getTime();
for (int i = 1; i <= length; i++) {
    for (int j = 0; j <= length - i; j++) {
        for (int k = 0; k < length - 1; k++) {
            test = test + input[j + k];
        }
    }
}
long endTime = new Date().getTime();
long difference = endTime - startTime;
System.out.println("For - Elapsed time in milliseconds: " + difference);

test = 0;
input = new int[2 * length];

startTime = new Date().getTime();
int i = 1;
while (i <= length) {
    int j = 0; // reset for every i, to match the for version
    while (j <= length - i) {
        int k = 0; // reset for every j
        while (k < length - 1) {
            test = test + input[j + k];
            k++;
        }
        j++;
    }
    i++;
}
endTime = new Date().getTime();
difference = endTime - startTime;
System.out.println("While - Elapsed time in milliseconds: " + difference);
The for loop and the while loop are both iteration statements, but each has distinct features.
Syntax
While Loop
// setup counter variable
int counter = 0;
while (condition) {
    // instructions
    // update counter variable
    counter++; // --> counter = counter + 1;
}
For Loop
for (initialization; condition; iteration) {
    // body of for loop
}
The for loop has all of its declarations (initialization, condition, iteration) at the top of the body of the loop. Conversely, in a while loop only the initialization and condition are at the top of the body of the loop, and the iteration may be written anywhere in the body of the loop.
Key Differences Between for and while loop
In the for loop, initialization, condition checking, and increment or decrement of iteration variable are done explicitly in the syntax of a loop only. As against, in the While loop, we can only initialize and check conditions in the syntax of the loop.
When we are aware of the number of iterations that have to occur in the execution of a loop, then we use for loop. On the other hand, if we are not aware of the number of iteration that has to occur in a loop, then we use a while loop.
If you fail to put the condition statement in the for loop, it will lead to an infinite iteration of a loop. In contrast, if you fail to put a condition statement in the while loop it will lead to a compilation error.
The initialization statement in the syntax of the for loop executes only once, at the start of the loop. Conversely, a while loop has no initialization slot in its syntax: the initialization is written before the loop (and likewise runs once), but if you place it inside the loop body instead, it will execute on every iteration.
The iteration statement in the for loop executes after the body of the for loop executes. On the contrary, the iteration statement can be written anywhere in the body of the while loop, so there can be some statements that execute after the iteration statement in the body of the while loop.
Based on this: https://jsperf.com/loops-analyze (not created by me), the while loop is 22% slower than a for loop in general. At least in JavaScript it is.
