Difference between multiple System.out.print() and concatenation - java

Basically, I was wondering which approach is better practice,
for (int i = 0; i < 10000; i++) {
    System.out.print("blah");
}
System.out.println("");
or
String over_9000_blahs = "";
for (int i = 0; i < 10000; i++) {
    over_9000_blahs += "blah";
}
System.out.println(over_9000_blahs);
or is there an even better way that I'm not aware of?

Since you are only writing to System.out, the first approach is better. BUT if performance is important to you, use the method below (System.out.println is synchronized and uses locking; you can read more about it here and here).
If you want to use the "big string" later, or to improve performance, it's cleaner to use StringBuilder (see below). In any case, String + will be translated into StringBuilder by the compiler (more details here):
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < 10000; i++) {
    stringBuilder.append("blah");
}
System.out.println(stringBuilder.toString());

You want to use StringBuilder if you're concatenating strings in a loop with a large iteration count.
for (int i = 0; i < 10000; i++) {
    over_9000_blahs += "blah";
}
What this does for each iteration is:
Create a new StringBuilder internally, with an internal char array large enough to accommodate the intermediate result (over_9000_blahs)
Copy the characters from over_9000_blahs into the internal array
Copy the characters from "blah"
Create a new String, copying the characters from the internal array yet again
So that is two copies of the increasingly long string per iteration, which means quadratic time complexity.
Since System.out.println() might be synchronized, there's a chance that calling it repeatedly will be slower than using StringBuilder (but my guess would be it won't be slower than concatenating the string in the loop using +=).
So the StringBuilder approach should be the best of the three.
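The two copies per iteration can be made explicit by writing out roughly what the compiler generates for over_9000_blahs += "blah" (a sketch; the exact desugaring depends on the javac version):

```java
public class ConcatDesugar {
    public static void main(String[] args) {
        String over9000Blahs = "";
        for (int i = 0; i < 10; i++) {
            // Roughly what `over9000Blahs += "blah"` compiles to: a fresh
            // StringBuilder per iteration that copies the whole accumulated
            // string in, then copies it back out as a brand-new String.
            StringBuilder tmp = new StringBuilder();
            tmp.append(over9000Blahs); // copies i * 4 chars
            tmp.append("blah");
            over9000Blahs = tmp.toString(); // copies everything again
        }
        System.out.println(over9000Blahs.length()); // prints 40
    }
}
```

With 10000 iterations, those two full copies per pass are exactly where the quadratic cost comes from.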

By performance, in order:
StringBuilder - the fastest. Basically, it just adds the characters into an array of characters; when the capacity is not enough, it multiplies the capacity. That should occur no more than log(10000) times.
System.out.print - worse performance than StringBuilder because we need to take the lock 10000 times. In addition, print creates a new char[writeBufferSize] 10000 times, while in the StringBuilder option we do all of that only once!
Concatenating strings - creates many (and later also big) objects; starting from some 'i', memory management will impact the performance badly.
EDIT:
To be more accurate: the question was really about the difference between option 2 and option 3, since it is already very clear why StringBuilder is fast.
We can say that every iteration in the second approach takes K time, because the code is the same and the length of the printed string is the same for every iteration. At the end of execution, the second option will have taken 10000*K time for 10000 iterations. We can't say the same about the third approach, because the length of the string keeps increasing with each iteration, so the time for allocating the objects and garbage-collecting them increases too. What I'm trying to say is that the execution time does not increase linearly in the third option.
So it is possible that for a low NumberOfIterations we won't see a difference between the last two approaches. But we know that starting from some specific NumberOfIterations, the second option is always better than the third one.
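For completeness, here is a rough micro-benchmark comparing += against StringBuilder (a sketch with no warm-up phase, so treat the numbers as indicative only; absolute times vary by machine):

```java
public class ConcatVsBuilder {
    public static void main(String[] args) {
        final int n = 10_000;

        // Quadratic: += copies the whole accumulated string each pass
        long t0 = System.nanoTime();
        String viaConcat = "";
        for (int i = 0; i < n; i++) {
            viaConcat += "blah";
        }
        long concatNs = System.nanoTime() - t0;

        // Linear: append is amortized O(1) per call
        t0 = System.nanoTime();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("blah");
        }
        String viaBuilder = sb.toString();
        long builderNs = System.nanoTime() - t0;

        System.out.println("+=            : " + concatNs / 1_000_000 + " ms");
        System.out.println("StringBuilder : " + builderNs / 1_000_000 + " ms");
        System.out.println("equal results : " + viaConcat.equals(viaBuilder)); // true
    }
}
```

As the later answers point out, a benchmark without warm-up is not authoritative; it only illustrates the order-of-magnitude gap.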

In this case, I'd say the first one is better. Java uses StringBuilder for string concatenation to increase performance, but since Java doesn't know you are repeatedly doing concatenations in a loop, as in the second case, the first case would be better.

If you only want to sysout your values, the result is the same.
The second option will create many strings in memory, which the GC (garbage collector) will have to take care of. (In newer versions of Java this problem does not occur, because the concatenation is transformed behind the scenes into the StringBuilder solution below.)
If you want to use your string later, after the sysout, you should look at the StringBuilder class and its append method:
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 10000; i++) {
    sb.append("blah");
}
System.out.println(sb);

Related

Algorithm, Big O notation: Is this function O(n^2)? Or O(n)?

This is code from an algorithms book, "Data Structures and Algorithms in Java, 6th Edition" by Michael T. Goodrich, Roberto Tamassia, and Michael H. Goldwasser:
public static String repeat1(char c, int n) {
    String answer = "";
    for (int j = 0; j < n; j++) {
        answer += c;
    }
    return answer;
}
According to the authors, the Big O notation of this algorithm is O(n^2) with reason:
"The command, answer += c, is shorthand for answer = (answer + c). This
command does not cause a new character to be added to the existing String
instance; instead it produces a new String with the desired sequence of
characters, and then it reassigns the variable, answer, to refer to that new
string. In terms of efficiency, the problem with this interpretation is that
the creation of a new string as a result of a concatenation, requires time
that is proportional to the length of the resulting string. The first time
through this loop, the result has length 1, the second time through the loop
the result has length 2, and so on, until we reach the final string of length
n."
However, I do not understand how this code can be O(n^2), as its number of primitive operations per iteration stays the same regardless of the value of n (excluding j < n and j++).
The statement answer += c requires two primitive operations each iteration regardless of the value of n, therefore I think the cost function for this method should be something like 4n + 3.
Or is the sentence, "In terms of efficiency, the problem with this interpretation is that the creation of a new string as a result of a concatenation requires time that is proportional to the length of the resulting string," simply saying that creating a new string by concatenation requires time proportional to its length, regardless of the number of primitive operations used in the function? So the number of primitive operations does not have a big effect on the running time, because the built-in string concatenation makes the function run in O(n^2)?
How can this function be O(n^2)?
Thank you for your support.
During every iteration of the loop, the statement answer += c; must copy each and every character already in the string answer to a new string.
E.g. n = 5, c = '5'
First loop: answer is an empty string, but it must still create a new string. There is one operation to append the first '5', and answer is now "5".
Second loop: answer will now point to a new string, with the first '5' copied into a new string and another '5' appended, to make "55". Not only is a new String created; one character '5' is copied from the previous string and another '5' is appended: two characters in total.
"n"th loop: answer will now point to a new string, with n - 1 '5' characters copied to a new string, and an additional '5' character appended, to make a string with n 5s in it.
The number of characters copied is 1 + 2 + ... + n = n(n + 1)/2. This is O(n^2).
The efficient way to constructs strings like this in a loop in Java is to use a StringBuilder, using one object that is mutable and doesn't need to copy all the characters each time a character is appended in each loop. Using a StringBuilder has a cost of O(n).
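A sketch of that StringBuilder version of the book's repeat1 (the name repeat2 is mine, not the book's):

```java
public class RepeatDemo {
    // O(n) rewrite of the book's repeat1 using StringBuilder
    public static String repeat2(char c, int n) {
        // Pre-sizing to n avoids even the internal array doublings
        StringBuilder sb = new StringBuilder(n);
        for (int j = 0; j < n; j++) {
            sb.append(c); // amortized O(1) per append, no full copy
        }
        return sb.toString(); // one final O(n) copy
    }

    public static void main(String[] args) {
        System.out.println(repeat2('x', 5)); // prints xxxxx
    }
}
```

One mutable buffer replaces n immutable intermediate Strings, which is exactly what turns the n(n+1)/2 copies above into a single linear pass.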
Strings are immutable in Java. I believe this terrible code is O(n^2) for that reason and only that reason. It has to construct a new String on each iteration. I'm unsure if String concatenation is truly linearly proportional to the number of characters (it seems like it should be a constant time operation since Strings have a known length). However if you take the author's word for it then iterating n times with each iteration taking a time proportional to n, you get n^2. StringBuilder would give you O(n).
I mostly agree with it being O(n^2) in practice, but consider:
Java is SMART. In many cases it uses StringBuilder instead of string for concatenation under the covers. You can't just assume it's going to copy the underlying array every time (although it almost certainly will in this case).
Java gets SMARTER all the time. There is no reason it couldn't optimize that entire loop based on StringBuilder since it can analyze all your code and figure out that you don't use it as a string inside that loop.
Further optimizations can happen. Strings currently use an array AND a length AND a shared flag (and maybe a start location so that splits wouldn't require copying; I forget, but they changed that split implementation anyway), so appending into an oversized array and then returning a new string with a reference to the same underlying array but a higher end, without mutating the original string, is altogether possible (by design, they already do things like this to a degree)...
So I think the real question is, is it a great idea to calculate O() based on a particular implementation of a language-level construct?
And although I can't say for sure what the answer to that is, I can say it would be a REALLY BAD idea to optimize on the assumption that it was O(n^2) unless you absolutely needed it--you could take away java's ability to speed up your code later by hand optimizing today.
P.S. This is from experience. I had to optimize some Java code that was the UI for a spectrum analyzer. I saw all sorts of String+ operations and figured I'd clean them all up with .append(). It saved NO time, because Java already optimizes String+ operations that are not in a loop.
The complexity becomes O(n^2) because each time the string grows in length by one, and creating it each time takes up to n work. Also, the outer loop runs n times. So the exact cost is (n * (n + 1)) / 2, which is O(n^2).
For example,
For abcdefg
a // one length string object is created so complexity is 1
ab // similarly complexity is 2
abc // complexity 3 here
abcd // 4 now.
abcde // ans so on.
abcdef
abcdefg
Now, you see the total complexity is 1 + 2 + 3 + 4 + ... + n = (n * (n+1))/2. In big O notation it's O(n^2)
Consider the length of the string to be "n". Every time, we need to add an element at the end, so iterating over the string costs "n", and we also have the outer for loop, which is another "n". As a result we get O(n^2).
That is because:
answer += c;
is a String concatenation, and in Java Strings are immutable.
That means the concatenated string is created by making a copy of the original string and appending c to it. So a single concatenation operation is O(n) for an n-sized String.
In the first iteration answer's length is 0, in the second it's 1, in the third it's 2, and so on.
So you're doing these operations every time, i.e.
1 + 2 + 3 + ... + n = O(n^2)
For string manipulation StringBuilder is the preferred way: it appends a character in amortized O(1) time.
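The amortized O(1) claim comes from StringBuilder growing its internal array geometrically; a small sketch (using the public capacity() method) shows that resizes happen only O(log n) times across 10000 appends:

```java
public class CapacityGrowth {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        int lastCapacity = sb.capacity(); // default is 16
        int resizes = 0;
        for (int i = 0; i < 10_000; i++) {
            sb.append('x');
            if (sb.capacity() != lastCapacity) {
                // The internal array was reallocated (roughly doubled)
                resizes++;
                lastCapacity = sb.capacity();
            }
        }
        // Capacity roughly doubles on each resize, so `resizes` stays
        // around log2(10000) ~ 14 rather than 10000.
        System.out.println("length=" + sb.length() + " resizes=" + resizes);
    }
}
```

The total copying work across all resizes is bounded by a geometric series, which is why each append is O(1) on average.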

In Java, for primitive arrays, is reusing arrays significantly faster than repeatedly recreating them?

In Java, for primitive arrays, is reusing arrays significantly faster than repeatedly recreating them?
The following snippet compares the two cases: (a) reusing an array via System.arraycopy vs. (b) repeatedly creating an array of specified values, in an intensive loop.
public static void testArrayAllocationVsCopyPerformance() {
    long time = System.currentTimeMillis();
    final int length = 3000;
    boolean[] array = new boolean[length];
    boolean[] backup = new boolean[length];
    //for (int j = 0; j < length; j++) {
    //    backup[j] = true;
    //}
    for (int i = 0; i < 10000000; i++) {
        // (a) array copy
        //System.arraycopy(backup, 0, array, 0, length);
        // (b) reconstruct arrays
        array = new boolean[length];
        //for (int j = 0; j < length; j++) {
        //    array[j] = true;
        //}
    }
    long millis = System.currentTimeMillis() - time;
    System.out.println("Time taken: " + millis + " milliseconds.");
    System.exit(0);
}
On my PC, (b) takes around 2600 milliseconds on average, while (a) takes around 450 milliseconds on average. For recreating an array with different values, the performance gap grows wider: (b) takes around 3750 milliseconds on average, while (a) remains constant, still around 450 milliseconds on average.
In the snippet above, if 'boolean' is changed to 'int', the results are similar: reusing the int array takes around one third of the time of recreating arrays. In addition, (b) is not far less readable than (a); it is only slightly less readable, and it does not need the 'backup' array.
However, answers to similar questions on Stack Overflow or Stack Exchange regarding Java object creation are usually things like "don't optimize it until it becomes a bottleneck", "the JIT or JVM today can handle these better and faster than you can", "keep it simple for readability", etc. And these kinds of answers are typically well received.
The question is: can the performance comparison snippet above show that array copy is significantly faster than array re-creation for short-lived primitive arrays? Is the snippet above flawed? Or should people still not optimize until it becomes a bottleneck, etc.?
Can the performance comparison snippet above show that array copy is significantly faster than array re-creation for short-lived primitive arrays?
Yes, but you don't really need to prove it. An array occupies a contiguous space in memory, and System.arraycopy is a native function dedicated to copying arrays. It is obvious that it will be faster than creating the array, iterating over it, incrementing the counter on every iteration, checking the boolean expression for loop termination, assigning the primitive to a certain position in the array, and so on.
You should also remember that compilers nowadays are quite smart and they might replace your code with a more efficient version. In such case, you would not observe any difference in the test you wrote. Also, keep in mind that Java uses just-in-time compiler, which might optimize your code after you run it a lot of times and it decides that the optimization is worth doing.
Is the snippet above flawed?
Yes, it is flawed. What you are doing here is microbenchmarking, but you haven't done any warm-up phase. I suggest doing some more reading on this topic.
Or should people still not optimize until it becomes a bottleneck, etc.?
You should never do premature optimizations. If there are performance issues, run the code with profiler enabled, identify bottleneck and fix the problem.
However, you should also use some common sense. If you have a List and are adding elements at the front, use LinkedList, not ArrayList. If you need to copy entire array, use System.arraycopy instead of looping over it and doing it manually.
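A minimal sketch of the full-array copy mentioned above (System.arraycopy and Arrays.copyOf are both standard library calls):

```java
import java.util.Arrays;

public class CopyDemo {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};
        int[] dst = new int[src.length];

        // One native call copies the whole region instead of a manual loop
        System.arraycopy(src, 0, dst, 0, src.length);
        System.out.println(Arrays.equals(src, dst)); // true

        // Arrays.copyOf allocates and copies in one step; equally idiomatic
        int[] dst2 = Arrays.copyOf(src, src.length);
        System.out.println(Arrays.equals(src, dst2)); // true
    }
}
```

Use System.arraycopy when the destination already exists (the reuse case above), and Arrays.copyOf when you want a fresh copy.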

Time complexity of system.out.println

I've been told different things over my course on algorithms, and was wondering if I could get a definitive answer as to the time complexity of Java's System.out.println() command.
For example, what would the time complexity of the following be, with respect to N?
String stringy = "";
while (stringy.length() < N) {
    System.out.println(stringy);
    stringy += "X";
}
Thanks for helping out the new guy!
The time complexity of this code is O(N*N), because it's a loop of N iterations that prints. I don't know what you have been told, but the time complexity of printing is not worse than O(N) in Java.
In your code you add an "X" to each line, and therefore your printing will be:
X
XX
XXX
XXXX
XXXXX
XXXXXX
.
.
.
so its complexity is calculated as an arithmetic progression, and we get:
(1 + N) * N / 2 = O(N^2)
To read about how the command works, you can look here or here:
There is a general notion that SOPs are bad in performance. When we
analyze deeply, the sequence of calls are like println -> print ->
write() + newLine(). This sequence flow is an implementation of
Sun/Oracle JDK. Both write() and newLine() contains a synchronized
block. Synchronization has a little overhead, but more than that the
cost of adding characters to the buffer and printing is high.
When we run a performance analysis, run multiple number of SOP and
record the time, the execution duration increases proportionally.
Performance degrades when we print more than 50 characters and print
more than 50,000 lines.
It all depends on the scenario we use it. Whatever may be the case, do
not use System.out.println for logging to stdout.
I ran a basic Python program to check the time complexity of the print statement in Python for a variable number of characters to be printed. The code goes as follows:
import time

def current_milli_time():
    return round(time.time() * 1000)
=====================================
startTime1 = current_milli_time()
for i in range(10000):
    print("a", end="")
endTime1 = current_milli_time()
=====================================
startTime2 = current_milli_time()
for i in range(10000):
    print("ab", end="")
endTime2 = current_milli_time()
=====================================
startTime3 = current_milli_time()
for i in range(10000):
    print("abc", end="")
endTime3 = current_milli_time()
=====================================
print("\nTime(ms) for first case: ", endTime1 - startTime1)
print("Time(ms) for second case: ", endTime2 - startTime2)
print("Time(ms) for third case: ", endTime3 - startTime3)
We can see that in the first case we printed only "a", in the second case "ab", and in the third case "abc"; the running time increased linearly with the number of characters.
Therefore, it can be said that for every language the print statement takes O(lengthOfString) time.
Time complexity tells you how much more work your algorithm has to do per increment of input size, give or take some constant coefficient.
So an upper-bound complexity of O(2N) is equal to a complexity of O(23587N), because the actual definition, found here:
http://en.wikipedia.org/wiki/Big_O_notation
states that the coefficient can be any number, no matter how large, as long as it is fixed with regard to the size of the input.
Because you are not using 'N' within the loop, and are just adding a char onto a String, the amount of work per iteration is constant, so across N iterations the total is O(N).
If you had stringy += stringy; instead, it would be O(N^2), because each iteration you double the amount of work you have to do.
**NOTE
I am assuming System.out.print is an atomic statement, i.e. it prints all the characters as a single action. If it printed each character individually, then it's O(N^2)...
The complexity of this code is O(n^2). It iterates the loop N times, and because System.out.println must print each character, it prints from 0 to N characters each iteration, averaging N/2; dropping the constant, N*N = N^2. In the same manner, appending to the string causes the entire string to be copied (Strings are immutable in Java, so any change means copying the entire string into a new string). This is another linear operation. So you have n * (n/2 + n/2), which is still of quadratic order: O(n^2).
String stringy = "";
while (stringy.length() < N) {    // will iterate N times
    System.out.println(stringy);  // has to print N letters
    stringy += "X";               // has to copy N letters into a new string
}
A great answer can be found here:
http://www.quora.com/What-exactly-is-the-time-complexity-for-System-out-println-in-Java-O-1-or-O-N
The main idea is that printing a string actually copies it to stdout, and we know that copying a string is O(n).
The second part says that you can try printing, a large number of times:
- one character
- a very large string
And you will see the time difference! (If printing were O(1), you wouldn't.)
Time complexity of the System.out.println(stringy); command???
You basically mean the time complexity of the code snippet above. Look, time complexity is not particularly tied to one specific piece of code or language; it describes how much time, theoretically, a piece of code will take. This usually depends on two or three things:
size of the input
degree of the polynomial (in the case of solving polynomial equations)
Now in this part of your code:
String stringy = "";
while (stringy.length() < N) {   // the loop will execute on the order of N times
    System.out.println(stringy); // println also takes on the order of N steps, printing each character
    stringy += "X";
}
It will obviously depend on the size of the input, which is of course the length of the string.
First, the while loop executes a little less than N times (because of the condition stringy.length() < N; making it <= would make it run through the full length of the string), which we can say is on the order of N, and printing the string is also done on the order of N, so overall the code has a running time of O(N^2).
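The arithmetic-series argument above can be checked by counting the characters the loop actually writes (a sketch; N = 100 is an arbitrary choice):

```java
public class PrintedChars {
    public static void main(String[] args) {
        int N = 100;
        String stringy = "";
        long printed = 0;
        while (stringy.length() < N) {
            // println writes the current string plus one newline character
            printed += stringy.length() + 1;
            stringy += "X";
        }
        // Sum of lengths 0 + 1 + ... + (N-1) = N*(N-1)/2 = 4950,
        // plus N newlines = 5050 characters in total
        System.out.println(printed); // prints 5050
    }
}
```

The total output grows as N*(N-1)/2, i.e. quadratically, even though each single println is only linear in its argument.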

What's an effective way to reuse ArrayLists in a for loop?

I'm reusing the same ArrayList in a for loop, and I use
for loop
results = new ArrayList<Integer>();
experts = new ArrayList<Integer>();
output = new ArrayList<String>();
....
to create new ones.
I guess this is wrong, because I'm allocating new memory. Is this correct?
If yes, how can I empty them instead?
Added: another example
I'm creating new variables each time I call this method. Is this good practice? I mean, creating a new precision, relevantFound, etc.? Or should I declare them in my class, outside the method, so as not to allocate more and more memory?
public static void computeMAP(ArrayList<Integer> results, ArrayList<Integer> experts) {
    // compute MAP
    double precision = 0;
    int relevantFound = 0;
    double sumprecision = 0;
thanks
ArrayList.clear() will empty them for you; note that doing it your way is also 'okay', since Java is garbage-collected, so the old allocations will eventually get cleaned up. Still, it's better to avoid lots of new allocations (and garbage generation), so the better way would be to move those declarations outside the loop and put in calls to clear inside it.
For your second example, either way is fine; primitive types are typically going to get allocated only once (on the stack, when you enter the function), declaring them inside a loop doesn't increase the cost any. It's only heap allocations (i.e. calls to new) you need to worry about.
In response to comment:
If it doesn't make sense for those things to be instance members, then don't make them such. Also, using new to 'clean' them means allocating new objects every time; definitely don't do that - if your method needs a new copy of something every time it's invoked, and it isn't used anywhere except that method, then it has no business being an instance variable.
In general, worrying about such micro-optimizations at this point is counter-productive; you only think about it if you really, absolutely have to, and then measure whether there's a benefit before doing anything.
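A minimal sketch of the hoist-and-clear() pattern recommended above (the loop bounds and values are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseList {
    public static void main(String[] args) {
        // Declare the list once, outside the loop, and clear() it each
        // pass instead of allocating a fresh ArrayList every iteration.
        List<Integer> results = new ArrayList<>();
        for (int pass = 0; pass < 3; pass++) {
            results.clear();
            for (int i = 0; i < 5; i++) {
                results.add(i);
            }
            System.out.println(results.size()); // 5 on every pass
        }
    }
}
```

Note that clear() keeps the backing array at its grown capacity, so later passes also avoid the incremental resizing a brand-new list would repeat.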
The code snippet below measures the difference between allocating a new list inside the loop and calling clear() to reuse an existing list.
Allocating a new list is slower, as pointed out a few times above. This gives an idea of how much.
Note that the code loops 100,000 times to get those numbers. For UI code the difference may not matter. For other applications it can be a significant improvement to reuse the list.
This is the result of three runs:
Elapsed time - in the loop: 2198
Elapsed time - with clear(): 1621
Elapsed time - in the loop: 2291
Elapsed time - with clear(): 1621
Elapsed time - in the loop: 2182
Elapsed time - with clear(): 1605
Having said that, if the lists are holding hundreds or even thousands of objects, the allocation of the array itself will pale in comparison with the allocation of the objects. The performance bottleneck will be related to the objects being added to the array, not with the array.
For completeness: code was measured with Java 1.6.0_19, running on a Centrino 2 laptop with Windows. However, the main point is the difference between them, not the exact number.
import java.util.*;

public class Main {
    public static void main(String[] args) {
        // Allocates a new list inside the loop
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++) {
            List<String> l1 = new ArrayList<String>();
            for (int j = 0; j < 1000; j++)
                l1.add("test");
        }
        System.out.println("Elapsed time - in the loop: " + (System.currentTimeMillis() - startTime));

        // Reuses the list
        startTime = System.currentTimeMillis();
        List<String> l2 = new ArrayList<String>();
        for (int i = 0; i < 100000; i++) {
            l2.clear();
            for (int j = 0; j < 1000; j++)
                l2.add("test");
        }
        System.out.println("Elapsed time - with clear(): " + (System.currentTimeMillis() - startTime));
    }
}
First, allocating primitive types is practically free in Java, so don't worry about that.
With regard to objects, it really depends on the loop. If it's a tight loop running 100k times, then yes, it's a big deal to allocate 3 ArrayList objects each time through the loop. It'd be better to allocate them outside of the loop and use List.clear().
You also have to consider where the code is running. If it's a mobile platform, you will be more concerned about frequent garbage collection than you would be on a server with 256GB of RAM and 64 CPUs.
That all being said, no one is going to beat you up for coding for performance, whatever the platform. Performance is often a trade-off with code cleanliness. For example, on the Android platform they recommend using the for (int i = 0 ...) syntax to loop through ArrayLists vs. for (Object o : someList). The latter method is cleaner, but on a mobile platform the performance difference is significant. In this case I don't think clear()'ing outside of the loop makes things any harder to understand.
ArrayLists allocate default memory for a handful of entries. These entries are references, which need 4 bytes each (or 8, depending on the architecture). An ArrayList also contains an int for its "real" length, and every object (even one without instance variables) carries a default overhead of 16 bytes, so you end up with at least 40 bytes for each new ArrayList(). Depending on whether you store them all, or how many you have, this might be a performance loss.
Note, however, that starting with Java 1.6.16 the JVM has a (default off?) feature which "inlines" objects within a function if no reference to those objects leaves the method's context. In that case all instance variables are compiled in as "local" variables of the calling function, so no real objects are created.
Another issue to take into consideration here is how garbage collection is affected. It is clear that reusing the same ArrayList references and using ArrayList.clear() reduces instance creations.
However, garbage collection is not so simple, and apparently here we force 'old' objects to reference 'newer' objects. That means more old-to-young references (i.e. references from objects in the old generation to ones in the young generation). This kind of references result in more work during garbage collection (See this article for example).
I never tried to benchmark this, and I don't know how significant this is, but I thought it could be relevant for this discussion. Maybe if the number of list items significantly outnumbers the number of lists, it is not worthwhile to use the same lists.

modern for loop for primitive array

Is there any performance difference between the for loops on a primitive array?
Assume:
double[] doubleArray = new double[300000];
for (double var : doubleArray)
    someComplexCalculation(var);
or :
for (int i = 0, y = doubleArray.length; i < y; i++)
    someComplexCalculation(doubleArray[i]);
Test result
I actually profiled it:
Total time used for modern loop = 13269 ms
Total time used for old loop = 15370 ms
So the modern loop actually runs faster, at least on my Mac OS X JVM 1.5.
Your hand-written, "old" form executes fewer instructions, and may be faster, although you'd have to profile it under a given JIT compiler to know for sure. The "new" form is definitely not faster.
If you look at the disassembled code (compiled by Sun's JDK 1.5), you'll see that the "new" form is equivalent to the following code:
1: double[] tmp = doubleArray;
2: for (int i = 0, y = tmp.length; i < y; i++) {
3: double var = tmp[i];
4: someComplexCalculation(var);
5: }
So, you can see that more local variables are used. The assignment of doubleArray to tmp at line 1 is "extra", but it doesn't occur in the loop, and probably can't be measured. The assignment to var at line 3 is also extra. If there is a difference in performance, this would be responsible.
Line 1 might seem unnecessary, but it's boilerplate to cache the result if the array is computed by a method before entering the loop.
That said, I would use the new form, unless you need to do something with the index variable. Any performance difference is likely to be optimized away by the JIT compiler at runtime, and the new form is more clear. If you continue to do it "by hand", you may miss out on future optimizations. Generally, a good compiler can optimize "stupid" code well, but stumbles on "smart" code.
My opinion is that you don't know and shouldn't guess. Trying to outsmart compilers these days is fruitless.
There have been times people learned "Patterns" that seemed to optimize some operation, but in the next version of Java those patterns were actually slower.
Always write it as clear as you possibly can and don't worry about optimization until you actually have some user spec in your hand and are failing to meet some requirement, and even then be very careful to run before and after tests to ensure that your "fix" actually improved it enough to make that requirement pass.
The compiler can do some amazing things that would really blow your socks off, and even if you make some test that iterates over some large range, it may perform completely differently if you have a smaller range or change what happens inside the loop.
Just-in-time compiling means it can occasionally outperform C, and there is no reason it can't outperform static assembly language in some cases (assembly can't determine beforehand that a call isn't required; Java can, at times, do just that).
To sum it up: the most value you can put into your code is to write it to be readable.
There is no difference. Java transforms the enhanced for into a normal for loop; the enhanced for is just syntactic sugar. The bytecode generated is the same for both loops.
Why not measure it yourself?
This sounds a bit harsh, but this kind of questions are very easy to verify yourself.
Just create the array and execute each loop 1000 or more times, and measure the amount of time. Repeat several times to eliminate glitches.
I got very curious about your question, even after my previous answer. So I decided to check it myself too. I wrote this small piece of code (please ignore math correctness about checking if a number is prime ;-)):
public class TestEnhancedFor {

    public static void main(String args[]) {
        new TestEnhancedFor();
    }

    public TestEnhancedFor() {
        int numberOfItems = 100000;
        double[] items = getArrayOfItems(numberOfItems);
        int repetitions = 0;
        long start, end;
        do {
            start = System.currentTimeMillis();
            doNormalFor(items);
            end = System.currentTimeMillis();
            System.out.printf("Normal For. Repetition %d: %d\n",
                    repetitions, end - start);

            start = System.currentTimeMillis();
            doEnhancedFor(items);
            end = System.currentTimeMillis();
            System.out.printf("Enhanced For. Repetition %d: %d\n\n",
                    repetitions, end - start);
        } while (++repetitions < 5);
    }

    private double[] getArrayOfItems(int numberOfItems) {
        double[] items = new double[numberOfItems];
        for (int i = 0; i < numberOfItems; i++)
            items[i] = i;
        return items;
    }

    private void doSomeComplexCalculation(double item) {
        // check if item is a prime number
        for (int i = 3; i < item / 2; i += 2) {
            if ((item / i) == (int) (item / i)) break;
        }
    }

    private void doNormalFor(double[] items) {
        for (int i = 0; i < items.length; i++)
            doSomeComplexCalculation(items[i]);
    }

    private void doEnhancedFor(double[] items) {
        for (double item : items)
            doSomeComplexCalculation(item);
    }
}
Running the app gave the following results for me:
Normal For. Repetition 0: 5594
Enhanced For. Repetition 0: 5594
Normal For. Repetition 1: 5531
Enhanced For. Repetition 1: 5547
Normal For. Repetition 2: 5532
Enhanced For. Repetition 2: 5578
Normal For. Repetition 3: 5531
Enhanced For. Repetition 3: 5531
Normal For. Repetition 4: 5547
Enhanced For. Repetition 4: 5532
As we can see, the variation among the results is very small; sometimes the normal loop runs faster, sometimes the enhanced loop. Since there are other apps open on my computer, I find that normal. Also, only the first execution is slower than the others; I believe this has to do with JIT optimizations.
Average times (excluding the first repetition) are 5535.25 ms for the normal loop and 5547 ms for the enhanced loop. But we can see that the best running time for both loops is the same (5531 ms), so I think we can conclude that both loops have the same performance, and the variations in elapsed time are due to other applications (even the OS) running on the machine.
