Java autoboxing performance comparison

// Hideously slow program! Can you spot the object creation?
Long sum = 0L;
long start = System.currentTimeMillis();
long end;
for (long i = 0; i < Integer.MAX_VALUE; i++) {
    sum += i;
}
end = System.currentTimeMillis();
System.out.println("Long sum took: " + (end - start) + " milliseconds");

long sum2 = 0L;
for (long i = 0; i < Integer.MAX_VALUE; i++) {
    sum2 += i;
}
end = System.currentTimeMillis();
System.out.println("long sum took: " + (end - start) + " milliseconds");
Hi, I am reading Effective Java, and in Item 6: Avoid creating unnecessary objects, there is an example suggesting preferring primitives to boxed primitives to avoid unnecessary object creation.
The author says, "Changing the declaration of sum from Long to long reduces the runtime from 43 seconds to 6.8 seconds on my machine." and continues, "The lesson is clear: prefer primitives to boxed primitives, and watch out for unintentional autoboxing".
But when I run it on my machine, the primitive version is slower than the boxed one.
The output of the program above:
Long sum took: 5905 milliseconds
long sum took: 7013 milliseconds
The results are not what I expected, given that the author says, "The variable sum is declared as a Long instead of a long, which means that the program constructs about 2^31 unnecessary Long instances (roughly one for each time the long i is added to the Long sum)".
Why is using primitive slower than using object?

You didn't reset the starting point for the second measurement, so the second print includes the first loop as well. The actual primitive time is the difference between your two printed values (7013 - 5905 = 1108 ms, which is indeed much better than the wrapper's). Try this:
// Hideously slow program! Can you spot the object creation?
Long sum = 0L;
long start = System.currentTimeMillis();
long end;
for (long i = 0; i < Integer.MAX_VALUE; i++) {
    sum += i;
}
end = System.currentTimeMillis();
System.out.println("Long sum took: " + (end - start) + " milliseconds");

long sum2 = 0L;
// reset start!!
start = System.currentTimeMillis();
for (long i = 0; i < Integer.MAX_VALUE; i++) {
    sum2 += i;
}
end = System.currentTimeMillis();
System.out.println("long sum took: " + (end - start) + " milliseconds");
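To see the boxing itself, note that `sum += i` on a Long compiles to an unbox, a primitive add, and a re-box via Long.valueOf. One way to make the object creation visible is through reference identity: the Long.valueOf Javadoc guarantees a cache for values in -128..127, and on typical HotSpot builds values outside that range get a fresh object each time. A small illustrative sketch (the cache behavior outside -128..127 is implementation-dependent):

```java
public class BoxingDemo {
    public static void main(String[] args) {
        Long a = 127L, b = 127L;   // autoboxed via Long.valueOf: cached range
        Long c = 128L, d = 128L;   // outside the guaranteed cache

        System.out.println(a == b); // true: same cached instance (guaranteed by Javadoc)
        System.out.println(c == d); // usually false on HotSpot: two distinct objects
    }
}
```

This is exactly why the boxed loop allocates on the order of 2^31 Long instances: almost every sum value falls outside the cached range.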

Related

Timing bubble sorting in different scenarios

I am trying to determine the running times of the bubble sort algorithm on three different kinds of input:
1) randomly selected numbers
2) already sorted numbers
3) sorted in reverse order numbers
My expectation about their running times was:
Reverse-ordered numbers would take longer than the other two.
Already sorted numbers would have the fastest running time.
Randomly selected numbers would lie between these two.
I've tested the algorithm with inputs containing more than 100,000 numbers. The results weren't what I expected. Already sorted numbers had the fastest running time, but randomly selected numbers took almost twice as long to execute as reverse-ordered numbers. Why is this happening?
Here is how I test the inputs
int[] random = fillRandom();
int[] sorted = fillSorted();
int[] reverse = fillReverse();
int[] temp;
long time, totalTime = 0;

for (int i = 0; i < 100; i++) {
    temp = random.clone();
    time = System.currentTimeMillis();
    BubbleSort.sort(temp);
    time = System.currentTimeMillis() - time;
    totalTime += time;
}
System.out.println("random - average time: " + totalTime/100.0 + " ms");

totalTime = 0;
for (int i = 0; i < 100; i++) {
    temp = sorted.clone();
    time = System.currentTimeMillis();
    BubbleSort.sort(temp);
    time = System.currentTimeMillis() - time;
    totalTime += time;
}
System.out.println("sorted - average time: " + totalTime/100.0 + " ms");

totalTime = 0;
for (int i = 0; i < 100; i++) {
    temp = reverse.clone();
    time = System.currentTimeMillis();
    BubbleSort.sort(temp);
    time = System.currentTimeMillis() - time;
    totalTime += time;
}
System.out.println("reverse - average time: " + totalTime/100.0 + " ms");
Benchmarks for Java code are not easy, as the JVM may apply a lot of optimizations to your code at runtime. It can optimize a loop away if its result is not used, it can inline some code, the JIT can compile some code to native code, and many other things. As a result, benchmark output is very unstable.
There are tools like JMH that simplify benchmarking a lot.
I recommend checking this article; it has an example of a benchmark for a sorting algorithm.
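If pulling in JMH is not practical, a rough hand-rolled alternative is to warm the code up first, so the JIT has already compiled the hot path, and only then average many timed runs. A minimal sketch, with a stand-in bubbleSort in place of the question's BubbleSort.sort and illustrative sizes:

```java
import java.util.Random;

public class SortBench {
    // Stand-in for the question's BubbleSort.sort.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(2_000).toArray();

        // Warm-up: give the JIT a chance to compile bubbleSort before timing.
        for (int i = 0; i < 50; i++) bubbleSort(data.clone());

        int runs = 20;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) bubbleSort(data.clone());
        long elapsed = System.nanoTime() - start;

        System.out.printf("average: %.3f ms%n", elapsed / 1e6 / runs);
    }
}
```

This still isn't as reliable as JMH (no dead-code guards, no forked JVM), but it removes the worst of the warm-up noise.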

In Android, how to measure the execution time overhead programmatically?

Although there might be similar questions (such as A), their answers do not solve my problem.
I am using Android Studio 1.5.1 targeting Android API 18 (before Android KitKat 4.4, so I’m dealing with Dalvik, not ART runtime).
I have a modified Android build that adds a memory-space overhead to any variable that is used (the modification was specifically designed by its author; its details are outside the scope of this question). For example, if we declare an integer variable, it will be stored in 8 bytes (64 bits) instead of 4 bytes (32 bits). The modification is completely transparent to apps, which can run on the modified Android without any problem.
I need to measure that overhead in execution time, for example when I use variables.
Here is what I did so far, but it does not seem to work, because the overhead variable (at the end of //Method #1 in the code below) is inconsistent: sometimes it is negative, sometimes positive, sometimes zero. In the ideal solution it should always (or at least most of the time) be positive.
long start, end, time1, time2, overhead;

// Baseline
start = System.nanoTime();
total = 0; total += 1; total += 2; total += 3; total += 4; total += 5;
total += 6; total += 7; total += 8; total += 9;
end = System.nanoTime();
System.out.println("********************* The sum is " + total);
time1 = end - start;
System.out.println("********************* start=" + start + " end=" + end + " time=" + time1);

// Method #1
start = System.nanoTime();
total = a0() + a1() + a2() + a3() + a4() + a5() + a6() + a7() + a8() + a9();
end = System.nanoTime();
System.out.println("********************* The sum is " + total);
time2 = end - start;
System.out.println("********************* start=" + start + " end=" + end + " time=" + time2);

overhead = time2 - time1;
System.out.println("********************* overhead=" + overhead);
}

private int a0() { return 0; }
private int a1() { return 1; }
private int a2() { return 2; }
private int a3() { return 3; }
private int a4() { return 4; }
private int a5() { return 5; }
private int a6() { return 6; }
private int a7() { return 7; }
private int a8() { return 8; }
private int a9() { return 9; }
My question is:
In Android, how to measure that execution time overhead programmatically?
What you are describing is simply experimental error.
the overhead variable is inconsistent, sometime it is negative,
positive, or zero. In the ideal solution, it should be always (or at
least most of the time) positive.
I don't have an exact solution for your problem on Android, but when I have done experimental testing in other contexts, I typically run multiple iterations and then divide by the number of iterations to get an average.
Here is some pseudocode:
int N = 10000;
startTimer();
for (int i = 0; i < N; i++) {
    runExperiment();
}
stopTimer();
double averageRuntime = timer / N;
The problem is that the code you are trying to time executes faster than the resolution of System.nanoTime(). Try doing your additions in a loop, e.g.:
for (int i = 0; i < 1000; i++) {
    total += i;
}
Increase the loop count (1000) until you start getting reasonable elapsed times.
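The loop-amplification idea can be sketched as follows (plain Java; the iteration count and the `total += i` workload are illustrative stand-ins, not the modified-Android workload from the question):

```java
public class OverheadProbe {
    public static void main(String[] args) {
        int iterations = 1_000_000;
        long total = 0;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            total += i;  // the work being measured, repeated many times
        }
        long elapsed = System.nanoTime() - start;

        // Print the result so the JIT cannot eliminate the loop as dead code.
        System.out.println("checksum: " + total);
        System.out.printf("per-iteration cost: %.2f ns%n",
                (double) elapsed / iterations);
    }
}
```

With a million iterations the total elapsed time is milliseconds rather than nanoseconds, so timer resolution and scheduling jitter no longer dominate the measurement.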

Time to loop using long and double

The following piece of code takes different times with long and double; I am not able to understand why there is a difference in the timing.
public static void main(String[] args) {
    long j = 1000000000;
    double k = 1000000000;
    long t1 = System.currentTimeMillis();
    for (int index = 0; index < j; index++) {
    }
    long t2 = System.currentTimeMillis();
    for (int index = 0; index < k; index++) {
    }
    long t3 = System.currentTimeMillis();
    long longTime = t2 - t1;
    long doubleTime = t3 - t2;
    System.out.println("Time to loop long :: " + longTime);
    System.out.println("Time to loop double :: " + doubleTime);
}
Output:
Time to loop long :: 2322
Time to loop double :: 1510
long takes more time than double. I have a 64-bit Windows operating system and 64-bit Java.
When I modified my code to cast the long and the double to int, like this:
public static void main(String[] args) {
    long j = 1000000000;
    double k = 1000000000;
    long t1 = System.currentTimeMillis();
    for (int index = 0; index < (int)j; index++) {
    }
    long t2 = System.currentTimeMillis();
    for (int index = 0; index < (int)k; index++) {
    }
    long t3 = System.currentTimeMillis();
    long longTime = t2 - t1;
    long doubleTime = t3 - t2;
    System.out.println("Time to loop long :: " + longTime);
    System.out.println("Time to loop double :: " + doubleTime);
}
The time was reduced, but there is still a difference in the timing; this time double takes more time than long (the opposite of the first case).
Output:
Time to loop long :: 760
Time to loop double :: 1030
Firstly, a long is a 64-bit integer and a double is a 64-bit floating-point number. The timing difference will likely be due to the difference between integer arithmetic and floating-point arithmetic in your CPU's ALU.
Secondly, in your second version each for loop evaluates its stop condition on every iteration, so you are casting from a long or a double to an int on every iteration. If you cast the value to an int once, before the loop, you should get more consistent times:
int j_int = (int) j;
for(int index = 0; index < j_int; index++) { /* Body */ }
int k_int = (int) k;
for(int index = 0; index < k_int; index++) { /* Body */ }
In general, casting from long to int is simpler than from double to int.
The reason is that long and int are both whole numbers, represented in memory simply by their two's-complement binary representation.
Casting from one to the other is quite straightforward: the value's bits are simply cropped or sign-extended.
However, double is a floating-point number, and its binary representation is a bit more complicated, using a sign, a mantissa, and an exponent.
Casting from there to a whole number is thus more complicated, as it requires conversion from one binary format to the other first.
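The difference is visible at the bytecode level: long-to-int is a single l2i instruction that just drops the high 32 bits, while double-to-int is a d2i conversion that truncates toward zero and saturates at the int range. A small sketch of the observable semantics:

```java
public class CastDemo {
    public static void main(String[] args) {
        long big = 0x1_0000_0001L;       // 2^32 + 1
        System.out.println((int) big);   // l2i drops the high 32 bits -> 1

        double d = 3.99;
        System.out.println((int) d);     // d2i truncates toward zero -> 3

        System.out.println((int) 1e18);  // saturates -> Integer.MAX_VALUE
        System.out.println((int) -1e18); // saturates -> Integer.MIN_VALUE
    }
}
```

The saturation and round-toward-zero behavior is specified by the JLS narrowing-conversion rules, which is part of why the floating-point cast is the more expensive one.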

Java Fib iterative and Fib recursive time comparison

Please could you check my work and help guide me through the System.currentTimeMillis() function? I understand that it takes a snapshot of my computer's time, and when I call it again it takes another snapshot, and I use the difference of those times to get my run time. I'm just not sure I'm implementing it properly, as the times for my iterative function and my recursive one are almost always identical, or at most 1 ms apart. I'm a little confused as to whether my start time is reset before my iterative function starts, or whether my measured iterative time is really the iterative time plus my recursive function's time. Should my total iterative time be endTimeIter - endTimeRecur? Any help is appreciated.
public class FibTest {
    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        int n = 40;
        System.out.println("The 40th Fibonacci number per my recursive function is: " + fibRecur(n));
        long endTimeRecur = System.currentTimeMillis();
        long totalTimeRecur = endTimeRecur - startTime;
        System.out.println("The 40th Fibonacci number per my recursive function is: " + fibIter(n));
        long endTimeIter = System.currentTimeMillis();
        long totalTimeIter = endTimeIter - startTime;
        System.out.println("The time it took to find Fib(40) with my recursive method was: " + totalTimeRecur);
        System.out.println("The time it took to find Fib(40) with my iterative method was: " + totalTimeIter);
    }

    public static int fibRecur(int n) {
        if (n < 3) return 1;
        return fibRecur(n - 2) + fibRecur(n - 1);
    }

    public static int fibIter(int n) {
        int fib1 = 1;
        int fib2 = 1;
        int i, result = 0;
        for (i = 2; i < n; i++) {
            result = fib1 + fib2;
            fib1 = fib2;
            fib2 = result;
        }
        return result;
    }
}
That's how the time difference should be measured:
long time = System.currentTimeMillis();
methodA();
System.out.println(System.currentTimeMillis() - time);
time = System.currentTimeMillis();
methodB();
System.out.println(System.currentTimeMillis() - time);
In addition to Amir's answer:
One bug in your program is that you print
System.out.println("The 40th Fibonacci number per my recursive function is: " + fibIter(n));
I think what you want to say is:
System.out.println("The 40th Fibonacci number per my iterative function is: " + fibIter(n));
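Putting both corrections together (resetting the start time before the second measurement, and fixing the print label), the program might look like this sketch:

```java
public class FibTest {
    public static void main(String[] args) {
        int n = 40;

        long start = System.currentTimeMillis();
        System.out.println("The 40th Fibonacci number per my recursive function is: " + fibRecur(n));
        long totalTimeRecur = System.currentTimeMillis() - start;

        start = System.currentTimeMillis(); // reset before the second measurement
        System.out.println("The 40th Fibonacci number per my iterative function is: " + fibIter(n));
        long totalTimeIter = System.currentTimeMillis() - start;

        System.out.println("The recursive method took: " + totalTimeRecur + " ms");
        System.out.println("The iterative method took: " + totalTimeIter + " ms");
    }

    public static int fibRecur(int n) {
        if (n < 3) return 1;
        return fibRecur(n - 2) + fibRecur(n - 1);
    }

    public static int fibIter(int n) {
        int fib1 = 1, fib2 = 1, result = 1;
        for (int i = 2; i < n; i++) {
            result = fib1 + fib2;
            fib1 = fib2;
            fib2 = result;
        }
        return result;
    }
}
```

With the reset in place, the iterative time no longer includes the recursive run, and the difference between the two methods becomes visible.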

How to get more significant digits to print from a long value?

I am trying to print a long value held by elapsed; can someone help me with the format for how to do it?
This prints 0.0,
but I know it has more significant digits (maybe something like .0005324):
System.out.println("It took " + (double)elapsed + " milliseconds to complete SELECTION_SORT algorithm.");
long start = System.currentTimeMillis();
int sortedArr[] = selectionSort(arr1);
long elapsed = System.currentTimeMillis() - start;
System.out.println("\n///////////SELECTIONSort//////////////");
System.out.println("\nSelection sort implemented below prints a sorted list:");
print(sortedArr);
System.out.printf("It took %.7f ms....", elapsed);
private static int[] selectionSort(int[] arr) {
    int minIndex, tmp;
    int n = arr.length;
    for (int i = 0; i < n - 1; i++) {
        minIndex = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[minIndex])
                minIndex = j;
        if (minIndex != i) {
            tmp = arr[i];
            arr[i] = arr[minIndex];
            arr[minIndex] = tmp;
        }
    }
    return arr;
}
Changing the format won't give you more resolution, which is your real problem here: if you print 1 ms with 7 digits you just get 1.0000000 every time, which doesn't help you at all.
What you need is a high-resolution timer:
long start = System.nanoTime();
int sortedArr[] = selectionSort(arr1);
long elapsed = System.nanoTime() - start;
System.out.println("\n///////////SELECTIONSort//////////////");
System.out.println("\nSelection sort implemented below prints a sorted list:");
print(sortedArr);
System.out.printf("It took %.3f ms....", elapsed / 1e6);
However, if you do this only once you are fooling yourself, because Java compiles code dynamically and it gets faster the more you run it. It can get 100x faster or more, making the first number you see pretty useless.
Normally I suggest running loops many times and ignoring the first 10,000+ iterations. This will change the results so much that you will see the first digit was completely wrong. I suggest you try this:
for (int iter = 1; iter <= 100000; iter *= 10) {
    long start = System.nanoTime();
    int[] sortedArr = null;
    for (int i = 0; i < iter; i++)
        sortedArr = selectionSort(arr1);
    long elapsed = System.nanoTime() - start;
    System.out.println("\n///////////SELECTIONSort//////////////");
    System.out.println("\nSelection sort implemented below prints a sorted list:");
    print(sortedArr);
    System.out.printf("It took %.3f ms on average....", elapsed / 1e6 / iter);
}
You will see your results improve 10x, maybe even 100x, just by running the code for longer.
You can use print formatting. For a double or float, to get 7 places after the decimal place, you would do:
System.out.printf("It took %.7f ms....", elapsed);
EDIT:
You are actually using a long, not a double, so there are no digits after the decimal point to print: a long only takes on integer values. (Also note that %f throws an IllegalFormatConversionException for a long argument; cast to double first.)
A long is an integer value and does not have decimal places.
To get an approximation of the runtime, run the same sort in a loop, say 1000 times and then divide the measured time by 1000.
For example:
System.out.println("It took " + ((double)elapsed) / NUMBER_OF_ITERATONS);
Try this:
String.format("%.7f", (double) longvalue);
This formats your value as a floating-point number; the 7 says how many digits you want after the '.'. Note that the %f conversion requires a double or float, so a long value must be cast first.
