Comparing method speed performance using nanoTime in Java

I would like to compare the speed (if there is any difference) of the two readDataMethod() implementations illustrated below.
private void readDataMethod1(List<Integer> numbers) {
    final long startTime = System.nanoTime();
    for (int i = 0; i < numbers.size(); i++) {
        numbers.get(i);
    }
    final long endTime = System.nanoTime();
    System.out.println("method 1 : " + (endTime - startTime));
}

private void readDataMethod2(List<Integer> numbers) {
    final long startTime = System.nanoTime();
    int i = numbers.size();
    while (i-- > 0) {
        numbers.get(i);
    }
    final long endTime = System.nanoTime();
    System.out.println("method 2 : " + (endTime - startTime));
}
Most of the time, the results show that method 2 has the lower value.
Run    readDataMethod1    readDataMethod2
1           636331             468876
2           638256             479269
3           637485             515455
4           716786             420756
Does this test prove that readDataMethod2 is faster than the first one?

Does this test prove that readDataMethod2 is faster than the first one?
You are on the right track in that you're measuring comparative performance, rather than making assumptions.
However, there are lots of potential issues to be aware of when writing micro-benchmarks in Java. I would recommend that you read
How do I write a correct micro-benchmark in Java?

In the first one, you are calling numbers.size() on each iteration.
Try storing it in a local variable first, and check again.
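A minimal sketch of that change (the hoisted local is the only difference from the original method):

private void readDataMethod1(List<Integer> numbers) {
    final long startTime = System.nanoTime();
    final int size = numbers.size(); // hoisted: evaluated once instead of on every iteration
    for (int i = 0; i < size; i++) {
        numbers.get(i);
    }
    final long endTime = System.nanoTime();
    System.out.println("method 1 : " + (endTime - startTime));
}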

The second version runs faster because the first one calls numbers.size() on each iteration. Storing the size in a local variable would make the two methods perform almost the same.

Does this test prove that the readDataMethod2 is faster than the earlier one ?
As @aix says, you are on the right track. However, there are a couple of specific issues with your methodology:
It doesn't look like you are "warming up" the JVM. Therefore it is conceivable that your figures could be distorted by startup effects (JIT compilation) or that none of the code has been JIT compiled.
I'd also argue that your runs are doing too little work. 500,000 nanoseconds is 0.0005 seconds, and that's not much work. The risk is that "other things" external to your application could be introducing noise into the measurements. I'd have more confidence in runs that take tens of seconds.
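As a rough sketch of those two points (the list size and warm-up count below are arbitrary assumptions, not figures from the question):

// Hypothetical harness: a bigger workload plus untimed warm-up passes.
List<Integer> numbers = new ArrayList<Integer>();
for (int i = 0; i < 1000000; i++) {
    numbers.add(i); // a much larger workload than the original test
}
long sink = 0; // consume the reads so they are not dead code
for (int w = 0; w < 100; w++) { // warm-up passes give the JIT time to compile
    for (int i = 0; i < numbers.size(); i++) {
        sink += numbers.get(i);
    }
}
System.out.println(sink); // keep the result observable
// ...only now run the timed readDataMethod1/readDataMethod2 comparison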

Related

Does splitting logic into multiple methods in Java slow down execution? If so, should we avoid refactoring code?

http://s24.postimg.org/9y073weid/refactor_vs_non_refactor.png
Well, here are the execution times in nanoseconds for the refactored and non-refactored code for a simple addition operation; 1 to 5 are consecutive runs of the code.
My intent was just to find out whether splitting logic into multiple methods slows execution, and the result suggests that, yes, considerable time goes into just pushing the methods onto the stack.
I invite people who have done some research on this before, or who want to investigate this area, to correct me if I am doing something wrong and to draw some conclusive results out of this.
In my opinion, code refactoring does help make code more structured and understandable, but in time-critical systems such as real-time game engines I would prefer not to refactor.
The following is the simple code I used:
package com.sim;

public class NonThreadedMethodCallBenchMark {
    public static void main(String[] args) {
        NonThreadedMethodCallBenchMark testObject = new NonThreadedMethodCallBenchMark();
        System.out.println("************Starting***************");
        long startTime = System.nanoTime();
        for (int i = 0; i < 900000; i++) {
            //testObject.method(1, 2);  // uncomment this line and comment the line below to test refactored time
            //testObject.method5(1, 2); // uncomment this line and comment the line above to test non-refactored time
        }
        long endTime = System.nanoTime();
        System.out.println("Total :" + (endTime - startTime) + " nanoseconds");
    }

    public int method(int a, int b) {
        return method1(a, b);
    }

    public int method1(int a, int b) {
        return method2(a, b);
    }

    public int method2(int a, int b) {
        return method3(a, b);
    }

    public int method3(int a, int b) {
        return method4(a, b);
    }

    public int method4(int a, int b) {
        return method5(a, b);
    }

    public int method5(int a, int b) {
        return a + b;
    }

    public void run() {
        int x = method(1, 2);
    }
}
You should consider that code which doesn't do anything useful can be optimised away. If you are not careful, you can end up timing how long it takes to detect that the code does nothing useful, rather than how long it takes to run the code. If you use multiple methods, it can take longer to detect the useless code and thus give different results. I would always look at the steady-state performance after the code has warmed up.
For the most expensive parts of your code, small methods will be inlined, so they won't add any performance cost. What can happen is:
- smaller methods can be optimised better, as complex methods can defeat optimisation tricks;
- smaller methods can be eliminated entirely once they are inlined.
If you don't ever warm up the code, it is likely to be slower. However, if the code is called rarely, it is unlikely to matter (except in low-latency systems, in which case I suggest you warm up your code on startup).
If you run
System.out.println("************Starting***************");
for (int j = 0; j < 10; j++) {
long startTime = System.nanoTime();
for (int i = 0; i < 1000000; i++) {
testObject.method(1, 2);
//testObject.method5(1,2); // uncomment this line and comment the above line to test non refactor time
}
long endTime = System.nanoTime();
System.out.println("Total :" + (endTime - startTime) + " nanoseconds");
}
prints
************Starting***************
Total :8644835 nanoseconds
Total :3363047 nanoseconds
Total :52 nanoseconds
Total :30 nanoseconds
Total :30 nanoseconds
Note: the 30 nanoseconds is the time it takes to perform a System.nanoTime() call. The inner loop and the method calls have been eliminated.
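You can estimate that call overhead directly with back-to-back calls (a trivial sketch):

long t0 = System.nanoTime();
long t1 = System.nanoTime();
// The difference approximates the cost of a single nanoTime() call.
System.out.println("nanoTime overhead ~ " + (t1 - t0) + " ns");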
Yes, extra method calls cost extra time (except when the compiler/JIT actually inlines them, which I think some of them will do sometimes, but under circumstances that you will find hard to control).
You shouldn't worry about this except for the most expensive part of your code, because you won't be able to see the difference in most cases. In those cases where performance doesn't matter, you should refactor to maximize code clarity.
I suggest you refactor at will, and occasionally measure performance using a profiler to find the expensive parts. Only when the profiler shows that some specific code is using a significant fraction of the time should you worry about function calls (or other speed-versus-clarity tradeoffs) in that specific code. You will discover that the performance bottlenecks are often in places you wouldn't have guessed.

Strange code timing behavior in Java

In the following code:
long startingTime = System.nanoTime();
int max = (int) Math.pow(2, 19);
for (int i = 0; i < max; ) {
    i++;
}
long timePass = System.nanoTime() - startingTime;
System.out.println("Time pass " + timePass / 1000000F);
I am trying to calculate how much time it takes to perform simple actions on my machine.
All the calculations up to the power of 19 increase the time it takes to run this code, but when I went above 19 (up to the maximum exponent for int, 31) I was amazed to discover that it had no effect on the running time.
It always shows 5 milliseconds on my machine!
How can this be?
You have just witnessed HotSpot optimizing your entire loop to oblivion. It's smart. You need to do some real action inside the loop. I recommend introducing an int accumulator var and doing some bitwise operations on it, and finally printing the result to ensure it's needed after the loop.
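A minimal sketch of that idea, based on the original loop (the xor and the final print exist only to keep the loop's work observable):

long startingTime = System.nanoTime();
int max = (int) Math.pow(2, 19);
int acc = 0;                  // accumulator the JIT cannot discard
for (int i = 0; i < max; i++) {
    acc ^= i;                 // cheap bitwise work that depends on i
}
long timePass = System.nanoTime() - startingTime;
System.out.println("Time pass " + timePass / 1000000F + " acc=" + acc);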
On the HotSpot JVM, -XX:CompileThreshold=10000 by default. This means a loop which iterates 10K times can trigger the whole method to be optimised. In your case, you are timing how long it takes to detect and compile (in the background) your method.
Use another System.nanoTime() call inside the loop; no one can optimize that away:

long dummy = 0; // declared here so the loop body has a visible effect
for (int i = 0; i < max; ) {
    i++;
    dummy += System.nanoTime();
}

Don't forget to do:

System.out.println(dummy);

after the loop; that ensures the work is not optimized away.

Wrong time of method execution

I want to test the time of adding and getting items in a simple (raw) and a generic HashMap:
public void TestHashGeneric() {
    Map hashsimple = new HashMap();
    long startTime = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        hashsimple.put("key" + i, "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    }
    for (int i = 0; i < 100000; i++) {
        String ret = (String) hashsimple.get("key" + i);
    }
    long endTime = System.currentTimeMillis();
    System.out.println("Hash Time " + (endTime - startTime) + " millisec");

    Map<String, String> hm = new HashMap<String, String>();
    startTime = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        hm.put("key" + i, "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    }
    for (int i = 0; i < 100000; i++) {
        String ret = hm.get("key" + i);
    }
    endTime = System.currentTimeMillis();
    System.out.println("Hash generic Time " + (endTime - startTime) + " millisec");
}
The problem is that I get different times if I swap the two HashMap code sections!
If I put the generic loops (with their time prints, of course) below the simple ones, I get a better time for the generic version, and if I put the simple loops below the generic ones, I get a better time for the simple version.
The same happens if I use separate methods for this.
The JIT will compile and optimise your program while it is running, so the 2nd run will always be faster.
You should make the following modifications:
- Run both tests untimed first, then re-run them timed, so that you don't get affected by the JIT.
- Use System.nanoTime(), as this is more accurate for timing (you should never get a diff of 0).
- Also test against some empty methods, as you are also timing the string concatenation operation in each loop.
Also note that in Java generic types are erased, so there should be no runtime difference at all.
The Java runtime is pretty sophisticated: it does some learning/optimisation while it's running. You might be able to get the test you want by "warming up" the JVM first. Try calling TestHashGeneric() twice and see what the second set of results gives you.
You've also got twice as much stuff in memory on the second run. There are all sorts of variables that can affect this test.
This is not the correct way to perform micro-benchmarks, as it is very involved (Why?). I suggest you use the Caliper framework for this.
When you have a single loop which reaches the compile threshold (about 10K iterations), the whole method is compiled. This can make either the first loop appear faster (as it is optimised correctly, while the second loop has no counter information yet) or the second loop appear faster (as it is compiled before it even starts).
The simplest way to fix this is to place each test in its own method; the tests will then be compiled and optimised independently. (The order can still matter, but it is less important.) I would still suggest running the test a number of times to see how the results vary; a sketch of this restructuring follows.
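Here is a hedged sketch of that suggestion applied to the HashMap test above; the class name, string value, and run counts are assumptions for illustration, not the original poster's code:

import java.util.HashMap;
import java.util.Map;

public class HashTimingTest {

    // Each test lives in its own method, so HotSpot compiles it independently.
    static long timeMapOps(Map<String, String> map) {
        long start = System.nanoTime(); // nanoTime: finer resolution than currentTimeMillis
        for (int i = 0; i < 100000; i++) {
            map.put("key" + i, "xxxxxxxxxx");
        }
        for (int i = 0; i < 100000; i++) {
            map.get("key" + i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Untimed warm-up passes, so the JIT settles before anything is recorded.
        timeMapOps(new HashMap<String, String>());
        timeMapOps(new HashMap<String, String>());
        // Repeated timed passes; with type erasure, raw and generic maps run identically.
        for (int run = 1; run <= 5; run++) {
            System.out.println("run " + run + ": "
                    + timeMapOps(new HashMap<String, String>()) + " ns");
        }
    }
}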

Comparing logically similar "for loops"

I came across a simple Java program with two for loops. The question was whether these for loops would take the same time to execute, or whether the first would execute faster than the second.
Below is the program:
public static void main(String[] args) {
    Long t1 = System.currentTimeMillis();
    for (int i = 999; i > 0; i--) {
        System.out.println(i);
    }
    t1 = System.currentTimeMillis() - t1;

    Long t2 = System.currentTimeMillis();
    for (int j = 0; j < 999; j++) {
        System.out.println(j);
    }
    t2 = System.currentTimeMillis() - t2;

    System.out.println("for loop1 time : " + t1);
    System.out.println("for loop2 time : " + t2);
}
After executing this, I found that the first for loop takes more time than the second. But after swapping their locations the result was the same: whichever for loop is written first always takes more time than the other. I was quite surprised by the result. Could anybody please explain how the above program behaves this way?
The time taken by either loop will be dominated by I/O (i.e. printing to screen), which is highly variable. I don't think you can learn much from your example.
The first loop will allocate 1000 Strings in memory, while the second loop, regardless of whether it works forwards or backwards, can use the already pre-allocated objects.
That said, since both loops go through System.out.println, any allocation should be negligible in comparison.
Long (and the other primitive wrappers) has a cache (look at the LongCache class) for the values -128...127. It is populated during the first loop's run.
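You can see that cache directly; this small sketch is illustrative, not part of the original answer:

// Boxing values in [-128, 127] reuses instances from the wrapper cache.
Long a = Long.valueOf(127);
Long b = Long.valueOf(127);
System.out.println(a == b);   // true: same cached instance

Long c = Long.valueOf(128);
Long d = Long.valueOf(128);
System.out.println(c == d);   // false on standard JDKs: outside the guaranteed cache range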
I think if you are going to do a real benchmark, you should run the loops in different threads, use a higher value (not just 1000), avoid I/O (no printing output during the timed run), and run them one by one rather than back to back in the same method.
In my experience, executing the same code a few times can take different execution times.
And in my opinion, the two tests won't differ.

How to disable compiler and JVM optimizations?

I have this code that is testing Calendar.getInstance().getTimeInMillis() vs System.currentTimeMillis():
long before = getTimeInMilli();
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    long before1 = getTimeInMilli();
    doSomeReallyHardWork();
    long after1 = getTimeInMilli();
}
long after = getTimeInMilli();
System.out.println(getClass().getSimpleName() + " total is " + (after - before));
I want to make sure no JVM or compiler optimization happens, so the test will be valid and will actually show the difference.
How to be sure?
EDIT: I changed the code example so it will be more clear. What I am checking here is how much time it takes to call getTimeInMilli() in different implementations - Calendar vs System.
I think you need to disable the JIT. Add the following option to your run command:
-Djava.compiler=NONE
You want optimization to happen, because it will in real life - the test wouldn't be valid if the JVM didn't optimize in the same way that it would in the real situation you're interested in.
However, if you want to make sure that the JVM doesn't remove calls that it could potentially consider no-ops otherwise, one option is to use the result - so if you're calling System.currentTimeMillis() repeatedly, you might sum all the return values and then display the sum at the end.
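For instance, a minimal sketch of that sum-the-results idea (the iteration count is an arbitrary assumption):

long sum = 0;
for (int i = 0; i < 1000000; i++) {
    sum += System.currentTimeMillis();  // use every return value
}
System.out.println("checksum: " + sum); // printing the sum keeps the loop live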
Note that you may still have some bias though - for example, there may be some optimization if the JVM can cheaply determine that only a tiny amount of time has passed since the last call to System.currentTimeMillis(), so it can use a cached value. I'm not saying that's actually the case here, but it's the kind of thing you need to think about. Ultimately, benchmarks can only really test the loads you give them.
One other thing to consider: assuming you want to model a real world situation where the code is run a lot, you should run the code a lot before taking any timing - because the Hotspot JVM will optimize progressively harder, and presumably you care about the heavily-optimized version and don't want to measure the time for JITting and the "slow" versions of the code.
As Stephen mentioned, you should almost certainly take the timing outside the loop... and don't forget to actually use the results...
What you are doing looks like benchmarking; you can read Robust Java benchmarking to get some good background on how to do it right. In a few words, you don't need to turn optimization off, because that is not what happens on a production server. Instead, you need an estimate as close as possible to the 'real' performance. Before measuring, you need to 'warm up' your code; it looks like this:
// warm up
for (int j = 0; j < 1000; j++) {
    for (int i = 0; i < TIMES_TO_ITERATE; i++) {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}

// measure time
long before = getTimeInMilli();
for (int j = 0; j < 1000; j++) {
    for (int i = 0; i < TIMES_TO_ITERATE; i++) {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}
long after = getTimeInMilli();
System.out.println("What to expect? " + (after - before) / 1000); // average time
When we measure the performance of our code, we use this approach; it gives us a more or less realistic estimate of the time our code needs. It is even better to measure code in separate methods:
public void doIt() {
    for (int i = 0; i < TIMES_TO_ITERATE; i++) {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}

// warm up
for (int j = 0; j < 1000; j++) {
    doIt();
}

// measure time
long before = getTimeInMilli();
for (int j = 0; j < 1000; j++) {
    doIt();
}
long after = getTimeInMilli();
System.out.println("What to expect? " + (after - before) / 1000); // average time
The second approach is more precise, but it also depends on the VM. E.g. HotSpot can perform "on-stack replacement": if some part of a method is executed very often, it will be optimized by the VM, and the old version of the code will be exchanged for the optimized one while the method is still executing. Of course this takes extra actions on the VM's side. JRockit does not do this: the optimized version of the code will be used only when the method is executed again, so no 'runtime' optimization. In my first code sample, that means the old code would be executed the whole time (except for doSomeReallyHardWork's internals; they do not belong to this method, so optimization will work well for them).
UPDATED: code in question was edited while I was answering ;)
Sorry, but what you are trying to do makes little sense.
If you turn off JIT compilation, then you are only going to measure how long it takes to call that method with JIT compilation turned off. This is not useful information ... because it tells you little if anything about what will happen when JIT compilation is turned on [1].
The times between JIT on and off can be different by a huge factor. You are unlikely to want to run anything in production with JIT turned off.
A better approach would be to do this:
long before1 = getTimeInMilli();
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    doSomeReallyHardWork();
}
long after1 = getTimeInMilli();
... and / or use the nanosecond clock.
If you are trying to measure the time taken to call the two versions of getTimeInMillis(), then I don't understand the point of your call to doSomeReallyHardWork(). A more sensible benchmark would be this:
public long test() {
    long before1 = getTimeInMilli();
    long sum = 0;
    for (int i = 0; i < TIMES_TO_ITERATE; i++) {
        sum += getTimeInMilli();
    }
    long after1 = getTimeInMilli();
    System.out.println("Took " + (after1 - before1) + " milliseconds");
    return sum;
}
... and call that a number of times, until the times printed stabilize.
Either way, my main point still stands: turning off JIT compilation and/or optimization would mean that you were measuring something that is not useful to know, and not what you are really trying to find out. (Unless, that is, you are intending to run your application in production with JIT turned off ... which I find hard to believe.)
[1] I note that someone has commented that turning off JIT compilation allowed them to easily demonstrate the difference between O(1), O(N) and O(N^2) algorithms for a class. But I would counter that it is better to learn how to write a correct micro-benchmark. And for serious purposes, you need to learn how to derive the complexity of the algorithms ... mathematically. Even with a perfect benchmark, you can get the wrong answer by trying to "deduce" complexity from performance measurements. (Take the behavior of HashMap, for example.)
