How to disable compiler and JVM optimizations? - java

I have this code that tests Calendar.getInstance().getTimeInMillis() vs System.currentTimeMillis():
long before = getTimeInMilli();
for (int i = 0; i < TIMES_TO_ITERATE; i++)
{
    long before1 = getTimeInMilli();
    doSomeReallyHardWork();
    long after1 = getTimeInMilli();
}
long after = getTimeInMilli();
System.out.println(getClass().getSimpleName() + " total is " + (after - before));
I want to make sure no JVM or compiler optimization happens, so the test will be valid and will actually show the difference.
How can I be sure of that?
EDIT: I changed the code example to make it clearer. What I am checking here is how much time it takes to call getTimeInMilli() in the different implementations - Calendar vs System.

I think you need to disable the JIT. Add the following option to your run command:
-Djava.compiler=NONE

You want optimization to happen, because it will in real life - the test wouldn't be valid if the JVM didn't optimize in the same way that it would in the real situation you're interested in.
However, if you want to make sure that the JVM doesn't remove calls that it could potentially consider no-ops otherwise, one option is to use the result - so if you're calling System.currentTimeMillis() repeatedly, you might sum all the return values and then display the sum at the end.
Note that you may still have some bias though - for example, there may be some optimization if the JVM can cheaply determine that only a tiny amount of time has passed since the last call to System.currentTimeMillis(), so it can use a cached value. I'm not saying that's actually the case here, but it's the kind of thing you need to think about. Ultimately, benchmarks can only really test the loads you give them.
One other thing to consider: assuming you want to model a real world situation where the code is run a lot, you should run the code a lot before taking any timing - because the Hotspot JVM will optimize progressively harder, and presumably you care about the heavily-optimized version and don't want to measure the time for JITting and the "slow" versions of the code.
As Stephen mentioned, you should almost certainly take the timing outside the loop... and don't forget to actually use the results...
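For illustration, a minimal sketch of the "use the result" idea (it assumes a TIMES_TO_ITERATE constant like the one in the question):
long sum = 0;
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    sum += System.currentTimeMillis();
}
// Printing the sum keeps the calls observable, so the JVM cannot treat the loop as dead code.
System.out.println("sum = " + sum);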

What you are doing looks like benchmarking; you can read Robust Java benchmarking to get some good background on how to do it right. In short, you don't need to turn optimization off, because that is not what happens on a production server; instead you need to get as close as possible to a 'real' estimate of the time / performance. Before measuring you need to 'warm up' your code; it looks like this:
// warm up
for (int j = 0; j < 1000; j++) {
    for (int i = 0; i < TIMES_TO_ITERATE; i++)
    {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}
// measure time
long before = getTimeInMilli();
for (int j = 0; j < 1000; j++) {
    for (int i = 0; i < TIMES_TO_ITERATE; i++)
    {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}
long after = getTimeInMilli();
System.out.println("What to expect? " + (after - before) / 1000); // average time
When we measure the performance of our code we use this approach; it gives us a more or less realistic estimate of the time our code needs. It is even better to measure the code in a separate method:
public void doIt() {
    for (int i = 0; i < TIMES_TO_ITERATE; i++)
    {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}

// warm up
for (int j = 0; j < 1000; j++) {
    doIt();
}
// measure time
long before = getTimeInMilli();
for (int j = 0; j < 1000; j++) {
    doIt();
}
long after = getTimeInMilli();
System.out.println("What to expect? " + (after - before) / 1000); // average time
The second approach is more precise, but it also depends on the VM. E.g. HotSpot can perform "on-stack replacement", which means that if some part of a method is executed very often, it will be optimized by the VM and the old version of the code will be exchanged for the optimized one while the method is still executing. Of course this takes extra work on the VM's side. JRockit does not do this; the optimized version of the code will only be used when the method is entered again (so no 'runtime' replacement... I mean that in my first code sample the old code would be executed the whole time... except for the doSomeReallyHardWork internals - they do not belong to this method, so optimization works well for them).
UPDATED: the code in the question was edited while I was answering ;)

Sorry, but what you are trying to do makes little sense.
If you turn off JIT compilation, then you are only going to measure how long it takes to call that method with JIT compilation turned off. This is not useful information ... because it tells you little if anything about what will happen when JIT compilation is turned on [1].
The times with JIT on and with JIT off can differ by a huge factor. You are unlikely to want to run anything in production with JIT turned off.
A better approach would be to do this:
long before1 = getTimeInMilli();
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    doSomeReallyHardWork();
}
long after1 = getTimeInMilli();
... and / or use the nanosecond clock.
If you are trying to measure the time taken to call the two versions of getTimeInMillis(), then I don't understand the point of your call to doSomeReallyHardWork(). A more sensible benchmark would be this:
public long test() {
    long before1 = getTimeInMilli();
    long sum = 0;
    for (int i = 0; i < TIMES_TO_ITERATE; i++) {
        sum += getTimeInMilli();
    }
    long after1 = getTimeInMilli();
    System.out.println("Took " + (after1 - before1) + " milliseconds");
    return sum;
}
... and call that a number of times, until the times printed stabilize.
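For example, a self-contained driver might look like this (a sketch only; it uses System.currentTimeMillis() in place of the question's getTimeInMilli(), and the class name and iteration count are illustrative):
public class GetTimeBenchmark {
    private static final int TIMES_TO_ITERATE = 1000000; // illustrative value

    public long test() {
        long before1 = System.currentTimeMillis();
        long sum = 0;
        for (int i = 0; i < TIMES_TO_ITERATE; i++) {
            sum += System.currentTimeMillis();
        }
        long after1 = System.currentTimeMillis();
        System.out.println("Took " + (after1 - before1) + " milliseconds");
        return sum;
    }

    public static void main(String[] args) {
        GetTimeBenchmark bench = new GetTimeBenchmark();
        long blackhole = 0;
        for (int run = 0; run < 10; run++) { // repeat until the printed times stabilize
            blackhole += bench.test();
        }
        System.out.println(blackhole); // keep the summed results live so the loops are not eliminated
    }
}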
Either way, my main point still stands: turning off JIT compilation and / or optimization would mean that you were measuring something that is not useful to know, and not what you are really trying to find out. (Unless, that is, you are intending to run your application in production with JIT turned off ... which I find hard to believe ...)
[1] I note that someone has commented that turning off JIT compilation allowed them to easily demonstrate the difference between O(1), O(N) and O(N^2) algorithms for a class. But I would counter that it is better to learn how to write a correct micro-benchmark. And for serious purposes, you need to learn how to derive the complexity of the algorithms ... mathematically. Even with a perfect benchmark, you can get the wrong answer by trying to "deduce" complexity from performance measurements. (Take the behavior of HashMap for example.)

Related

Time saved by saving string variables vs hardcoding

I have tried searching the web/stackoverflow for this but cannot find an answer. I am not sure if it is because it is so obvious or because the difference is so negligible.
My basic question is... which is better performance/by how much?
String attacker = pairs.getKey();
mobInstanceMap.replace(attacker, 0);
vs:
mobInstanceMap.replace(pairs.getKey(), 0);
I like the first for readability, but I have been developing an MMORPG for which I need to be quite careful with performance. I tried to do the experiment below, but figured I would ask the gurus since it was hard for me to extrapolate. I flipped which one runs, as I found out that when running both (one after the other) the second ran much better (probably due to JVM optimization).
Sorry if this has been asked and answered... or is obvious, but I searched and did not find anything (maybe I don't know what to search for). I am not only wondering about 'saving off' strings, but about other variables also.
public static void main(String[] args){
    long hardcodeTotal = 0;
    long saveTotal = 0;
    // for (int i = 0; i < 10000; i++){
    //     saveTotal += checkRunTimeSaveString();
    // }
    for (int i = 0; i < 10000; i++){
        hardcodeTotal += checkRunTimeHardcodeString();
    }
    System.out.println("hardcodeTotal: " + hardcodeTotal/100.0 + "\t" + "saveTotal: " + saveTotal/100.0);
}

private static long checkRunTimeSaveString(){
    long startTime = System.nanoTime();
    Iterator<Entry<String, Integer>> it = mobInstanceMap.entrySet().iterator();
    while (it.hasNext()) { // to update the AttackingMap when entity (attackee) moves
        Entry<String, Integer> pairs = it.next();
        String attacker = pairs.getKey();
        mobInstanceMap.replace(attacker, 0);
    }
    return (System.nanoTime() - startTime);
}

private static long checkRunTimeHardcodeString(){
    long startTime = System.nanoTime();
    Iterator<Entry<String, Integer>> it = mobInstanceMap.entrySet().iterator();
    while (it.hasNext()) { // to update the AttackingMap when entity (attackee) moves
        Entry<String, Integer> pairs = it.next();
        mobInstanceMap.replace(pairs.getKey(), 0);
    }
    return (System.nanoTime() - startTime);
}
In all likelihood, the intermediate assignment will be optimized away by the JVM, and they will have exactly the same performance.
In either case, the possible performance hits for these things are hardly ever noticeable. You shouldn't worry about performance except in extreme circumstances. Always opt for readability.
It should also be added that the type of micro-benchmarking you have in your code above is incredibly hard to get reliable measurements from. Too many factors outside your control (internal JVM optimizations, JVM warm-up, etc.) are involved to get an accurate measurement of the exact thing you want to test.
You could profile both implementations with a tool like JVisualVM to see the exact difference. In case you want to test with high volume, you could use a tool like ContiPerf (http://databene.org/contiperf). And then share the results with all of us :)
I agree with Keppil that the performance is probably exactly the same. If there is a slight difference, and if you care about such a small improvement in performance, then you probably shouldn't be using Java in the first place.

Why is there such a difference in time between "long var = System.nanoTime()" and plain "System.nanoTime()"?

I made a benchmark like this:
for (int i = 0; i < 1000 * 1000; ++i) {
    long var = System.nanoTime();
}
It takes 41 ms on my computer with JDK 6.0.
The following code only costs 1 ms!!!
for (int i = 0; i < 1000 * 1000; ++i) {
    System.nanoTime();
}
I thought maybe the time was spent on the long var assignment, so I made a test like this:
for (int i = 0; i < 1000 * 1000; ++i) {
    long var = i;
}
It only costs 1 ms!!!
So why is the first code block so slow?
I'm Chinese - sorry for my poor English!
It really depends on how you run your benchmark. You most likely get <1ms runs because the JVM is not really running your code: it has determined that the code is not used and skips it:
for (int i = 0; i < 1000 * 1000; ++i) {
    long var = i;
}
is equivalent to
//for (int i = 0; i < 1000 * 1000; ++i) {
//    long var = i;
//}
and the JVM is probably running the second version.
You should read how to write a micro-benchmark in Java or use a benchmarking library such as Caliper.
It takes time for the JIT to detect your code doesn't do anything useful. The more complicated the code, the longer it takes (or it might not detect it at all)
In the second and third cases, it can replace the code with nothing (I suspect you can make it 100x longer and it won't run any longer)
Another possibility is you are running all three tests in the same method. When the first loop iterates more than 10,000 times, the whole method is compiled in the background such that when the second and third loops run they have been removed.
A simple way to test this is to change the order of the loops, or better, to place each loop in its own method to stop this, as sketched below.
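A sketch of that "own method" idea applied to the loops from the question (class and method names are just illustrative):
public class NanoTimeLoops {

    private static long loopWithAssignment() {
        long last = 0;
        for (int i = 0; i < 1000 * 1000; ++i) {
            last = System.nanoTime();
        }
        return last; // returning the value keeps this loop from looking like dead code
    }

    private static long loopWithoutAssignment() {
        for (int i = 0; i < 1000 * 1000; ++i) {
            System.nanoTime(); // result discarded; the JIT may still eliminate this
        }
        return 0;
    }

    public static void main(String[] args) {
        long t1 = System.nanoTime();
        long r1 = loopWithAssignment();
        long t2 = System.nanoTime();
        long r2 = loopWithoutAssignment();
        long t3 = System.nanoTime();
        System.out.println("assignment loop:    " + (t2 - t1) / 1000000 + " ms");
        System.out.println("no-assignment loop: " + (t3 - t2) / 1000000 + " ms");
        System.out.println(r1 + r2); // consume the results
    }
}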

Strange code timing behavior in Java

In the following code:
long startingTime = System.nanoTime();
int max = (int) Math.pow(2, 19);
for (int i = 0; i < max; ) {
    i++;
}
long timePass = System.nanoTime() - startingTime;
System.out.println("Time pass " + timePass / 1000000F);
I am trying to calculate how much time it take to perform simple actions on my machine.
All the powers up to 19 increase the time it takes to run this code, but when I went above 19 (up to the max int value at 31) I was amazed to discover that it had no effect on the time it takes.
It always shows 5 milliseconds on my machine!!!
How can this be?
You have just witnessed HotSpot optimizing your entire loop to oblivion. It's smart. You need to do some real action inside the loop. I recommend introducing an int accumulator var and doing some bitwise operations on it, and finally printing the result to ensure it's needed after the loop.
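For instance, a minimal sketch of that accumulator approach (the variable names and the particular bitwise operation are only illustrative):
long startingTime = System.nanoTime();
int max = (int) Math.pow(2, 19);
int acc = 0;
for (int i = 0; i < max; i++) {
    acc ^= i; // do some real work whose result is used below
}
long timePass = System.nanoTime() - startingTime;
System.out.println("Time pass " + timePass / 1000000F);
System.out.println(acc); // printing the accumulator keeps the loop alive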
On the HotSpot JVM, -XX:CompileThreshold=10000 by default. This means a loop which iterates 10K times can trigger the whole method to be optimised. In your case you are timing how long it take to detect and compile (in the background) your method.
Use another System.nanoTime() call in the loop; no one can optimize this away:
long dummy = 0;
for (int i = 0; i < max; ) {
    i++;
    dummy += System.nanoTime();
}
Don't forget to do:
System.out.println(dummy);
after the loop; printing it ensures the loop is not optimized away.

Comparing methods speed performance using nanotime in Java

I would like to compare the speed performance (if there is any difference) of the two readDataMethod() variants I illustrate below.
private void readDataMethod1(List<Integer> numbers) {
    final long startTime = System.nanoTime();
    for (int i = 0; i < numbers.size(); i++) {
        numbers.get(i);
    }
    final long endTime = System.nanoTime();
    System.out.println("method 1 : " + (endTime - startTime));
}

private void readDataMethod2(List<Integer> numbers) {
    final long startTime = System.nanoTime();
    int i = numbers.size();
    while (i-- > 0) {
        numbers.get(i);
    }
    final long endTime = System.nanoTime();
    System.out.println("method 2 : " + (endTime - startTime));
}
Most of the time the result I get shows that method 2 has a "lower" value.
Run    readDataMethod1    readDataMethod2
1      636331             468876
2      638256             479269
3      637485             515455
4      716786             420756
Does this test prove that the readDataMethod2 is faster than the earlier one ?
Does this test prove that the readDataMethod2 is faster than the earlier one ?
You are on the right track in that you're measuring comparative performance, rather than making assumptions.
However, there are lots of potential issues to be aware of when writing micro-benchmarks in Java. I would recommend that you read
How do I write a correct micro-benchmark in Java?
In the first one, you are calling numbers.size() for each iteration.
Try storing it in a variable, and check again.
The reason the second version runs faster is that the first one calls numbers.size() on every iteration. Storing the size in a variable first would make the first method perform almost the same as the second.
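For illustration, a sketch of readDataMethod1 with the size() call hoisted out of the loop (the method name is just for this example):
private void readDataMethod1Hoisted(List<Integer> numbers) {
    final long startTime = System.nanoTime();
    final int size = numbers.size(); // call size() once instead of on every iteration
    for (int i = 0; i < size; i++) {
        numbers.get(i);
    }
    final long endTime = System.nanoTime();
    System.out.println("method 1 (hoisted) : " + (endTime - startTime));
}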
Does this test prove that the readDataMethod2 is faster than the earlier one ?
As #aix says, you are on the right track. However, there are a couple of specific issues with your methodology:
It doesn't look like you are "warming up" the JVM. Therefore it is conceivable that your figures could be distorted by startup effects (JIT compilation) or that none of the code has been JIT compiled.
I'd also argue that your runs are doing too little work. 500000 nanoseconds is 0.0005 seconds, and that's not much work. The risk is that "other things" external to your application could be introducing noise into the measurements. I'd have more confidence in runs that take tens of seconds.

Wrong time of method execution

I want to test the time for adding and getting items in a raw and a generic HashMap:
public void TestHashGeneric(){
    Map hashsimple = new HashMap();

    long startTime = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        hashsimple.put("key" + i, "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    }
    for (int i = 0; i < 100000; i++) {
        String ret = (String) hashsimple.get("key" + i);
    }
    long endTime = System.currentTimeMillis();
    System.out.println("Hash Time " + (endTime - startTime) + " millisec");

    Map<String,String> hm = new HashMap<String,String>();
    startTime = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        hm.put("key" + i, "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
    }
    for (int i = 0; i < 100000; i++) {
        String ret = hm.get("key" + i);
    }
    endTime = System.currentTimeMillis();
    System.out.println("Hash generic Time " + (endTime - startTime) + " millisec");
}
The problem is that I get different times if I swap the two HashMap code sections!
If I put the generic loops (with the time print, of course) below the simple ones, I get a better time for the generic version, and if I put the simple ones below the generic ones, I get a better time for the simple version!
The same happens if I use separate methods for this.
The JIT will compile and optimise your program while it is running, so the 2nd run will always be faster.
You should make the following modifications:
Run both tests untimed first, then re-run them timed so that you don't get affected by the JIT.
You should use System.nanoTime() as this is more accurate for timing (you should never get a diff of 0).
You should also test against some empty methods, as you are also timing the string concatenation operation in each loop.
Also note that in Java generic types are erased, so there should be no runtime difference at all.
The Java Runtime is pretty sophisticated - it does some learning/optimisation while it's running. You might be able to get the test you want by "warming up" the JVM first. Try calling TestHashGeneric() twice and see what the second set of results gives you.
You've also got twice as much stuff in memory on the second run. There are all sorts of variables that can affect this test.
This is not the correct way to perform micro-benchmarks, as it is very involved (Why?). I suggest you use the Caliper framework for this.
When you have a single loop which reaches the compile threshold (about 10K iterations), the whole method is compiled. This can make either the first loop appear faster (as it is optimised while the second loop has no counter information yet) or the second loop appear faster (as it is compiled before it even starts).
The simplest way to fix this is to place each test in its own method, as sketched below; then they will be compiled and optimised independently. (The order can still matter, but it is less important.) I would still suggest running the test a number of times to see how the results vary.
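Along those lines, a sketch of that restructuring applied to the question's test (class name, method names and warm-up count are illustrative); each test lives in its own method and both are run untimed a few times before the timed runs:
import java.util.HashMap;
import java.util.Map;

public class HashMapTiming {

    static long timeRawMap() {
        Map map = new HashMap();
        long start = System.nanoTime();
        for (int i = 0; i < 100000; i++) {
            map.put("key" + i, "xxxxxxxxxx");
        }
        for (int i = 0; i < 100000; i++) {
            String ret = (String) map.get("key" + i);
        }
        return System.nanoTime() - start;
    }

    static long timeGenericMap() {
        Map<String, String> map = new HashMap<String, String>();
        long start = System.nanoTime();
        for (int i = 0; i < 100000; i++) {
            map.put("key" + i, "xxxxxxxxxx");
        }
        for (int i = 0; i < 100000; i++) {
            String ret = map.get("key" + i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        for (int warmup = 0; warmup < 5; warmup++) { // untimed warm-up runs
            timeRawMap();
            timeGenericMap();
        }
        System.out.println("Raw map:     " + timeRawMap() + " ns");
        System.out.println("Generic map: " + timeGenericMap() + " ns");
    }
}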
