I am trying to find a clean way to find elapsed time. In other words, I want to find the time it takes for a given section of code to get executed.
I know the following two approaches:
1.
long start = System.nanoTime();
// Code to measure
long elapsedTime = System.nanoTime() - start;
2.
long start = System.currentTimeMillis();
// Code to measure
long elapsedTime = System.currentTimeMillis() - start;
But both approaches are dirty, in the sense that I have to litter the original code with benchmarking code and, once benchmarking is over, remember to remove it.
I am looking for a cleaner approach, supported by Eclipse or any other IDE, where I mark the code I want to benchmark (just as we set breakpoints) and the IDE shows me the elapsed time whenever that portion of code is reached.
Is there any such feature available in Eclipse or any other IDE?
I recommend Perf4J: perf4j.codehaus.org
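If I remember the Perf4J API correctly, basic usage looks roughly like this (a sketch, not an exact quote of its docs):

import org.perf4j.LoggingStopWatch;
import org.perf4j.StopWatch;

// The stop() call logs the elapsed time under the given tag,
// so the timing concern stays in one place.
StopWatch stopWatch = new LoggingStopWatch();
// Code to measure
stopWatch.stop("myCodeBlock"); // logs something like "start[...] time[42] tag[myCodeBlock]"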
Set a field at the top of your class
private static final boolean DEBUG_THIS = true;
Then, when you have lines you want to time, wrap them like this:
if (DEBUG_THIS) {
    Log.v("Tag", "It took " + (System.currentTimeMillis() - start) + " ms");
    // perhaps some other debug code
}
Then, when you are tired of it or done debugging, change that one line at the top:
private static final boolean DEBUG_THIS = false;
You can leave all your debug code in place, there if you ever need it again, and it costs you nothing in the running version: because DEBUG_THIS is a compile-time constant, the compiler drops the unreachable block entirely.
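Put together, the pattern looks like this (a sketch using System.out instead of Android's Log.v; Worker and doWork are illustrative names):

public class Worker {
    private static final boolean DEBUG_THIS = true; // flip to false when done

    void doWork() {
        long start = System.currentTimeMillis();
        // Code to measure
        if (DEBUG_THIS) {
            // DEBUG_THIS is a compile-time constant, so when it is false
            // the compiler omits this whole block from the bytecode.
            System.out.println("It took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}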
You can write a JUnit Test for the code. The JUnit plug-in for Eclipse will show you the time it took to run the test.
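A minimal sketch of that idea (Solver and solve() are hypothetical placeholders for your own code):

import org.junit.Test;

public class SolverTimingTest {
    @Test
    public void timedRun() {
        // Eclipse's JUnit view reports the elapsed time next to each test,
        // so the timing code stays out of the production classes.
        new Solver().solve();
    }
}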
I am trying to execute some code for 5 minutes within a while loop.
long init = System.currentTimeMillis();
while (((System.currentTimeMillis() - time) / 1000 % 60) < 5) {
    // some part of code
}
But I am not able to get it working. Any suggestions on how to fix it?
System.currentTimeMillis() depends on the implementation and on the operating system, and that might be causing your timing problems.
Instead, use System.nanoTime(), which returns the current value of the most precise available system timer, in nanoseconds:
long init = System.nanoTime();
while (((System.nanoTime() - init) / 1000000000 / 60) < 5) {
    // some part of code
}
Just change the code as below and try again:
while (((System.currentTimeMillis() - time) / 1000 % 60) < 5) {
to
while (((System.currentTimeMillis() - init) / 1000 / 60) < 5) {
Change time to init and % to /.
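For readability, the same five-minute bound can also be written with TimeUnit (a sketch, not part of the original answers):

import java.util.concurrent.TimeUnit;

long init = System.currentTimeMillis();
long limit = TimeUnit.MINUTES.toMillis(5); // 300,000 ms
while (System.currentTimeMillis() - init < limit) {
    // some part of code
}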
Recently, I was writing a plugin in Java and found that retrieving an element (using get()) from a HashMap for the first time is very slow. Originally, I wanted to ask a question about that and found this (no answers though). With further experiments, however, I noticed that this phenomenon also occurs with ArrayList, and in fact with all methods.
Here is the code:
public class Test {
    public static void main(String[] args) {
        long startTime, stopTime;

        // Method 1
        System.out.println("Test 1:");
        for (int i = 0; i < 20; ++i) {
            startTime = System.nanoTime();
            testMethod1();
            stopTime = System.nanoTime();
            System.out.println((stopTime - startTime) + "ns");
        }

        // Method 2
        System.out.println("Test 2:");
        for (int i = 0; i < 20; ++i) {
            startTime = System.nanoTime();
            testMethod2();
            stopTime = System.nanoTime();
            System.out.println((stopTime - startTime) + "ns");
        }
    }

    public static void testMethod1() {
        // Do nothing
    }

    public static void testMethod2() {
        // Do nothing
    }
}
Snippet: Test Snippet
The output would be like this:
Test 1:
2485ns
505ns
453ns
603ns
362ns
414ns
424ns
488ns
325ns
426ns
618ns
794ns
389ns
686ns
464ns
375ns
354ns
442ns
404ns
450ns
Test 2:
3248ns
700ns
538ns
531ns
351ns
444ns
321ns
424ns
523ns
488ns
487ns
491ns
551ns
497ns
480ns
465ns
477ns
453ns
727ns
504ns
I ran the code a few times and the results are about the same. The first call can take even longer (>8000 ns) on my computer (Windows 8.1, Oracle Java 8u25).
Apparently, the first call is usually slower than the following calls (though some later calls may occasionally be longer).
Update:
I tried to learn some JMH and wrote a test program.
Code w/ sample output: Code
I don't know whether it's a proper benchmark (if the program has problems, please tell me), but I found that the first warm-up iteration takes more time (I used two warm-up iterations in case the warm-ups affect the results). The first warm-up iteration should contain the first call and be slower, so the phenomenon exists, provided the test is valid.
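For reference, the shape of such a JMH benchmark is roughly this (class and method names are my own; the code I actually used is behind the link above):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class EmptyMethodBenchmark {

    @Benchmark
    public void emptyMethod() {
        // Deliberately empty, like testMethod1()/testMethod2() above
    }

    public static void main(String[] args) throws Exception {
        Options opt = new OptionsBuilder()
                .include(EmptyMethodBenchmark.class.getSimpleName())
                .warmupIterations(2)   // two warm-up iterations, as in my test
                .measurementIterations(5)
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}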
So why does it happen?
You're calling System.nanoTime() inside a loop. Those calls are not free, so in addition to the time taken by an empty method you're actually measuring the time it takes to exit from nanoTime() call #1 and to enter nanoTime() call #2.
To make things worse, you're doing that on Windows, where nanoTime() performs worse than on other platforms.
Regarding JMH: I don't think it's much help in this situation. It's designed to measure by averaging many iterations, to avoid dead-code elimination, account for JIT warmup, avoid ordering dependence, and so on; and as far as I know it simply uses nanoTime() under the hood too.
Its design goals pretty much aim for the opposite of what you're trying to measure.
You are measuring something, but that something might be several cache misses, nanoTime() call overhead, some JVM internals (class loading? some kind of lazy initialization in the interpreter?), or, most likely, a combination thereof.
The point is that your measurement can't really be taken at face value. Even if there is a certain cost for calling a method for the first time, the time you're measuring only provides an upper bound for that.
This kind of behaviour is often caused by the JIT compiler or the runtime environment, which starts to optimize execution after the first iteration. Class loading can also have an effect (though I guess that is not the case in your example, since all classes are loaded during the first loop at the latest).
See this thread for a similar problem.
Please keep in mind that this kind of behaviour often depends on the environment/OS it's running on.
In my client (using LWJGL), I use the following code:
private static long getTime() {
    return (Sys.getTime() * 1000) / Sys.getTimerResolution();
}
However, I have also just finished coding a server for this game, and up until now I have been using LWJGL solely for the sake of having that method in my code, which really is a bit impractical.
What is a suitable alternative for the above code that uses no libraries at all?
I think you might be looking for System.nanoTime() in the Java standard library. This method gives you the time as a long, which you could standardize into ticks.
// beginning of the game loop
long startTime = System.nanoTime();
// end of the game loop
long estimatedTime = System.nanoTime() - startTime;
You could divide this number by the number of ticks you want per second (as Sys.getTimerResolution() does) and get a very similar operation to what the LWJGL library provides.
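For example, if millisecond ticks are enough, the helper could be written without any library like this (a sketch; adjust the resolution to your needs):

import java.util.concurrent.TimeUnit;

private static long getTime() {
    // Millisecond ticks derived from the JVM's high-resolution timer;
    // mirrors the 1000-ticks-per-second case of the LWJGL version.
    return TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
}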
I wrote a Sudoku puzzle solver using brute-force recursion. I wanted to see how long it would take to solve 10 puzzles of similar types, so I made a folder called easy and placed 10 "easy" puzzles in it. When I run the solver, the first run may take 171 ms, the second 37 ms, and the third 16 ms. Why the different times for solving the exact same problems? Shouldn't the time be consistent?
The second problem is that only the last puzzle solved is displayed, even though I tell it to repaint the screen after loading each puzzle and again after solving it. If I load just a single puzzle without solving it, the initial puzzle state is shown. If I then call the Solve method, the final solution is drawn on screen. Here is my method that solves multiple puzzles:
void LoadFolderAndSolve() throws FileNotFoundException {
    String folderName = JOptionPane.showInputDialog("Enter folder name");

    long startTime = System.currentTimeMillis();
    for (int i = 1; i < 11; i++) {
        String fileName = folderName + "/puzzle" + i + ".txt";
        ReadPuzzle(fileName); // this has a call to repaint to show the initial puzzle
        SolvePuzzle();        // this has a call to repaint to show the solution
        // If I put a delay here, puzzles 1-9 are still not shown, only 10.
    }
    long finishTime = System.currentTimeMillis();
    long difference = finishTime - startTime;
    System.out.println("Time in ms - " + difference);
}
The first time it runs, the JVM needs to load the classes, create the objects you're using, and so on, which takes more time. Further, it always takes a while for the JVM to get going, which is why, when profiling, you usually run a few thousand iterations and divide the total to get a better estimate.
For the second problem, it's impossible to help without seeing more of the code, but a good guess would be that you're not "flushing" the data.
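On the first point, a sketch of that warm-up-and-average approach (solveAllPuzzles() is a hypothetical wrapper around the ten solves; the loop counts are arbitrary):

// Let the JVM warm up first so class loading and JIT don't skew the numbers
for (int i = 0; i < 100; i++) {
    solveAllPuzzles();
}

int runs = 100;
long start = System.nanoTime();
for (int i = 0; i < runs; i++) {
    solveAllPuzzles();
}
long avgMs = (System.nanoTime() - start) / runs / 1_000_000;
System.out.println("Average per run: " + avgMs + " ms");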
I got an interesting "time-travel" problem today, using the following code:
for (int i = 0; i < 1; i++) {
    long start = System.currentTimeMillis();
    // Some code here
    System.out.print(i + "\t" + (System.currentTimeMillis() - start));

    start = System.currentTimeMillis();
    // Some code here
    System.out.println("\t" + (System.currentTimeMillis() - start));
}
And I got the result
0 15 -606
And it seems that it is not repeatable. Does anyone have any clue about what happened internally during the run? Just curious...
New edit: I used a small test to confirm the answers below. I ran the program, changed the system time during the run, and reproduced the "time travel":
0 -3563323 163
Case closed. Thanks guys!
More words: currentTimeMillis() is based on the system clock, so it will not be monotonic if the system time is updated (turned back, specifically). nanoTime(), in contrast, is specified to measure elapsed time independently of wall-clock adjustments, which makes it the better choice here.
System.currentTimeMillis() depends on the system time, so it can be modified by third-party systems.
For measuring elapsed time, System.nanoTime() is the better option.
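A small sketch to see the difference yourself (ClockDemo is an illustrative name; turn the OS clock back during the sleep):

public class ClockDemo {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis();
        long monoStart = System.nanoTime();
        Thread.sleep(5000); // adjust the system clock backwards in this window
        // Only the wall-clock delta can come out negative:
        System.out.println("currentTimeMillis delta: " + (System.currentTimeMillis() - wallStart) + " ms");
        System.out.println("nanoTime delta: " + ((System.nanoTime() - monoStart) / 1_000_000) + " ms");
    }
}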
I recall that time adjustments are made to the system time once in a while to match the actual time, and since currentTimeMillis() relies on the system clock, that might have happened. Also, are you synchronizing with a time server? That could be the cause as well.