Timing a block of code with C++ and Java

I am trying to compare the accuracy of timing methods with C++ and Java.
With C++ I usually use clock() together with CLOCKS_PER_SEC: I run the block of code I want to time for a certain amount of time, then calculate how long it took based on how many times the block was executed.
With Java I usually use System.nanoTime().
Which one is more accurate, the one I use for C++ or the one I use for Java? Is there any other way to time in C++ so I don't have to repeat the piece of code to get a proper measurement? Basically, is there a System.nanoTime() method for C++?
I am aware that both use system calls which cause considerable latencies. How does this distort the real value of the timing? Is there any way to prevent this?

Every method has errors. Before you spend a great deal of time on this question, you have to ask yourself "how accurate do I need my answer to be"? Usually the solution is to run a loop / piece of code a number of times, and keep track of the mean / standard deviation of the measurement. This is a good way to get a handle on the repeatability of your measurement. After that, assume that latency is "comparable" between the "start time" and "stop time" calls (regardless of what function you used), and you have a framework to understand the issues.
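For illustration, here is a minimal Java sketch of that mean/standard-deviation approach (the class name, workload, and run count are placeholders, not anything from the question):

// Minimal sketch: time a workload many times and report the mean and
// standard deviation of the per-run measurements.
public class TimingStats {
    static volatile double sink; // keeps the JIT from eliminating the workload

    static void workload() {
        sink = Math.sqrt(12345.6789); // placeholder for the real code block
    }

    public static void main(String[] args) {
        final int runs = 1000;
        double sum = 0, sumSq = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload();
            long elapsed = System.nanoTime() - start;
            sum += elapsed;
            sumSq += (double) elapsed * elapsed;
        }
        double mean = sum / runs;
        double stdDev = Math.sqrt(sumSq / runs - mean * mean);
        System.out.printf("mean = %.0f ns, stddev = %.0f ns%n", mean, stdDev);
    }
}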
Bottom line: the clock() function typically gives microsecond accuracy.
See https://stackoverflow.com/a/20497193/1967396 for an example of how to go about this in C (in that instance, using a microsecond-precision clock). Nanosecond timing is also possible - see, for example, the answer to clock_gettime() still not monotonic - alternatives?, which uses clock_gettime(CLOCK_MONOTONIC_RAW, &tSpec);
Note that you have to extract seconds and nanoseconds separately from that structure.

Be careful using System.nanoTime() as it is still limited by the resolution that the machine you are running on can give you.
There are also complications when timing Java: the first few passes through a method will be a lot slower, until the JIT has optimized it for your system.
Virtually all modern systems use pre-emptive multithreading, multiple cores, etc., so all timings will vary from run to run (for example, if control gets switched away from your thread while it is in the method).
To get reliable timings you need to:
Warm up the system by running the thing you are timing a few hundred times before you start measuring.
Run the code a good number of times and average the results.
The reliability issues are much the same in any language, so they apply just as well to C as to Java. C may not need the warm-up loop (there is no JIT compiler), but you will still need to take a lot of samples and average them.
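A minimal sketch of that recipe in Java (the warm-up count, sample count, and workload are arbitrary placeholders):

// Warm up first so the JIT has optimized the code, then take many samples and average.
public class WarmUpTiming {
    static volatile long sink; // defeats dead-code elimination of the workload

    static long work() {
        long x = 0;
        for (int i = 0; i < 1_000; i++) x += i; // placeholder for the code under test
        return x;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 300; i++) sink = work(); // warm-up: results discarded
        final int samples = 10_000;
        long start = System.nanoTime();
        for (int i = 0; i < samples; i++) sink = work();
        double avg = (System.nanoTime() - start) / (double) samples;
        System.out.println("average ns per call: " + avg);
    }
}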

Related

Why is there such a big difference on time between first nanoTime() call and the successive calls?

So my question is more general. I have the following simple code:
for (int i = 0; i < 10; i++) {
    long starttime = System.nanoTime();
    System.out.println("test");
    long runtime = System.nanoTime() - starttime;
    System.out.println(i + ":" + "runtime=" + runtime);
}
I receive the following output:
test
0:runtime=153956
test
1:runtime=15396
test
2:runtime=22860
test
3:runtime=11197
test
4:runtime=11197
test
5:runtime=12129
test
6:runtime=11663
test
7:runtime=11664
test
8:runtime=53185
test
9:runtime=12130
What is the reason for the difference between the first and the second runtime? Thanks in advance =)
A lot of things, both in the JVM and in the standard library, are lazily initialized to improve the JVM startup time. So the first time you execute the line
System.out.println("test");
a heavyweight initialization process happens. The time to complete it is included in your first measurement. Subsequent calls proceed down the fast path where the state is already initialized.
You can observe the same effect on a great many API calls in Java.
Naturally, there are many more factors which can influence the time it takes to complete any given method call, especially if it includes system calls on its path. However, the outlier in the latency of the first call is special in that it has deterministic causes underlying it and is therefore reliably reproducible.
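You can make the outlier mostly disappear by paying that initialization cost before the timed loop; a small sketch of the idea (the JIT may still make early iterations somewhat slower):

public class FirstCallOutlier {
    public static void main(String[] args) {
        System.out.println("init"); // triggers the lazy initialization up front
        for (int i = 0; i < 10; i++) {
            long starttime = System.nanoTime();
            System.out.println("test");
            long runtime = System.nanoTime() - starttime;
            System.out.println(i + ":" + "runtime=" + runtime);
        }
    }
}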
Many things can affect your calculations.
What about other processes on your machine? Did you consider the JVM warming up? Maybe the garbage collector ran? All these factors and more lead to this behavior.
If you want to get "better" results you should run it many more times and take the average.
That's why you should know how to benchmark things in Java, see How do I write a correct micro-benchmark in Java?.
The JVM spends some time initializing everything it needs - access to the system time, the system output stream, etc. You have two methods that execute in between:
System.nanoTime()
System.out.println()
Each of those could have executed a lot of initialization code.
Every consecutive call is much faster because all of this is already set up. That is why, when benchmarking an application for performance, the warm-up and cool-down phases are usually discarded (e.g. the first and last 15 minutes).

How can I see the time taken to complete a method call?

I want to do some research to improve my programming skills by seeing how long a method takes to finish.
But I don't want to add any code to the Java files I am testing, because I have a lot of methods to test (and for some of the code I do not have permission to edit it), so if possible I just want to "watch" the methods.
For example:
public void methodZero() {
    methodOne();
    methodTwo();
}
Should ideally print something like:
Time of methodZero = x. Time of methodOne = y. Time of methodTwo = z.
Question: Can I measure timing like this? Is there some additional info that would be important to me, like memory use?
Timing a method in isolation is usually completely meaningless; for a start, you need statistical samples. If you want to get run times for a method, you have to take a lot of things into consideration and give as much context to the target of your timing as possible.
For example, suppose your method has a return value, but you don't use it in any way in your benchmark - you are possibly going to encounter "dead-code elimination" by the JVM, which makes your timings meaningless. The JVM does many clever things like this (openjdk.net: Performance techniques used in the Hotspot JVM), all of which confuse and complicate taking meaningful timings.
A good library to use to help you do this (that also has good documentation and tutorials) is JMH.
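For a flavour of what that looks like, here is a minimal JMH benchmark sketch (the method under test is a placeholder; JMH has to be on the classpath, and the benchmark is run through JMH's harness rather than a plain main method):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.infra.Blackhole;

public class MethodTimingBenchmark {
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    public void timeMethod(Blackhole bh) {
        // Hand the result to the Blackhole so the JVM cannot dead-code-eliminate it.
        bh.consume(compute());
    }

    private int compute() {
        return 21 * 2; // placeholder for the method under test
    }
}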
Just as important as actual timings, but often related to taking meaningful timings, is measuring algorithm space and time complexity. This allows you to understand how your algorithm will grow as the input dataset changes in size. For example MethodA might be faster on a small input array and impractically slow on an input array 100x bigger, whereas MethodB may take the same time regardless of array size.
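As a toy illustration of that difference (hypothetical methods, not code from the question):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ComplexityDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

        boolean a = list.contains(n - 1); // "MethodA": scans the list, time grows with n
        boolean b = set.contains(n - 1);  // "MethodB": hash lookup, roughly constant time
        System.out.println(a + " " + b);
    }
}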
If you want to find out where in a program you should start looking to improve performance you can use a profiler (e.g. eclipse.org: An introduction to profiling Java applications). This will help you identify things such as: high memory usage, high GC usage, total method time (e.g. low method call count but high execution time, or high call count and small but significant execution time). However profilers will impact on your program's execution time as well.
TL;DR: profiling and performance testing is hard. Often you aren't really measuring what you think you are measuring.
long startTime = System.nanoTime();
methodOne();
long endTime = System.nanoTime();
long duration = (endTime - startTime);
You as a programmer should be more interested in complexity (Usually Time Complexity, but sometimes you should also worry about Space Complexity) of your algorithm rather than actual time taken to execute the program.
Have a read here about analysing algorithm complexity

CPU Resources and Clock Cycles: System.out.println Or Incrementing a flag

To debug our Android code we have put in System.out.println(string) calls that tell us how many times a function has been called. The other method would have been to keep a flag, increment it after every function call, and then print the flag's final value at the end with System.out.println(...). (In practice, the function will be called thousands of times in my application.)
My question is: in terms of CPU resources and clock cycles, which one is lighter: an increment operation or System.out.println?
Incrementing is going to be much, much more efficient - especially if you've actually got anywhere for that output to go. Think of all the operations required by System.out.println vs incrementing a variable. Of course, whether the impact will actually be significant is a different matter - and if your method is already doing a lot of work, then a System.out.println call may not actually make much difference. But if you just want to know how many times it was called, then keeping a counter makes more sense than looking through the logs anyway, IMO.
I would recommend using AtomicLong or AtomicInteger instead of just having a primitive variable, as that way you get simple thread-safety.
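A minimal sketch of that counter (the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicLong;

class CallCounter {
    private static final AtomicLong CALLS = new AtomicLong();

    static void methodUnderTest() {
        CALLS.incrementAndGet(); // a few nanoseconds, vs. far more work for println
        // ... the real work of the method ...
    }

    static long callCount() {
        return CALLS.get();
    }
}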
Incrementing will be a lot faster in terms of clock cycles. Assuming the increment is fairly close to a hardware increment it would only take a couple of clock cycles. That means you can do millions every second.
On the other hand, System.out.println has to call out to the OS, write to stdout, convert characters, etc. Each of these steps takes many, many clock cycles.
Coming back to your original question: if you're looking at how many times a function gets called, you could try running a profiler - there are various desktop and Android solutions available. That way you wouldn't need to pollute your code with counting/printing, and you can keep your production code lean.
Thinking a little further, why would you like to know the exact number of times a function is called? If you're concerned about a defect, consider writing some unit tests that prove exactly how many times the function gets called. If you're concerned about performance, perhaps look at load-testing techniques in combination with your profiler.

Capturing the executing time between 2 statements Java?

I want to capture the time taken to go from statement A to statement B in a Java class. In between these statements there are many web service calls. I wanted to know if there is some stopwatch-like functionality in Java that I could use to capture the exact time.
This will give you the number of nanoseconds between the two nanoTime() calls.
long start = System.nanoTime();
// Java statements
long diff = System.nanoTime() - start;
For more sophisticated approaches there are several duplicate questions that address Stopwatch classes:
Java performance timing library
Stopwatch class for Java
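For a sense of what such classes boil down to, here is a minimal hand-rolled sketch (not the API of any particular library):

public class SimpleStopwatch {
    private long start;

    public void start()         { start = System.nanoTime(); }
    public long elapsedNanos()  { return System.nanoTime() - start; }
    public long elapsedMillis() { return elapsedNanos() / 1_000_000; }
}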
@Ben S's answer is spot on.
However, it should be noted that the approach of inserting time measurement statements into your code does not scale:
It makes your code look a mess.
It makes your application run slower. Those calls to System.nanoTime() don't come for free!
It introduces the possibility of bugs.
If your real aim is to work out why your application is running slowly so that you can decide what to optimize, then a better solution is to use a Java profiler. This has the advantage that you need to make ZERO changes to your source code. (Of course, profiling doesn't give you the exact times spent in particular sections. Rather, it gives you time proportions ... which is far more useful for deciding where to optimize.)
System.currentTimeMillis() will get it in milliseconds and System.nanoTime() in nanoseconds.
If you're trying to compare the performance of different techniques, note that the JVM environment is complex, so simply taking one time is not meaningful. I always write a loop where I execute method 1 a few thousand times, then call System.gc(), then execute method 2 a few thousand times, then call System.gc() again, then loop back and do the whole thing over at least five or six times. This helps to average out the time spent on garbage collection, just-in-time compiles, and other magic things happening in the JVM.
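A sketch of that procedure (the two methods and the counts are placeholders):

public class CompareMethods {
    static volatile long sink; // keeps the JIT from eliminating the work

    static void method1() { sink += 1; } // placeholder for technique A
    static void method2() { sink += 2; } // placeholder for technique B

    public static void main(String[] args) {
        final int batch = 5_000;
        for (int round = 0; round < 6; round++) {
            long t1 = System.nanoTime();
            for (int i = 0; i < batch; i++) method1();
            long d1 = System.nanoTime() - t1;
            System.gc(); // request a collection between the batches

            long t2 = System.nanoTime();
            for (int i = 0; i < batch; i++) method2();
            long d2 = System.nanoTime() - t2;
            System.gc();

            System.out.println("round " + round + ": method1=" + d1 + " ns, method2=" + d2 + " ns");
        }
    }
}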

Precise time measurement in Java

Java gives access to two methods for getting the current time: System.nanoTime() and System.currentTimeMillis(). The first one gives a result in nanoseconds, but the actual accuracy is much worse than that (many microseconds).
Is the JVM already providing the best possible value for each particular machine?
Otherwise, is there some Java library that can give finer measurement, possibly by being tied to a particular system?
The problem with getting super precise time measurements is that some processors can't/don't provide such tiny increments.
As far as I know, System.currentTimeMillis() and System.nanoTime() are the best measurements you will be able to find.
Note that both return a long value.
It's a bit pointless to measure time in Java down to the nanosecond scale; an occasional GC hit will easily wipe out any kind of accuracy this may have given. In any case, the documentation states that while the method gives nanosecond precision, that is not the same thing as nanosecond accuracy; and there are operating systems which don't report nanoseconds in any case (which is why you'll find answers quantized to 1000 when accessing them; it's not luck, it's limitation).
Not only that, but depending on how the feature is actually implemented by the OS, you might find quantized results coming through anyway (e.g. answers that always end in 64 or 128 instead of intermediate values).
It's also worth noting that the purpose of the method is to find the time difference between some (nearby) start time and now; if you take System.nanoTime() at the start of a long-running application and then take System.nanoTime() again a long time later, it may have drifted quite far from real time. So you should only really use it for periods of less than 1 s; if you need a longer running time than that, milliseconds should be enough. (And if they're not, then make up the last few digits; you'll probably impress clients and the result will be just as valid.)
Unfortunately, I don't think Java RTS is mature enough at this moment.
Java does try to provide the best value it can (the implementation delegates to native code that reads the kernel time). However, the JVM spec makes this coarse-time-measurement disclaimer mainly because of things like GC activity and the capabilities of the underlying system:
1. Certain GC activities will block all threads, even if you are running a concurrent GC.
2. The default Linux clock-tick precision is only 10 ms; Java cannot do better if the Linux kernel does not support it.
I haven't figured out how to address #1 unless your app does not need to do GC at all. A decent, mid-size application will occasionally spend tens of milliseconds on GC pauses. You are probably out of luck if your precision requirement is below 10 ms.
As for #2, you can tune the Linux kernel to give more precision. However, you also get less out of your box, because the kernel now context-switches more often.
Perhaps we should look at it from a different angle. Is there a reason that Ops needs precision of 10 ms or lower? Is it okay to tell Ops that the precision is 10 ms, and also to look at the GC log at that time, so they know the time is accurate to +-10 ms as long as there was no GC activity around it?
If you are looking to record some type of phenomenon on the order of nanoseconds, what you really need is a real-time operating system. The accuracy of the timer will greatly depend on the operating system's implementation of its high resolution timer and the underlying hardware.
However, you can still stay with Java since there are RTOS versions available.
JNI:
Create a simple function to access the Intel RDTSC instruction or the PMCCNTR register of co-processor p15 in ARM.
Pure Java:
You can possibly get better values if you are willing to delay until a clock tick. You can spin checking System.nanoTime() until the value changes. If you know for instance that the value of System.nanoTime() changes every 10000 loop iterations on your platform by amount DELTA then the actual event time was finalNanoTime-DELTA*ITERATIONS/10000. You will need to "warm-up" the code before taking actual measurements.
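A minimal sketch of that spin technique:

public class TickAligned {
    // Busy-wait until System.nanoTime() ticks over, so a measurement can
    // start exactly on a clock-tick boundary.
    static long waitForNextTick() {
        long last = System.nanoTime();
        long now;
        while ((now = System.nanoTime()) == last) {
            // spin until the reported clock value changes
        }
        return now;
    }

    public static void main(String[] args) {
        long start = waitForNextTick();
        // ... event to be timed would go here ...
        System.out.println("started on a tick at " + start + " ns");
    }
}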
Hack (for profiling, etc, only):
If garbage collection is throwing you off, you could always measure the time using a high-priority thread running in a second JVM that doesn't create objects. Have it spin incrementing a long in shared memory, which you use as a clock.
