Java millisecond precision

Is there a Java API/suggestion to use instead of System.currentTimeMillis() to get the current time with millisecond precision on Windows? The requirement is that two subsequent calls with a 1 ms sleep in between should give two different times; currently I need to explicitly sleep for 15 ms to get different times.

Don't attempt to use time to create unique values. Use the database to generate a unique id (key I'm assuming) for the record. Either use an auto incrementing field or create a separate table with a single record holding the counter that you can lock and update safely.
While you may get a solution that works, counting on timing to prevent a clash of resources will eventually catch up to you.

Since Java 1.5 you can use System.nanoTime() for micro-benchmarks with higher precision. As the fixed but arbitrary origin this is based on may differ between JVM instances (see the Javadoc for the method), it might make sense to combine it with System.currentTimeMillis(), e.g.
String time = System.currentTimeMillis() + "" + System.nanoTime();

It's a Windows limitation. If you call System.currentTimeMillis() on other operating systems you get much higher precision.
My advice is don't use a timestamp as your source of uniqueness. Use an Oracle sequence, as it was designed for this problem. Otherwise use the thread name + timestamp (yuk).
Or you can use System.nanoTime(), but it's only useful for time differences, not absolute time.

Since Java 1.5, you can use java.util.UUID to generate unique IDs, e.g.:

import java.util.UUID;

public class UuidDemo
{
    public static void main(String[] args)
    {
        System.out.println("uuid=" + UUID.randomUUID().toString());
        System.out.println("uuid=" + UUID.randomUUID().toString());
    }
}

The resolution of the currentTimeMillis() call is dependent on the underlying operating system, and should not be relied on for creating unique stamps in your situation. Consider having a UID-singleton which can give you a long value which is incremented by one for each call, and then use that.
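A minimal sketch of such a singleton, assuming an AtomicLong suffices and uniqueness is only needed within one JVM (the class name is made up):

import java.util.concurrent.atomic.AtomicLong;

public final class UidGenerator
{
    private static final AtomicLong COUNTER = new AtomicLong();

    private UidGenerator() {}

    // Each call returns a value unique within this JVM instance.
    public static long nextId()
    {
        return COUNTER.incrementAndGet();
    }
}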

Why do you need the times to be unique?
Take the time at the start of the transaction, then add one ms for each insert.
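A sketch of one variation on that idea, assuming uniqueness is only needed within a single JVM (the class name is hypothetical): hand out the current wall-clock time, bumped by at least 1 ms past the previously returned value.

public final class UniqueMillis
{
    private static long last = 0;

    // Returns a strictly increasing millisecond value even when
    // System.currentTimeMillis() itself has not ticked yet.
    public static synchronized long next()
    {
        last = Math.max(System.currentTimeMillis(), last + 1);
        return last;
    }
}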

It's important here to distinguish accuracy from precision. System.currentTimeMillis() has millisecond precision, but no guarantee whatsoever on accuracy, since it gets this from the underlying OS, and the OS gets it from the hardware, and different hardware clocks have different accuracies.
Using this for data versioning is a bad idea, since even if you had millisecond accuracy, you'd still run the risk of the occasional clash if two things happened in the same millisecond.

Although this is not directly related to the question, I've understood from the comments that the original intent is to generate some version identifiers.
This is, of course, a bad idea, as detailed by other posters here.
If you can't use a database, then a better idea would be to use an AtomicInteger or AtomicLong -- then you can invoke getAndIncrement() or incrementAndGet() and not worry about any timing issues that might arise.

The 15 ms limitation on absolute time resolution is a feature of your operating system and its interrupt rate. I'm aware that for Linux there is a kernel patch to increase the resolution to 1 ms (possibly even microseconds?); I'm not sure about Windows, though. As others have commented, relative times can be resolved using System#nanoTime() (currently with microsecond precision). Either way, you should consider using db keys (or similar) for assigning unique keys.
Links
Timer resolution in Java
Increasing OS clock resolution

Related

Does Thread.sleep() use the same clock as System.nanoTime()?

This question is about the relation between library functions that do some kind of wait, e.g. Thread.sleep(long), Object.wait(long), BlockingQueue.poll(long, TimeUnit), and the values returned by System.nanoTime() and System.currentTimeMillis().
As I understand, there are at least two, mostly independent clocks a Java application has access to:
System.currentTimeMillis(), which is basically the wall-clock time. This also means that the user, and system software like an NTP daemon, may fiddle with it from time to time, possibly causing the value to jump around in either direction and by any amount.
System.nanoTime(), which is guaranteed to be monotonically and more or less steadily increasing, but may drift because of not-so-accurate processor clock frequencies and artifacts caused by power-saving mechanisms.
Now I understand that library functions like Thread.sleep() need to rely on some platform-dependent interface to suspend a thread until the specified amount of time has passed, but is it safe to assume that the time measured by these functions is based on the values of System.nanoTime()?
I know that none of these functions guarantees to measure time more accurately than a few milliseconds, but I am interested in very long (as in hours) waits. I.e. if I call Thread.sleep(10 * 3600 * 1000), the time measured between the two clocks can differ by a few minutes, but I assume that one of them will be within a fraction of a second of the requested 10 hours. And if either of the two clocks is, I'm assuming it's the one used by System.nanoTime(). Are these assumptions correct?
No, it is not safe to assume that Thread.sleep is based on System.nanoTime.
Java relies on the OS to perform thread scheduling, and it has no control over how the OS performs it.
For the most part, the time-based APIs that existed pre-JDK 1.5 (Timer, Thread.sleep, wait(long)) all use millisecond-based time. Most of the concurrency utilities added in JDK 1.5+ (java.util.concurrent.*) use nano-based time.
However, I don't think the JVM guarantees these behaviors, so you certainly shouldn't depend on the behavior one way or the other.
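If you want to check the behaviour on your own platform, a simple (unscientific) sketch is to measure the same sleep with both clocks and compare:

public class ClockComparison
{
    public static void main(String[] args) throws InterruptedException
    {
        long millisBefore = System.currentTimeMillis();
        long nanosBefore = System.nanoTime();
        Thread.sleep(10_000); // use hours here for a real drift test
        long wallMs = System.currentTimeMillis() - millisBefore;
        long monoMs = (System.nanoTime() - nanosBefore) / 1_000_000;
        // A large difference means the clocks drifted apart, or the
        // wall clock was adjusted during the sleep.
        System.out.println("wall clock: " + wallMs + " ms, nanoTime: " + monoMs + " ms");
    }
}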

How long is one millisecond in the Java world?

I have different reasons for asking this question.
What was the decision for switching to microseconds in JSR 310: Date and Time API based on?
If I measure time with System.currentTimeMillis(), how can I interpret 1 ms? How many method calls, how many sysouts, how many HashMap#pushs?
I'm absolutely aware of the low scientific standard of this question, but I'd like to have some default values for Java operations.
Edit:
I was talking about:
long t1 = System.currentTimeMillis();
// do random stuff
System.out.println(System.currentTimeMillis() - t1);
What was the decision for switching to microseconds in JSR 310: Date and Time API based on?
Modern hardware has had microsecond-precision clocks for quite some time; and it is not new to JSR 310. Consider TimeUnit, which appeared in 1.5 (along with System.nanoTime(), see below) and has a MICROSECONDS value.
If I measure time with System.currentTimeMillis() how can I interpret 1ms?
As accurate as the hardware clock/OS primitive combo allows. There will always be some skew. But unless you do "real" real time (and you shouldn't be doing Java in this case), you will likely never notice.
Note also that this method measures the number of milliseconds since the epoch, and it depends on clock adjustments. This is unlike System.nanoTime(), which relies on an onboard tick counter that never "decrements" with time.
FWIW, Timer{,Task} uses System.currentTimeMillis() to measure time, while the newer ScheduledThreadPool uses System.nanoTime().
As to:
How many method-calls, how many sysouts, how many HashMap#pushs (in 1 ms)
impossible to tell! Method calls depend on what methods do, sysouts depend on the speed of your stdout (try and sysout on a 9600 baud serial port), pushes depend on memory bus speed/CPU cache etc; you didn't actually expect an accurate answer for that, did you?
A System.currentTimeMillis() millisecond is almost exactly a millisecond, except when the system clock is being corrected, e.g. using the Network Time Protocol (NTP). NTP can cause significant leaps in time either forward or backward, but this is rare. If you want a monotonically increasing clock with more resolution, use System.nanoTime() instead.
How many method-calls, how many sysouts, how many HashMap#pushs (in 1 ms)
Empty method calls can be eliminated by the JIT, so only sizable method calls matter; what the method does is more important. You can expect between 1 and 10,000 method calls in a millisecond.
sysouts are very dependent on the display and whether it has been paused. Under normal conditions you can expect to get 1 to 100 lines of output in 1 ms, depending on line length and the display. If the stream is paused for any reason, you might not get any.
HashMap has a put(), not a push(), and you can get around 1,000 to 10,000 of these in a millisecond. The problem is that you have to have something worth putting, and that usually takes longer.
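If you want numbers for your own machine, a rough sketch (not a proper benchmark; JIT warm-up and GC will skew the result):

import java.util.HashMap;
import java.util.Map;

public class PutsPerMilli
{
    public static void main(String[] args)
    {
        Map<Integer, Integer> map = new HashMap<>();
        long deadline = System.nanoTime() + 1_000_000; // one millisecond from now
        int puts = 0;
        while (System.nanoTime() < deadline) {
            map.put(puts, puts); // something trivially worth putting
            puts++;
        }
        System.out.println(puts + " puts in ~1 ms");
    }
}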
The answers of Peter Lawrey and fge are correct as far as they go, but I would add the following detail concerning your statement about microsecond resolution in JSR-310: the new API uses nanosecond resolution, not microsecond resolution.
The reason for this is not that clocks based on System.currentTimeMillis() might achieve such a high degree of precision. No, such clocks just count milliseconds and are sensitive to external clock adjustments, which can even cause big jumps in time far beyond any subsecond level. The support for nanoseconds is rather motivated by the nanosecond-based timestamps supported in many databases (not all!).
It should be noted, though, that this "precision" is not real-time accuracy; it serves more to avoid duplicate timestamps (which are often created via mixed clock-and-counter mechanisms, not by measuring scientifically accurate real time). Such database timestamps are often used as primary keys by some people, hence their requirement for a non-duplicate, monotonically increasing timestamp.
The alternative System.nanoTime() is supposed and designed to show better monotonically increasing behaviour, although I would not bet my life on that in some multi-core environments. But here you indeed get nanosecond-like differences between timestamps, so JSR-310 can at least give calculatory support via classes like java.time.Duration (though again, not necessarily with scientifically accurate nanosecond differences).

Does Java enforce as-if-serial for single threaded applications?

I am running some JUnit tests on a single thread and they are failing in a non-deterministic way. I had one person tell me that the optimizing JVM (Oracle Hotspot 64-Bit 17.1-b03) is executing the instructions out of order for speed. I have trouble believing that the java spec would allow that, but I can't find the specific reference.
Wikipedia states that a single thread must enforce within-thread as-if-serial so I shouldn't have to worry about execution order differing from what I wrote.
http://en.wikipedia.org/wiki/Java_Memory_Model#The_memory_model
Example code:
@Test
public void testPersistence() throws Exception
{
    // Setup
    final long preTestTimeStamp = System.currentTimeMillis();
    // Test
    persistenceMethod();
    // Validate
    final long postTestTimeStamp = System.currentTimeMillis();
    final long updateTimeStamp = -- load the timestamp from the database -- ;
    assertTrue("Updated time should be after the pretest time", updateTimeStamp >= preTestTimeStamp);
    assertTrue("Updated time should be before the posttest time", updateTimeStamp <= postTestTimeStamp);
}

void persistenceMethod()
{
    ...
    final long updateTime = System.currentTimeMillis();
    ...
    -- persist updateTime to the database --
    ...
}
When this test code is run, it has completely non-deterministic behavior: sometimes it passes, sometimes it fails on the first assert, and sometimes it fails on the second assert. The values are always within a millisecond or two of each other, so it isn't that the persistence is just failing completely. Adding a Thread.sleep(2) between each statement does decrease the number of times the test fails, but doesn't eliminate the failures completely.
Is it possible that this is the fault of the JVM or is it more likely that the database (MsSql) is doing some sort of rounding of the stored data?
The possibility that the JVM is executing statements out of order is so remote that I think you can pretty much dismiss it. If the JVM had a bug like that, it would be showing up in a lot of places besides this one program of yours.
It is true that currentTimeMillis is not guaranteed to actually be accurate to the millisecond. But the possibility that the clock would run backwards is almost as remote as the possibility that the JVM is executing statements out of order.
I've written many, many programs that test how long it takes a function I'm interested in to execute by taking the currentTimeMillis before it starts, executing the function, getting currentTimeMillis when it's done, and subtracting to find an elapsed time. I have never had such a program give me a negative time.
Some possibilities that occur to me:
There's an error in your code to save the timestamp to the database or to read it back. You don't show that code, so we have no way to know if there's a bug there.
Rounding. I don't have a MySQL instance handy, so I'm not sure what the precision of a timestamp is. If it's not as precise as a millisecond, this would readily explain your problem. For example, say it's only accurate to the second. You get pre time = 01:00:00.1, update time = 01:00:00.2, post time = 01:00:00.4. But update time gets saved as 01:00:00 because that's the limit of precision, so when you read it back, update time < pre time. Likewise, suppose the times are 01:00:00.4, 01:00:00.6, 01:00:00.7. Update time gets rounded to 01:00:01, so update time > post time.
Time zones. Default time zone is an attribute of a connection. If when you write the time you are set to, say, Eastern Time, but when you read it back you are on Pacific Time, then the order of the times will not be what you expected.
Instead of just looking at the relationships, why don't you print the values of all three timestamps? I'd print them as raw longs and also as Gregorian dates. Oh, and I'd print update time before saving and again after reading it back. Maybe something would become apparent.
If, for example, you see that the update time as read back always ends with one or more zeros even when the time as saved had non-zero digits, that would indicate that your times are being truncated or rounded. If the time as read back differs from the time as written by an exact multiple of 1 hour, that might be a time zone problem. If post time is less than pre time, that either indicates a serious problem with your system clock or, more likely, a program bug that's mixing up the times. Etc.
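For example, a sketch of that diagnostic printing (loadTimestampFromDatabase is a hypothetical stand-in for whatever read the test uses):

import java.util.Date;

long pre = System.currentTimeMillis();
persistenceMethod();
long post = System.currentTimeMillis();
long update = loadTimestampFromDatabase(); // hypothetical helper
System.out.println("pre:    " + pre + " (" + new Date(pre) + ")");
System.out.println("update: " + update + " (" + new Date(update) + ")");
System.out.println("post:   " + post + " (" + new Date(post) + ")");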
It should be easy enough to determine whether MySQL (or your persistence code) is doing something. Have your persistenceMethod() return the value it persisted and compare it with what you read back. They surely should match.
I wonder whether it's the trustworthiness of currentTimeMillis() that's in question:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
Given that you are doing a >= test, I can't quite see how that might manifest, but it makes me wonder exactly what times you are getting.
That is really strange. Java will certainly not rearrange statements and execute them in a different order if those statements might have side effects which affect subsequent statements.
I think this error happens because System.currentTimeMillis is not as precise as you think. The API documentation of that method says:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
It sounds strange, but time might even seem to be going backwards in some cases, so the value that currentTimeMillis returns at one moment can be lower than what it returned an instant earlier. See this question: Will System.currentTimeMillis always return a value >= previous calls?

Precise time measurement in Java

Java gives access to two methods for getting the current time: System.nanoTime() and System.currentTimeMillis(). The first one gives a result in nanoseconds, but the actual accuracy is much worse than that (many microseconds).
Is the JVM already providing the best possible value for each particular machine?
Otherwise, is there some Java library that can give finer measurement, possibly by being tied to a particular system?
The problem with getting super precise time measurements is that some processors can't/don't provide such tiny increments.
As far as I know, System.currentTimeMillis() and System.nanoTime() are the best measurements you will be able to find.
Note that both return a long value.
It's a bit pointless in Java to measure time down to the nanosecond scale; an occasional GC hit will easily wipe out any kind of accuracy this may have given. In any case, the documentation states that while it gives nanosecond precision, that's not the same thing as nanosecond accuracy; and there are operating systems which don't report nanoseconds in any case (which is why you'll find answers quantized to 1000 when accessing them; it's not luck, it's limitation).
Not only that, but depending on how the feature is actually implemented by the OS, you might find quantized results coming through anyway (e.g. answers that always end in 64 or 128 instead of intermediate values).
It's also worth noting that the purpose of the method is to find the time difference between some (nearby) start time and now; if you take System.nanoTime() at the start of a long-running application and then take System.nanoTime() a long time later, it may have drifted quite far from real time. So you should only really use it for periods of less than 1 s; if you need a longer running time than that, milliseconds should be enough. (And if they're not, then make up the last few digits; you'll probably impress clients and the result will be just as valid.)
Unfortunately, I don't think Java RTS is mature enough at this moment.
Java time does try to provide the best value (it actually delegates to native code that calls to get the kernel time). However, the JVM specs make this coarse-time-measurement disclaimer mainly because of things like GC activity and support of the underlying system:
1. Certain GC activities will block all threads even if you are running a concurrent GC.
2. The default Linux clock-tick precision is only 10 ms. Java cannot make it any better if the Linux kernel does not support finer ticks.
I haven't figured out how to address #1 unless your app does not need to do GC. A decent, mid-size application probably spends tens of milliseconds on occasional GC pauses. You are probably out of luck if your precision requirement is below 10 ms.
As for #2, you can tune the Linux kernel to give more precision. However, you are also getting less out of your box, because now the kernel context-switches more often.
Perhaps we should look at it from a different angle: is there a reason that ops needs precision of 10 ms or lower? Is it okay to tell ops that the precision is 10 ms AND to also look at the GC log around that time, so they know the time is ±10 ms accurate when there is no GC activity around it?
If you are looking to record some type of phenomenon on the order of nanoseconds, what you really need is a real-time operating system. The accuracy of the timer will greatly depend on the operating system's implementation of its high resolution timer and the underlying hardware.
However, you can still stay with Java since there are RTOS versions available.
JNI:
Create a simple function to access the Intel RDTSC instruction or the PMCCNTR register of co-processor p15 in ARM.
Pure Java:
You can possibly get better values if you are willing to delay until a clock tick: spin checking System.nanoTime() until the value changes (see the sketch after this answer). If you know, for instance, that the value of System.nanoTime() changes every 10000 loop iterations on your platform by amount DELTA, then the actual event time was finalNanoTime - DELTA * ITERATIONS / 10000. You will need to "warm up" the code before taking actual measurements.
Hack (for profiling, etc, only):
If garbage collection is throwing you off, you could always measure the time using a high-priority thread running in a second JVM which doesn't create objects. Have it spin incrementing a long in shared memory, which you use as a clock.
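A minimal sketch of the spin-until-tick idea from the "Pure Java" option above (the DELTA/ITERATIONS calibration is left out, and the code assumes a warmed-up JIT):

// Busy-wait until System.nanoTime() reports a new value, so a
// measurement can start exactly on a tick boundary.
public static long nextTick()
{
    long t0 = System.nanoTime();
    long t;
    while ((t = System.nanoTime()) == t0) {
        // spin
    }
    return t;
}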

Is System.nanoTime() completely useless?

As documented in the blog post Beware of System.nanoTime() in Java, on x86 systems, Java's System.nanoTime() returns the time value using a CPU specific counter. Now consider the following case I use to measure time of a call:
long time1 = System.nanoTime();
foo();
long time2 = System.nanoTime();
long timeSpent = time2 - time1;
Now in a multi-core system, it could be that after measuring time1, the thread is scheduled to a different processor whose counter is less than that of the previous CPU. Thus we could get a value in time2 which is less than time1. Thus we would get a negative value in timeSpent.
Considering this case, isn't System.nanoTime() pretty much useless for now?
I know that changing the system time doesn't affect nanotime. That is not the problem I describe above. The problem is that each CPU will keep a different counter since it was turned on. This counter can be lower on the second CPU compared to the first CPU. Since the thread can be scheduled by the OS to the second CPU after getting time1, the value of timeSpent may be incorrect and even negative.
This answer was written in 2011 from the point of view of what the Sun JDK of the time running on operating systems of the time actually did. That was a long time ago! leventov's answer offers a more up-to-date perspective.
That post is wrong, and nanoTime is safe. There's a comment on the post which links to a blog post by David Holmes, a realtime and concurrency guy at Sun. It says:
System.nanoTime() is implemented using the QueryPerformanceCounter/QueryPerformanceFrequency API [...] The default mechanism used by QPC is determined by the Hardware Abstraction layer(HAL) [...] This default changes not only across hardware but also across OS versions. For example Windows XP Service Pack 2 changed things to use the power management timer (PMTimer) rather than the processor timestamp-counter (TSC) due to problems with the TSC not being synchronized on different processors in SMP systems, and due the fact its frequency can vary (and hence its relationship to elapsed time) based on power-management settings.
So, on Windows, this was a problem up until WinXP SP2, but it isn't now.
I can't find a part II (or more) that talks about other platforms, but that article does include a remark that Linux has encountered and solved the same problem in the same way, with a link to the FAQ for clock_gettime(CLOCK_REALTIME), which says:
Is clock_gettime(CLOCK_REALTIME) consistent across all processors/cores? (Does arch matter? e.g. ppc, arm, x86, amd64, sparc).
It should or it's considered buggy.
However, on x86/x86_64, it is possible to see unsynced or variable freq TSCs cause time inconsistencies. 2.4 kernels really had no protection against this, and early 2.6 kernels didn't do too well here either. As of 2.6.18 and up the logic for detecting this is better and we'll usually fall back to a safe clocksource.
ppc always has a synced timebase, so that shouldn't be an issue.
So, if Holmes's link can be read as implying that nanoTime calls clock_gettime(CLOCK_REALTIME), then it's safe-ish as of kernel 2.6.18 on x86, and always on PowerPC (because IBM and Motorola, unlike Intel, actually know how to design microprocessors).
There's no mention of SPARC or Solaris, sadly. And of course, we have no idea what IBM JVMs do. But Sun JVMs on modern Windows and Linux get this right.
EDIT: This answer is based on the sources it cites. But I still worry that it might actually be completely wrong. Some more up-to-date information would be really valuable. I just came across a link to a four-year-newer article about Linux's clocks which could be useful.
I did a bit of searching and found that if one is being pedantic, then yes, it might be considered useless... in particular situations... it depends on how time-sensitive your requirements are.
Check out this quote from the Java Sun site:
The real-time clock and System.nanoTime() are both based on the same system call and thus the same clock.
With Java RTS, all time-based APIs (for example, Timers, Periodic Threads, Deadline Monitoring, and so forth) are based on the high-resolution timer. And, together with real-time priorities, they can ensure that the appropriate code will be executed at the right time for real-time constraints. In contrast, ordinary Java SE APIs offer just a few methods capable of handling high-resolution times, with no guarantee of execution at a given time. Using System.nanoTime() between various points in the code to perform elapsed time measurements should always be accurate.
Java also has a caveat for the nanoTime() method:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292.3 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
It would seem that the only conclusion that can be drawn is that nanoTime() cannot be relied upon as an accurate absolute value. As such, if you do not need to measure times that are mere nanoseconds apart, then this method is good enough, even if the resulting returned value is occasionally negative. However, if you need higher precision, they appear to recommend that you use Java RTS.
So to answer your question... no, nanoTime() is not useless... it's just not the most prudent method to use in every situation.
Since Java 7, System.nanoTime() is guaranteed to be safe by JDK specification. System.nanoTime()'s Javadoc makes it clear that all observed invocations within a JVM (that is, across all threads) are monotonic:
The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). The same origin is used by all invocations of this method in an instance of a Java virtual machine; other virtual machine instances are likely to use a different origin.
JVM/JDK implementation is responsible for ironing out the inconsistencies that could be observed when underlying OS utilities are called (e. g. those mentioned in Tom Anderson's answer).
The majority of other old answers to this question (written in 2009–2012) express FUD that was probably relevant for Java 5 or Java 6 but is no longer relevant for modern versions of Java.
It's worth mentioning, however, that despite the JDK guaranteeing nanoTime()'s safety, there have been several bugs in OpenJDK causing it not to uphold this guarantee on certain platforms or under certain circumstances (e.g. JDK-8040140, JDK-8184271). There are no open (known) bugs in OpenJDK wrt nanoTime() at the moment, but a discovery of a new such bug or a regression in a newer release of OpenJDK shouldn't shock anybody.
With that in mind, code that uses nanoTime() for timed blocking, interval waiting, timeouts, etc. should preferably treat negative time differences (timeouts) as zeros rather than throw exceptions. This practice is also preferable because it is consistent with the behaviour of all timed wait methods in all classes in java.util.concurrent.*, for example Semaphore.tryAcquire(), Lock.tryLock(), BlockingQueue.poll(), etc.
Nonetheless, nanoTime() should still be preferred over currentTimeMillis() for implementing timed blocking, interval waiting, timeouts, etc., because the latter is subject to the "time going backward" phenomenon (e.g. due to server time correction), i.e. currentTimeMillis() is not suitable for measuring time intervals at all. See this answer for more information.
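A sketch combining both recommendations (the method and parameter names are illustrative): compute a nanoTime()-based deadline once, then clamp any negative remainder to zero instead of throwing.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

static <T> T pollWithDeadline(BlockingQueue<T> queue, long timeoutNanos)
        throws InterruptedException
{
    long deadline = System.nanoTime() + timeoutNanos;
    // A negative remainder (e.g. after a nanoTime() glitch) becomes zero.
    long remaining = Math.max(0L, deadline - System.nanoTime());
    return queue.poll(remaining, TimeUnit.NANOSECONDS);
}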
Instead of using nanoTime() for code execution time measurements directly, specialized benchmarking frameworks and profilers should preferably be used, for example JMH and async-profiler in wall-clock profiling mode.
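For instance, a minimal JMH benchmark looks roughly like this (the method body is a placeholder; the build setup for the jmh artifacts is omitted):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class FooBenchmark
{
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public int foo()
    {
        return 21 + 21; // placeholder for the code under test
    }
}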
No need to debate, just use the source.
Here, SE 6 for Linux, make your own conclusions:
jlong os::javaTimeMillis() {
  timeval time;
  int status = gettimeofday(&time, NULL);
  assert(status != -1, "linux error");
  return jlong(time.tv_sec) * 1000 + jlong(time.tv_usec / 1000);
}

jlong os::javaTimeNanos() {
  if (Linux::supports_monotonic_clock()) {
    struct timespec tp;
    int status = Linux::clock_gettime(CLOCK_MONOTONIC, &tp);
    assert(status == 0, "gettime error");
    jlong result = jlong(tp.tv_sec) * (1000 * 1000 * 1000) + jlong(tp.tv_nsec);
    return result;
  } else {
    timeval time;
    int status = gettimeofday(&time, NULL);
    assert(status != -1, "linux error");
    jlong usecs = jlong(time.tv_sec) * (1000 * 1000) + jlong(time.tv_usec);
    return 1000 * usecs;
  }
}
Linux corrects for discrepancies between CPUs, but Windows does not. I suggest you assume System.nanoTime() is only accurate to around 1 microsecond. A simple way to get a longer timing is to call foo() 1000 or more times and divide the time by 1000.
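A sketch of that averaging trick (foo() stands in for whatever is being timed):

int iterations = 1000;
long start = System.nanoTime();
for (int i = 0; i < iterations; i++) {
    foo();
}
// Per-call average; jitter in nanoTime() is amortized over 1000 calls.
long avgNanos = (System.nanoTime() - start) / iterations;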
Absolutely not useless. Timing aficionados correctly point out the multi-core problem, but in real-world applications it is often radically better than currentTimeMillis().
When calculating graphics positions in frame refreshes nanoTime() leads to MUCH smoother motion in my program.
And I only test on multi-core machines.
I have seen a negative elapsed time reported from using System.nanoTime(). To be clear, the code in question is:
long startNanos = System.nanoTime();
Object returnValue = joinPoint.proceed();
long elapsedNanos = System.nanoTime() - startNanos;
and variable 'elapsedNanos' had a negative value. (I'm positive that the intermediate call took less than 293 years as well, which is the overflow point for nanos stored in longs :)
This occurred using an IBM v1.5 JRE 64bit on IBM P690 (multi-core) hardware running AIX. I've only seen this error occur once, so it seems extremely rare. I do not know the cause - is it a hardware-specific issue, a JVM defect - I don't know. I also don't know the implications for the accuracy of nanoTime() in general.
To answer the original question, I don't think nanoTime is useless - it provides sub-millisecond timing, but there is an actual (not just theoretical) risk of it being inaccurate which you need to take into account.
This doesn't seem to be a problem on a Core 2 Duo running Windows XP and JRE 1.5.0_06.
In a test with three threads I don't see System.nanoTime() going backwards. The processors are both busy, and threads go to sleep occasionally to provoke moving threads around.
[EDIT] I would guess that it only happens on physically separate processors, i.e. that the counters are synchronized for multiple cores on the same die.
No, it's not... It just depends on your CPU; check the High Precision Event Timer for how/why things are treated differently according to CPU.
Basically, read the source of your Java and check what your version does with the method, and whether it works well on the CPU you will be running it on.
IBM even suggests you use it for performance benchmarking (a 2008 post, but updated).
I am linking to what essentially is the same discussion where Peter Lawrey is providing a good answer.
Why I get a negative elapsed time using System.nanoTime()?
Many people have mentioned that in Java System.nanoTime() can return negative time. I apologize for repeating what other people have already said.
nanoTime() is not a clock but a CPU cycle counter.
The return value is divided by the frequency to look like time.
The CPU frequency may fluctuate.
When your thread is scheduled on another CPU, there is a chance of getting a nanoTime() reading which results in a negative difference. That's logical: counters across CPUs are not synchronized.
In many cases, you can get quite misleading results, but you wouldn't be able to tell because the delta is not negative. Think about it.
(unconfirmed) I think you may get a negative result even on the same CPU if instructions are reordered. To prevent that, you'd have to invoke a memory barrier serializing your instructions.
It'd be cool if System.nanoTime() returned the coreID where it executed.
Java is cross-platform, and nanoTime is platform-dependent. If you use Java, then don't use nanoTime. I have found real bugs across different JVM implementations with this function.
The Java 5 documentation also recommends using this method for the same purpose.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
Java 5 API Doc
Also, System.currentTimeMillis() changes when you change your system's clock, while System.nanoTime() doesn't, so the latter is safer for measuring durations.
nanoTime is extremely unreliable for timing. I tried it out on my basic primality-testing algorithms and it gave answers which were literally one second apart for the same input. Don't use that ridiculous method. I need something that is more accurate and precise than currentTimeMillis, but not as bad as nanoTime.
