This question is about the relation between library functions that do some kind of wait, e.g. Thread.sleep(long), Object.wait(long), BlockingQueue.poll(long, TimeUnit), and the values returned by System.nanoTime() and System.currentTimeMillis().
As I understand, there are at least two, mostly independent clocks a Java application has access to:
System.currentTimeMillis(), which is essentially wall-clock time. This also means that the user, or system software such as an NTP daemon, may adjust it from time to time, so the value can jump in either direction by an arbitrary amount.
System.nanoTime(), which is guaranteed to be monotonically and more or less steadily increasing, but may drift because of not-so-accurate processor clock frequencies and artifacts caused by power-saving mechanisms.
Now I understand that library functions like Thread.sleep() need to rely on some platform-dependent interface to suspend a thread until the specified amount of time has passed, but is it safe to assume that the time measured by these functions is based on the values of System.nanoTime()?
I know that none of these functions guarantee to measure time more accurately than a few milliseconds, but I am interested in very long (as in hours) waits. I.e. if I call Thread.sleep(10 * 3600 * 1000), the time measured by the two clocks can differ by a few minutes, but I assume that one of them will be within a fraction of a second of the requested 10 hours. And if either of the two clocks is, I'm assuming it's the one used by System.nanoTime(). Are these assumptions correct?
No, it is not safe to assume that Thread.sleep is based on System.nanoTime.
Java relies on the OS to perform thread scheduling, and it has no control over how the OS performs it.
For the most part, the time-based APIs that existed pre-JDK 1.5 (Timer, Thread.sleep, wait(long)) use millisecond-based time. Most of the concurrency utilities added in JDK 1.5+ (java.util.concurrent.*) use nano-based time.
However, I don't think the JVM guarantees these behaviors, so you certainly shouldn't depend on behavior one way or the other.
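One way to see the practical difference (a sketch only; it demonstrates observed drift on one machine and does not prove which clock sleep uses internally, and the class name is invented) is to measure a single sleep against both clocks:

```java
// Sketch: measure one Thread.sleep() against both clocks and compare.
public class SleepClockComparison {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis();
        long monoStart = System.nanoTime();

        Thread.sleep(2_000); // two seconds is enough to see small differences

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;

        System.out.println("currentTimeMillis() saw: " + wallElapsedMs + " ms");
        System.out.println("nanoTime() saw:          " + monoElapsedMs + " ms");
    }
}
```

On an idle machine both numbers are usually within a few milliseconds of 2000; a clock adjustment during the sleep would show up only in the first number.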
I have different reasons for asking this question.
What was the decision to switch to microseconds in JSR 310: Date and Time API based on?
If I measure time with System.currentTimeMillis(), how can I interpret 1 ms? How many method calls, how many sysouts, how many HashMap#pushs?
I'm absolutely aware of the low scientific standard of this question, but I'd like to have some default values for java operations.
Edit:
I was talking about:
long t1 = System.currentTimeMillis();
//do random stuff
System.out.println(System.currentTimeMillis()-t1);
What was the decision to switch to microseconds in JSR 310: Date and Time API based on?
Modern hardware has had microsecond-precision clocks for quite some time; and it is not new to JSR 310. Consider TimeUnit, which appeared in 1.5 (along with System.nanoTime(), see below) and has a MICROSECONDS value.
If I measure time with System.currentTimeMillis(), how can I interpret 1 ms?
As accurate as the hardware clock/OS primitive combo allows. There will always be some skew. But unless you do "real" real time (and you shouldn't be doing Java in this case), you will likely never notice.
Note also that this method measures the number of milliseconds since the epoch, and it depends on clock adjustments. This is unlike System.nanoTime(), which relies on an onboard tick counter which never "decrements" with time.
FWIW, Timer{,Task} uses System.currentTimeMillis() to measure time, while the newer ScheduledThreadPool uses System.nanoTime().
As to:
How many method-calls, how many sysouts, how many HashMap#pushs (in 1 ms)
impossible to tell! Method calls depend on what methods do, sysouts depend on the speed of your stdout (try and sysout on a 9600 baud serial port), pushes depend on memory bus speed/CPU cache etc; you didn't actually expect an accurate answer for that, did you?
A System.currentTimeMillis() millisecond is almost exactly a millisecond, except when the system clock is being corrected, e.g. by the Network Time Protocol (NTP). NTP can cause significant leaps in time, either forward or backward, but this is rare. If you want a monotonically increasing clock with more resolution, use System.nanoTime() instead.
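A minimal sketch of that advice, timing an interval with the monotonic clock (`doWork` is an invented placeholder workload, not anything from the question):

```java
// Sketch: interval measurement with the monotonic clock.
public class MonotonicStopwatch {
    static double sink; // keeps the loop from being optimized away

    static void doWork() {
        for (int i = 1; i <= 1_000_000; i++) {
            sink += Math.sqrt(i);
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        doWork();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed: " + elapsedMs + " ms");
    }
}
```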
How many method-calls, how many sysouts, how many HashMap#pushs (in 1 ms)
Empty method calls can be eliminated, so only sizable method calls matter. What the method does is more important. You can expect between 1 and 10,000 method calls in a millisecond.
sysouts are very dependent on the display and on whether it has been paused. Under normal conditions, you can expect 1 to 100 lines of output in 1 ms, depending on line length and the display. If the stream is paused for any reason, you might not get any.
HashMap has a put(), not a push(), and you can do around 1,000 to 10,000 of these in a millisecond. The problem is that you have to have something worth putting, and that usually takes longer.
The answers of Peter Lawrey and fge are so far correct, but I would add following detail considering your statement about microsecond resolution of JSR-310. This new API uses nanosecond resolution, not microsecond resolution.
The reason for this is not that clocks based on System.currentTimeMillis() might achieve such a high degree of precision. No, such clocks just count milliseconds and are sensitive to external clock adjustments, which can even cause big jumps in time far beyond any subsecond level. The support for nanoseconds is rather motivated by nanosecond-based timestamps in many databases (not all!).
It should be noted, though, that this "precision" is not real-time accuracy; it serves more to avoid duplicate timestamps (which are often created via mixed clock-and-counter mechanisms, not by measuring scientifically accurate real time). Such database timestamps are often used as primary keys by some people, hence their requirement for a non-duplicate, monotonically increasing timestamp.
The alternative System.nanoTime() is designed to show better monotonically increasing behaviour, although I would not bet my life on that in some multi-core environments. But here you do get nanosecond-like differences between timestamps, so JSR-310 can at least give calculatory support via classes like java.time.Duration (though again, not necessarily scientifically accurate nanosecond differences).
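To illustrate that calculatory support, a sketch assuming Java 8+ (the surrounding class and the placement of the work being measured are invented):

```java
import java.time.Duration;

// Sketch: wrap a nanoTime() difference in JSR-310's Duration, which
// carries nanosecond resolution.
public class DurationFromNanoTime {
    public static void main(String[] args) {
        long start = System.nanoTime();
        // ... the work being measured would go here ...
        long end = System.nanoTime();

        Duration elapsed = Duration.ofNanos(end - start);
        System.out.println(elapsed.toNanos() + " ns elapsed");
    }
}
```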
I wish to calculate the time passed in milliseconds from a specific time in Java.
The classic way it to use System.currentTimeMillis(); for the starting time, and then use this again with the previous one to get the elapsed time. I wish to do something similar to this, but NOT rely on the system time for this.
If I rely on the system time, the user of the program could manipulate the system clock to hack the program.
I have tried using code similar to the following:
int elapsed = 0;
while (true) {
    try {
        Thread.sleep(10);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
        break;
    }
    elapsed += 10;
}
This works, but it is not too reliable in the case that the computer lags and then locks up for a second or two.
Any ideas anyone?
You want to utilize System.nanoTime. It has no relation to the system clock. It can only be used to track relative time which seems to be all you want to do.
In an effort to prevent this answer from just being a link to another answer here is a short explanation.
From Documentation
public static long nanoTime() Returns the current value of the most precise available system timer, in nanoseconds.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
Yet another link to timer information: https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
You could use Java's Timer class to spawn a "check" callback at some specific precision, lets say every 500ms. This callback would not be used to determine that 500ms actually did pass. You would call System.nanoTime in the callback and compare it to the last time you called System.nanoTime. That would give you a fairly accurate representation of the amount of time that has passed regardless of the wall clock changing.
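A sketch of that idea (all names invented): a daemon Timer fires periodically, and each callback accumulates the nanoTime() delta since the previous callback, so wall-clock manipulation never enters the total:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicLong;

public class TamperResistantElapsedTime {
    private final AtomicLong elapsedNanos = new AtomicLong();
    private volatile long lastNanos;

    public void start() {
        lastNanos = System.nanoTime();
        Timer timer = new Timer(true); // daemon, so it won't keep the JVM alive
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                long now = System.nanoTime();
                elapsedNanos.addAndGet(now - lastNanos); // unaffected by clock changes
                lastNanos = now;
            }
        }, 500, 500); // check every 500 ms
    }

    public long elapsedMillis() {
        return elapsedNanos.get() / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        TamperResistantElapsedTime clock = new TamperResistantElapsedTime();
        clock.start();
        Thread.sleep(2_000);
        System.out.println("elapsed: " + clock.elapsedMillis() + " ms");
    }
}
```

The total lags by up to one callback period; a shorter period trades that lag for more callback overhead.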
You can take a look here: System.currentTimeMillis vs System.nanoTime
Java gives access to two methods to get the current time: System.nanoTime() and System.currentTimeMillis(). The first one gives a result in nanoseconds, but the actual accuracy is much worse than that (many microseconds).
Is the JVM already providing the best possible value for each particular machine?
Otherwise, is there some Java library that can give finer measurement, possibly by being tied to a particular system?
The problem with getting super precise time measurements is that some processors can't/don't provide such tiny increments.
As far as I know, System.currentTimeMillis() and System.nanoTime() are the best measurements you will be able to find.
Note that both return a long value.
It's a bit pointless in Java measuring time down to the nanosecond scale; an occasional GC hit will easily wipe out any kind of accuracy this may have given. In any case, the documentation states that whilst it gives nanosecond precision, it's not the same thing as nanosecond accuracy; and there are operating systems which don't report nanoseconds in any case (which is why you'll find answers quantized to 1000 when accessing them; it's not luck, it's limitation).
Not only that, but depending on how the feature is actually implemented by the OS, you might find quantized results coming through anyway (e.g. answers that always end in 64 or 128 instead of intermediate values).
It's also worth noting that the purpose of the method is to measure the time difference between some (nearby) start time and now; if you take System.nanoTime() at the start of a long-running application and then take System.nanoTime() a long time later, it may have drifted quite far from real time. So you should only really use it for periods of less than 1 s; if you need a longer running time than that, milliseconds should be enough. (And if it's not, then make up the last few numbers; you'll probably impress clients and the result will be just as valid.)
Unfortunately, I don't think Java RTS is mature enough at this moment.
Java time does try to provide the best value (it actually delegates to native code that calls to get the kernel time). However, the JVM specs make this coarse-time-measurement disclaimer mainly because of things like GC activity and the capabilities of the underlying system.
Certain GC activities will block all threads even if you are running concurrent GC.
The default Linux clock-tick precision is only 10 ms. Java cannot do any better if the Linux kernel does not support it.
I haven't figured out how to address #1 unless your app does not need to do GC. A decent, medium-sized application probably occasionally spends tens of milliseconds on GC pauses. You are probably out of luck if your precision requirement is lower than 10 ms.
As for #2, you can tune the Linux kernel to give more precision. However, you also get less out of your box, because the kernel now context-switches more often.
Perhaps we should look at it from a different angle. Is there a reason that ops needs precision of 10 ms or lower? Is it okay to tell ops that the precision is 10 ms, and also to look at the GC log at that time, so they know the time is accurate to within 10 ms when there is no GC activity around that time?
If you are looking to record some type of phenomenon on the order of nanoseconds, what you really need is a real-time operating system. The accuracy of the timer will greatly depend on the operating system's implementation of its high resolution timer and the underlying hardware.
However, you can still stay with Java since there are RTOS versions available.
JNI:
Create a simple function to access the Intel RDTSC instruction or the PMCCNTR register of coprocessor CP15 on ARM.
Pure Java:
You can possibly get better values if you are willing to wait for a clock tick: spin checking System.nanoTime() until the value changes. If you know, for instance, that the value of System.nanoTime() changes every 10,000 loop iterations on your platform by amount DELTA, then the actual event time was finalNanoTime - DELTA * ITERATIONS / 10000. You will need to "warm up" the code before taking actual measurements.
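A sketch of that spin technique, used here just to observe the smallest step the timer reports on the current platform (the class name is invented):

```java
// Sketch: spin until System.nanoTime() changes to observe its granularity.
public class NanoTimeGranularity {
    public static void main(String[] args) {
        long before = System.nanoTime();
        long after;
        do {
            after = System.nanoTime(); // busy-wait for the next tick
        } while (after == before);

        System.out.println("smallest observed step: " + (after - before) + " ns");
    }
}
```

On Linux this is often tens of nanoseconds; on platforms with coarser timers it can be a microsecond or more.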
Hack (for profiling, etc, only):
If garbage collection is throwing you off, you could always measure the time using a high-priority thread running in a second JVM which doesn't create objects. Have it spin incrementing a long in shared memory, which you use as a clock.
As documented in the blog post Beware of System.nanoTime() in Java, on x86 systems, Java's System.nanoTime() returns the time value using a CPU specific counter. Now consider the following case I use to measure time of a call:
long time1= System.nanoTime();
foo();
long time2 = System.nanoTime();
long timeSpent = time2-time1;
Now, in a multi-core system, it could be that after measuring time1, the thread is scheduled to a different processor whose counter is less than that of the previous CPU. We could then get a value in time2 which is less than time1, and hence a negative value in timeSpent.
Considering this case, isn't System.nanoTime() pretty much useless for now?
I know that changing the system time doesn't affect nanotime. That is not the problem I describe above. The problem is that each CPU will keep a different counter since it was turned on. This counter can be lower on the second CPU compared to the first CPU. Since the thread can be scheduled by the OS to the second CPU after getting time1, the value of timeSpent may be incorrect and even negative.
This answer was written in 2011 from the point of view of what the Sun JDK of the time running on operating systems of the time actually did. That was a long time ago! leventov's answer offers a more up-to-date perspective.
That post is wrong, and nanoTime is safe. There's a comment on the post which links to a blog post by David Holmes, a realtime and concurrency guy at Sun. It says:
System.nanoTime() is implemented using the QueryPerformanceCounter/QueryPerformanceFrequency API [...] The default mechanism used by QPC is determined by the Hardware Abstraction layer(HAL) [...] This default changes not only across hardware but also across OS versions. For example Windows XP Service Pack 2 changed things to use the power management timer (PMTimer) rather than the processor timestamp-counter (TSC) due to problems with the TSC not being synchronized on different processors in SMP systems, and due the fact its frequency can vary (and hence its relationship to elapsed time) based on power-management settings.
So, on Windows, this was a problem up until WinXP SP2, but it isn't now.
I can't find a part II (or more) that talks about other platforms, but that article does include a remark that Linux has encountered and solved the same problem in the same way, with a link to the FAQ for clock_gettime(CLOCK_REALTIME), which says:
Is clock_gettime(CLOCK_REALTIME) consistent across all processors/cores? (Does arch matter? e.g. ppc, arm, x86, amd64, sparc).
It should or it's considered buggy.
However, on x86/x86_64, it is possible to see unsynced or variable freq TSCs cause time inconsistencies. 2.4 kernels really had no protection against this, and early 2.6 kernels didn't do too well here either. As of 2.6.18 and up the logic for detecting this is better and we'll usually fall back to a safe clocksource.
ppc always has a synced timebase, so that shouldn't be an issue.
So, if Holmes's link can be read as implying that nanoTime calls clock_gettime(CLOCK_REALTIME), then it's safe-ish as of kernel 2.6.18 on x86, and always on PowerPC (because IBM and Motorola, unlike Intel, actually know how to design microprocessors).
There's no mention of SPARC or Solaris, sadly. And of course, we have no idea what IBM JVMs do. But Sun JVMs on modern Windows and Linux get this right.
EDIT: This answer is based on the sources it cites, but I still worry that it might actually be completely wrong. Some more up-to-date information would be really valuable. I just came across a link to a four-year-newer article about Linux's clocks which could be useful.
I did a bit of searching and found that if one is being pedantic, then yes, it might be considered useless... in particular situations... it depends on how time-sensitive your requirements are.
Check out this quote from the Java Sun site:
The real-time clock and System.nanoTime() are both based on the same system call and thus the same clock.
With Java RTS, all time-based APIs (for example, Timers, Periodic Threads, Deadline Monitoring, and so forth) are based on the high-resolution timer. And, together with real-time priorities, they can ensure that the appropriate code will be executed at the right time for real-time constraints. In contrast, ordinary Java SE APIs offer just a few methods capable of handling high-resolution times, with no guarantee of execution at a given time. Using System.nanoTime() between various points in the code to perform elapsed time measurements should always be accurate.
Java also has a caveat for the nanoTime() method:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292.3 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
It would seem that the only conclusion to be drawn is that nanoTime() cannot be relied upon as an accurate value. As such, if you do not need to measure times that are mere nanoseconds apart, then this method is good enough, even if the resulting returned value is negative. However, if you need higher precision, they appear to recommend that you use Java RTS.
So to answer your question: no, nanoTime() is not useless... it's just not the most prudent method to use in every situation.
Since Java 7, System.nanoTime() is guaranteed to be safe by JDK specification. System.nanoTime()'s Javadoc makes it clear that all observed invocations within a JVM (that is, across all threads) are monotonic:
The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). The same origin is used by all invocations of this method in an instance of a Java virtual machine; other virtual machine instances are likely to use a different origin.
JVM/JDK implementation is responsible for ironing out the inconsistencies that could be observed when underlying OS utilities are called (e. g. those mentioned in Tom Anderson's answer).
The majority of other old answers to this question (written in 2009–2012) express FUD that was probably relevant for Java 5 or Java 6 but is no longer relevant for modern versions of Java.
It's worth mentioning, however, that although the JDK guarantees nanoTime()'s safety, there have been several bugs in OpenJDK that made it fail to uphold this guarantee on certain platforms or under certain circumstances (e.g. JDK-8040140, JDK-8184271). There are no open (known) bugs in OpenJDK wrt nanoTime() at the moment, but the discovery of a new such bug or a regression in a newer release of OpenJDK shouldn't shock anybody.
With that in mind, code that uses nanoTime() for timed blocking, interval waiting, timeouts, etc. should preferably treat negative time differences (timeouts) as zeros rather than throw exceptions. This practice is also preferable because it is consistent with the behaviour of all timed wait methods in all classes in java.util.concurrent.*, for example Semaphore.tryAcquire(), Lock.tryLock(), BlockingQueue.poll(), etc.
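A sketch of that clamping practice (the 100 ms budget and the queue are invented for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: a deadline computed from nanoTime(), with a negative remainder
// clamped to zero rather than treated as an error.
public class ClampedTimeout {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(100);

        // ... other work could happen here, eating into the budget ...

        long remaining = deadline - System.nanoTime();
        if (remaining < 0) {
            remaining = 0; // overruns or nanoTime() anomalies become an immediate timeout
        }
        String item = queue.poll(remaining, TimeUnit.NANOSECONDS);
        System.out.println(item == null ? "timed out" : "got: " + item);
    }
}
```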
Nonetheless, nanoTime() should still be preferred over currentTimeMillis() for implementing timed blocking, interval waiting, timeouts, etc., because the latter is subject to the "time going backward" phenomenon (e.g. due to server time correction), i.e. currentTimeMillis() is not suitable for measuring time intervals at all. See this answer for more information.
Instead of using nanoTime() for code execution time measurements directly, specialized benchmarking frameworks and profilers should preferably be used, for example JMH and async-profiler in wall-clock profiling mode.
No need to debate, just use the source.
Here, SE 6 for Linux, make your own conclusions:
jlong os::javaTimeMillis() {
  timeval time;
  int status = gettimeofday(&time, NULL);
  assert(status != -1, "linux error");
  return jlong(time.tv_sec) * 1000 + jlong(time.tv_usec / 1000);
}

jlong os::javaTimeNanos() {
  if (Linux::supports_monotonic_clock()) {
    struct timespec tp;
    int status = Linux::clock_gettime(CLOCK_MONOTONIC, &tp);
    assert(status == 0, "gettime error");
    jlong result = jlong(tp.tv_sec) * (1000 * 1000 * 1000) + jlong(tp.tv_nsec);
    return result;
  } else {
    timeval time;
    int status = gettimeofday(&time, NULL);
    assert(status != -1, "linux error");
    jlong usecs = jlong(time.tv_sec) * (1000 * 1000) + jlong(time.tv_usec);
    return 1000 * usecs;
  }
}
Linux corrects for discrepancies between CPUs, but Windows does not. I suggest you assume System.nanoTime() is only accurate to around 1 microsecond. A simple way to time something longer is to call foo() 1000 or more times and divide the total time by 1000.
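A sketch of that repeat-and-divide suggestion (`foo` here is an invented stand-in workload; for anything serious, a harness like JMH is the right tool):

```java
// Sketch: amortize timer granularity by timing many calls and dividing.
public class AmortizedTiming {
    static double sink; // prevents the workload from being optimized away

    static void foo() {
        sink += Math.log(42.0);
    }

    public static void main(String[] args) {
        final int reps = 10_000;
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            foo();
        }
        long perCallNanos = (System.nanoTime() - start) / reps;
        System.out.println("approx. " + perCallNanos + " ns per call");
    }
}
```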
Absolutely not useless. Timing aficionados correctly point out the multi-core problem, but in real-world applications it is often radically better than currentTimeMillis().
When calculating graphics positions in frame refreshes nanoTime() leads to MUCH smoother motion in my program.
And I only test on multi-core machines.
I have seen a negative elapsed time reported from using System.nanoTime(). To be clear, the code in question is:
long startNanos = System.nanoTime();
Object returnValue = joinPoint.proceed();
long elapsedNanos = System.nanoTime() - startNanos;
and variable 'elapsedNanos' had a negative value. (I'm positive that the intermediate call took less than 293 years as well, which is the overflow point for nanos stored in longs :)
This occurred using a 64-bit IBM 1.5 JRE on IBM P690 (multi-core) hardware running AIX. I've only seen this error occur once, so it seems extremely rare. I do not know the cause: is it a hardware-specific issue or a JVM defect? I don't know. I also don't know the implications for the accuracy of nanoTime() in general.
To answer the original question, I don't think nanoTime is useless - it provides sub-millisecond timing, but there is an actual (not just theoretical) risk of it being inaccurate which you need to take into account.
This doesn't seem to be a problem on a Core 2 Duo running Windows XP and JRE 1.5.0_06.
In a test with three threads I don't see System.nanoTime() going backwards. The processors are both busy, and threads go to sleep occasionally to provoke moving threads around.
[EDIT] I would guess that it only happens on physically separate processors, i.e. that the counters are synchronized for multiple cores on the same die.
No, it's not... it just depends on your CPU; check the High Precision Event Timer for how and why things are treated differently according to CPU.
Basically, read the source of your Java and check what your version does with the function, and whether it works against the CPU you will be running it on.
IBM even suggests you use it for performance benchmarking (a 2008 post, but updated).
I am linking to what essentially is the same discussion where Peter Lawrey is providing a good answer.
Why I get a negative elapsed time using System.nanoTime()?
Many people mentioned that in Java System.nanoTime() could return negative time. I apologize for repeating what other people already said.
nanoTime() is not a clock but a CPU cycle counter.
The return value is divided by the frequency to look like time.
The CPU frequency may fluctuate.
When your thread is scheduled on another CPU, there is a chance of getting nanoTime() which results in a negative difference. That's logical. Counters across CPUs are not synchronized.
In many cases, you could get quite misleading results, but you wouldn't be able to tell, because the delta is not negative. Think about it.
(unconfirmed) I think you may get a negative result even on the same CPU if instructions are reordered. To prevent that, you'd have to invoke a memory barrier serializing your instructions.
It'd be cool if System.nanoTime() returned coreID where it executed.
Java is cross-platform, and nanoTime is platform-dependent. If you use Java, then don't use nanoTime. I found real bugs across different JVM implementations with this function.
The Java 5 documentation also recommends using this method for the same purpose.
This method can only be used to
measure elapsed time and is not
related to any other notion of system
or wall-clock time.
Java 5 API Doc
Also, System.currentTimeMillis() changes when you change your system's clock, while System.nanoTime() doesn't, so the latter is safer for measuring durations.
nanoTime is extremely unreliable for timing. I tried it out on my basic primality-testing algorithms and it gave answers which were literally one second apart for the same input. Don't use that ridiculous method. I need something that is more accurate and precise than currentTimeMillis(), but not as bad as nanoTime.