Java - Creating an Internal Clock

I wish to calculate the time passed in milliseconds from a specific time in Java.
The classic way is to use System.currentTimeMillis() for the starting time, and then call it again later and subtract to get the elapsed time. I wish to do something similar to this, but NOT rely on the system time.
If I rely on the system time, the user of the program could manipulate the system clock to hack the program.
I have tried using code similar to the following:
int elapsed = 0;
while (true) {
    Thread.sleep(10); // may throw InterruptedException
    elapsed += 10;
}
This works, but it is not very reliable in the case that the computer lags and then locks up for a second or two.
Any ideas anyone?

You want to use System.nanoTime(). It has no relation to the system clock; it can only be used to track relative time, which seems to be all you want to do.
In an effort to prevent this answer from just being a link to another answer here is a short explanation.
From Documentation
public static long nanoTime() Returns the current value of the most precise available system timer, in nanoseconds.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not accurately compute elapsed time due to numerical overflow.
Yet another link to timer information: https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
You could use Java's Timer class to spawn a "check" callback at some specific interval, let's say every 500ms. This callback would not be used to conclude that 500ms actually did pass. Instead you would call System.nanoTime() in the callback and compare it to the last value System.nanoTime() returned. That gives you a fairly accurate representation of the amount of time that has passed, regardless of the wall clock changing.
You can take a look here: System.currentTimeMillis vs System.nanoTime
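The Timer-plus-nanoTime approach described above could be sketched like this (a minimal illustration; the 500ms period and the printout are arbitrary choices, not part of the original answer):

```java
import java.util.Timer;
import java.util.TimerTask;

public class NanoClock {
    public static void main(String[] args) throws InterruptedException {
        final long start = System.nanoTime();
        Timer timer = new Timer(true); // daemon thread so the JVM can exit
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // Elapsed time is derived from nanoTime, not from the
                // callback cadence, so wall-clock changes don't affect it.
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("elapsed: " + elapsedMs + " ms");
            }
        }, 0, 500);
        Thread.sleep(2_000); // let the timer fire a few times
    }
}
```

Even if the callback fires late because the machine is under load, the elapsed value stays correct, because it is computed from the counter difference rather than accumulated per tick.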

Related

Best approach for dealing with time measures? [duplicate]

This question already has answers here:
How do I write a correct micro-benchmark in Java?
(11 answers)
Closed 5 years ago.
My goal is to write a framework for measuring method execution or transaction time and for processing the measurements, i.e. storing, analysis etc. Transaction may include calls to external systems, with synchronously or asynchronously waiting for the results.
There already have been some questions around that topic, like
"How do I time a method's execution"
"Measure execution time for a Java method"
"System.currentTimeMillis vs System.nanoTime"
And all the answers boil down to three approaches for taking the time
System.currentTimeMillis()
System.nanoTime()
Instant.now() and Duration (since Java 8)
I know, all of these have some implications
System.currentTimeMillis()
The result of this method depends on the platform. On Linux you get 1ms resolution; on Windows you get 10ms (single core) to ~15ms (multi core). So it's OK for measuring long-running operations or multiple executions of short-running ops.
System.nanoTime()
You get a high-resolution time measure with nanosecond precision (but not necessarily nanosecond accuracy), and you get an overflow after 292 years (I could live with that).
Instant.now() and Duration
Since Java 8 there is the new time API. An Instant has a second and a nanosecond field, so on top of the object reference it uses two long values (same for Duration). You also get nanosecond precision, depending on the underlying clock (see "Java 8 Instant.now() with nanosecond resolution?"). The instantiation is done by invoking Instant.now(), which maps down to System.currentTimeMillis() for the normal system clock.
Given the facts, it becomes apparent, that best precision is only achievable with System.nanoTime(), but my question targets more towards a best-practice for dealing with the measures in general, which not only includes the measure taking but also the measure handling.
Instant and Duration provide the best API support (calculating, comparing, etc.) but have OS-dependent precision in the standard case, and more overhead for memory and for creating a measure (object construction, deeper call stack).
System.nanoTime() and System.currentTimeMillis() have different levels of precision and only basic "API" support (math operations on long), but they are faster to obtain and smaller to keep in memory.
So what would be the best approach? Are there any implications I didn't think of? Are there any alternatives?
You are focusing too much on the unimportant detail of the precision. If you want to measure/profile the execution of certain operations, you have to make sure that these operation run long enough to make the measurement unaffected by one-time artifacts, small differences in thread scheduling timing, garbage collection or HotSpot optimization. In most cases, if the differences become smaller than the millisecond scale, they are not useful to draw conclusions from them.
The more important aspect is whether the tools are designed for your task. System.currentTimeMillis() and all other wall-clock based APIs, whether they are based on currentTimeMillis() or not, are designed to give you a clock which is intended to be synchronized with Earth’s rotation and its path around the Sun, which loads it with the burden of Leap Seconds and other correction measures, not to speak of the fact that your computer’s clock may be out of sync with the wall clock anyway and get corrected, e.g. via NTP updates, in the worst case jumping right when you are trying to measure your elapsed time, perhaps even backwards.
In contrast, System.nanoTime() is designed to measure elapsed time (exactly what you want to do) and nothing else. Since its return value has an unspecified origin and may even be negative, only differences between two values returned by this method make any sense at all. You will find this even in the documentation:
The values returned by this method become meaningful only when the difference between two such values, obtained within the same instance of a Java virtual machine, is computed.
So when you want to measure and process the elapsed time of your method execution or transactions, System.nanoTime() is the way to go. Granted, it only provides a naked long value, but it isn’t clear what kind of API support you want. Since points of time are irrelevant and even distracting here, you’ll have a duration only, which you may convert to other time units or, if you want to use the new time API, you can create a Duration object using Duration.ofNanos(long), allowing you to add and subtract duration values and compare them, but there isn’t much more you could do. You must not mix them up with wall-clock or calendar based durations…
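A minimal sketch of that pattern, wrapping a raw System.nanoTime() difference in a Duration (the sleep here merely stands in for the method or transaction being measured):

```java
import java.time.Duration;

public class ElapsedDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(250); // stand-in for the work being measured
        long elapsedNanos = System.nanoTime() - start;

        // Wrap the raw difference in a Duration for comparison/arithmetic.
        Duration elapsed = Duration.ofNanos(elapsedNanos);
        System.out.println(elapsed.toMillis() + " ms");

        // Durations can be added, subtracted and compared:
        Duration doubled = elapsed.plus(elapsed);
        System.out.println(doubled.compareTo(elapsed) > 0);
    }
}
```

Note that only the difference is meaningful; the raw start value by itself tells you nothing, since nanoTime's origin is unspecified.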
As a final note, the documentation is a bit imprecise about the limitation. If you are calculating the difference between two values returned by System.nanoTime(), a numerical overflow isn’t bad per se. Since the counter has an unspecified origin, the start value of your operation might be close to Long.MAX_VALUE whereas the end value is close to Long.MIN_VALUE because the JVM’s counter had an overflow. In this case, calculating the difference will cause another overflow, producing a correct value for the difference. But if you store that difference in a signed long, it can hold at most 2⁶³ nanoseconds, limiting the difference to max 292 years, but if you treat it as unsigned long, e.g. via Long.compareUnsigned and Long.toUnsignedString, you may handle even 2⁶⁴ nanoseconds duration, in other words you can measure up to 584 years of elapsed time this way, if your computer doesn’t break in-between…
I would recommend using getThreadCpuTime from ThreadMXBean (see also https://stackoverflow.com/a/7467299/185031). If you want to measure the execution time of a method, you are most of the time not so much interested in the wall clock, but more in the CPU execution time.
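A short sketch of that approach via ThreadMXBean (the variant for the calling thread is getCurrentThreadCpuTime(); the busy loop is just placeholder work):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeDemo {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isCurrentThreadCpuTimeSupported()) {
            System.out.println("CPU time measurement not supported on this JVM");
            return;
        }
        long cpuStart  = bean.getCurrentThreadCpuTime(); // ns of CPU time
        long wallStart = System.nanoTime();              // ns of wall time

        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i;   // placeholder work

        long cpuNs  = bean.getCurrentThreadCpuTime() - cpuStart;
        long wallNs = System.nanoTime() - wallStart;
        System.out.println("cpu:  " + cpuNs / 1_000_000 + " ms (sum=" + sum + ")");
        System.out.println("wall: " + wallNs / 1_000_000 + " ms");
    }
}
```

If the thread blocks on I/O or an external call, the CPU time stops accumulating while the wall time keeps running, which is exactly the distinction the answer is pointing at.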

Does Thread.sleep() use the same clock as System.nanoTime()

This question is about the relation between library functions that do some kind of wait, e.g. Thread.sleep(long), Object.wait(long), BlockingQueue.poll(long, TimeUnit), and the values returned by System.nanoTime() and System.currentTimeMillis().
As I understand, there are at least two, mostly independent clocks a Java application has access to:
System.currentTimeMillis(), which is basically the Wall Clock Time, which also means that the user and system software like an NTP daemon may fiddle with it from time to time, possibly leading to that value jumping around in any direction and amount.
System.nanoTime(), which is guaranteed to be monotonically and more or less steadily increasing, but may drift because of not-so-accurate processor clock frequencies and artifacts caused by power-saving mechanisms.
Now I understand that library functions like Thread.sleep() need to rely on some platform-dependent interface to suspend a thread until the specified amount of time has passed, but is it safe to assume that the time measured by these functions is based on the values of System.nanoTime()?
I know that none of these functions guarantee to measure time more accurately than a few milliseconds, but I am interested in very long (as in hours) waits. I.e. if I call Thread.sleep(10 * 3600 * 1000), the time measured by the two clocks can differ by a few minutes, but I assume that one of them will be within a fraction of a second of the requested 10 hours. And if either of the two clocks is, I'm assuming it's the one used by System.nanoTime(). Are these assumptions correct?
No, it is not safe to assume that Thread.sleep is based on System.nanoTime.
Java relies on the OS to perform thread scheduling, and it has no control over how the OS performs it.
For the most part, time-based APIs that existed pre-JDK 1.5 (Timer, Thread.sleep, wait(long)) all use millisecond-based time. Most of the concurrency utilities added in JDK 1.5+ (java.util.concurrent.*) use nano-based time.
However, I don't think the JVM guarantees these behaviors, so you certainly shouldn't depend on behavior one way or the other.

How long is one millisecond in the Java world?

I have different reasons for asking this question.
What was the decision to switch to micro-seconds in JSR 310: Date and Time API based on?
If I measure time with System.currentTimeMillis(), how can I interpret 1ms? How many method calls, how many sysouts, how many HashMap#pushs?
I'm absolutely aware of the low scientific standard of this question, but I'd like to have some default values for java operations.
Edit:
I was talking about:
long t1 = System.currentTimeMillis();
// do random stuff
System.out.println(System.currentTimeMillis() - t1);
What was the decision to switch to micro-seconds in JSR 310: Date and Time API based on?
Modern hardware has had microsecond-precision clocks for quite some time; and it is not new to JSR 310. Consider TimeUnit, which appeared in 1.5 (along with System.nanoTime(), see below) and has a MICROSECONDS value.
If I measure time with System.currentTimeMillis() how can I interpret 1ms?
As accurate as the hardware clock/OS primitive combo allows. There will always be some skew. But unless you do "real" real time (and you shouldn't be doing Java in this case), you will likely never notice.
Note also that this method measures the number of milliseconds since the epoch, and it depends on clock adjustments. This is unlike System.nanoTime(), which relies on an onboard tick counter which never "decrements" with time.
FWIW, Timer{,Task} uses System.currentTimeMillis() to measure time, while the newer ScheduledThreadPool uses System.nanoTime().
As to:
How many method-calls, how many sysouts, how many HashMap#pushs (in 1 ms)
Impossible to tell! Method calls depend on what the methods do, sysouts depend on the speed of your stdout (try a sysout on a 9600 baud serial port), pushes depend on memory bus speed/CPU cache, etc. You didn't actually expect an accurate answer for that, did you?
A System.currentTimeMillis() millisecond is almost exactly a millisecond, except when the system clock is being corrected, e.g. using the Network Time Protocol (NTP). NTP can cause significant leaps in time, either forward or backward, but this is rare. If you want a monotonically increasing clock with more resolution, use System.nanoTime() instead.
How many method-calls, how many sysouts, how many HashMap#pushs (in 1 ms)
Empty method calls can be eliminated by the JIT, so only sizable method calls matter. What the method does is more important. You can expect between 1 and 10,000 method calls in a millisecond.
sysouts are very dependent on the display and whether it has been paused. Under normal conditions, you can expect 1 to 100 lines of output in 1 ms, depending on line length and the display. If the stream is paused for any reason, you might not get any.
HashMap has a put(), not a push(), and you can do around 1,000 to 10,000 of these in a millisecond. The problem is you have to have something worth putting, and that usually takes longer.
The answers of Peter Lawrey and fge are so far correct, but I would add the following detail regarding your statement about microsecond resolution of JSR-310: this new API uses nanosecond resolution, not microsecond resolution.
The reason for this is not that clocks based on System.currentTimeMillis() might achieve such a high degree of precision. No, such clocks just count milliseconds and are sensitive to external clock adjustments, which can even cause big jumps in time far beyond any subsecond level. The support for nanoseconds is rather motivated by the need to support nanosecond-based timestamps in many databases (not all!).
It should be noted though that this "precision" is not real-time accuracy; it serves more to avoid duplicate timestamps (which are often created via mixed clock-and-counter mechanisms, not by measuring scientifically accurate real time). Such database timestamps are often used as primary keys by some people (hence their requirement for a non-duplicate, monotonically increasing timestamp).
The alternative System.nanoTime() is designed to show better monotonically increasing behaviour, although I would not bet my life on that in some multi-core environments. But here you indeed get nanosecond-like differences between timestamps, so JSR-310 can at least give calculatory support via classes like java.time.Duration (but again, not necessarily scientifically accurate nanosecond differences).

How is the Timer class in Java sensitive to the system clock? [duplicate]

This question already has answers here:
What should Timertask.scheduleAtFixedRate do if the clock changes?
(2 answers)
Closed 9 years ago.
This highly voted answer on SO regarding the differences between a Timer and a ScheduledThreadPoolExecutor mentions the following while enumerating the differences:
Timer can be sensitive to changes in the system clock,
ScheduledThreadPoolExecutor isn't.
The above is mentioned verbatim inside the great book Java Concurrency in Practice.
I understand the points mentioned in that answer except the above mentioned one. What does it mean to say that Timers can be sensitive to system clock whereas ScheduledThreadPoolExecutors are not?
Timer uses System.currentTimeMillis(), which represents wall clock time and should never be used for checking relative times such as determining how long something took to run or, in this case, how long to delay before executing a task. System.currentTimeMillis() is affected by things like automatic adjustments to the system clock or even manually changing the system clock. As such, if you call it twice and check the difference between the times you get, you can even get a negative number.
System.nanoTime(), on the other hand, is specifically intended for measuring elapsed time and is what should be used for this sort of thing.
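A small illustration of the difference (under normal conditions both numbers will be close; only the nanoTime-based one is guaranteed not to jump if the wall clock is adjusted mid-measurement):

```java
public class DelayMeasure {
    public static void main(String[] args) throws InterruptedException {
        // Wall-clock difference: can jump, or even come out negative,
        // if the system clock is adjusted between the two calls.
        long wallStart = System.currentTimeMillis();
        // Monotonic difference: unaffected by system clock adjustments.
        long monoStart = System.nanoTime();

        Thread.sleep(100); // stand-in for a scheduled task's delay

        long wallElapsed = System.currentTimeMillis() - wallStart;
        long monoElapsed = (System.nanoTime() - monoStart) / 1_000_000;
        System.out.println("wall: " + wallElapsed + " ms, mono: " + monoElapsed + " ms");
    }
}
```

Timer effectively relies on the first kind of difference for its scheduling decisions, ScheduledThreadPoolExecutor on the second, which is the sensitivity the quoted passage refers to.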
Timer uses System.currentTimeMillis() to determine the next execution of a task.
However ScheduledThreadPoolExecutor uses System.nanoTime().
Also, the nanoTime() method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
System.currentTimeMillis() is sensitive to the system clock.
System.nanoTime() is not sensitive to the system clock, as it only measures elapsed time.
Java Documentation: System.nanoTime()
This method can only be used to measure elapsed time and is not
related to any other notion of system or wall-clock time. The value
returned represents nanoseconds since some fixed but arbitrary time
(perhaps in the future, so values may be negative).
Looking at the source, the Timer class schedules tasks using System.currentTimeMillis(), while ScheduledThreadPoolExecutor uses System.nanoTime().
currentTimeMillis() will tend to use the OS clock, the one that tracks the current date/time. nanoTime() will tend to use a higher-resolution hardware clock.
If you move your OS clock back an hour, then currentTimeMillis() should reflect that while nanoTime() should not.

Does Java enforce as-if-serial for single threaded applications

I am running some JUnit tests on a single thread and they are failing in a non-deterministic way. I had one person tell me that the optimizing JVM (Oracle Hotspot 64-Bit 17.1-b03) is executing the instructions out of order for speed. I have trouble believing that the java spec would allow that, but I can't find the specific reference.
Wikipedia states that a single thread must enforce within-thread as-if-serial semantics, so I shouldn't have to worry about the execution order differing from what I wrote.
http://en.wikipedia.org/wiki/Java_Memory_Model#The_memory_model
Example code:
@Test
public void testPersistence() throws Exception
{
    // Setup
    final long preTestTimeStamp = System.currentTimeMillis();
    // Test
    persistenceMethod();
    // Validate
    final long postTestTimeStamp = System.currentTimeMillis();
    final long updateTimeStamp = -- load the timestamp from the database -- ;
    assertTrue("Updated time should be after the pretest time", updateTimeStamp >= preTestTimeStamp);
    assertTrue("Updated time should be before the posttest time", updateTimeStamp <= postTestTimeStamp);
}
void persistenceMethod()
{
    ...
    final long updateTime = System.currentTimeMillis();
    ...
    -- persist updateTime to the database --
    ...
}
When this test code is run it behaves completely non-deterministically: sometimes it passes, sometimes it fails on the first assert, and sometimes it fails on the second assert. The values are always within a millisecond or two of each other, so it isn't that the persistence is just failing completely. Adding a Thread.sleep(2); between each statement decreases the number of failures, but doesn't eliminate them completely.
Is it possible that this is the fault of the JVM or is it more likely that the database (MsSql) is doing some sort of rounding of the stored data?
The possibility that the JVM is executing statements out of order is so remote that I think you can pretty much dismiss it. If the JVM had a bug like that, it would be showing up in a lot of places besides this one program of yours.
It is true that currentTimeMillis is not guaranteed to actually be accurate to the millisecond. But the possibility that the clock would run backwards is almost as remote as the possibility that the JVM is executing statements out of order.
I've written many, many programs that test how long it takes a function I'm interested in to execute by taking the currentTimeMillis before it starts, executing the function, getting currentTimeMillis when it's done, and subtracting to find an elapsed time. I have never had such a program give me a negative time.
Some possibilities that occur to me:
There's an error in your code to save the timestamp to the database or to read it back. You don't show that code, so we have no way to know if there's a bug there.
Rounding. I don't have a MySQL instance handy, so I'm not sure what the precision of a timestamp is. If it's not as precise as a millisecond, then this would readily explain your problem. For example, say it's only accurate to the second. You get pre time=01:00:00.1, update time=01:00:00.2, post time=01:00:00.4. But update time gets saved as 01:00:00 because that's the limit of precision, so when you read it back, update time < pre time. Likewise, suppose the times are 01:00:00.4, 01:00:00.6, 01:00:00.7. Update time gets rounded to 01:00:01. So update time > post time.
Time zones. Default time zone is an attribute of a connection. If when you write the time you are set to, say, Eastern Time, but when you read it back you are on Pacific Time, then the order of the times will not be what you expected.
Instead of just looking at the relationships, why don't you print the values of all three timestamps? I'd print them as int's and also as Gregorian dates. Oh, and I'd print update time before saving and again after reading it back. Maybe something would become apparent.
If, for example, you see that the update time as read back always ends with one or more zeros even when the time as saved had non-zero digits, that would indicate that your times are being truncated or rounded. If the time as read back differs from the time as written by an exact multiple of 1 hour, that might be a time zone problem. If post time is less than pre time, that either indicates a serious problem with your system clock or, more likely, a program bug that's mixing up the times. Etc.
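A sketch of that kind of diagnostic printing (the three values here are fabricated stand-ins for the real pre/update/post stamps loaded in the test):

```java
import java.time.Instant;

public class TimestampDebug {
    public static void main(String[] args) {
        // Hypothetical values standing in for preTest/update/postTest stamps.
        long pre    = System.currentTimeMillis();
        long update = pre + 1; // pretend this came back from the database
        long post   = pre + 3;

        for (long t : new long[] {pre, update, post}) {
            // Print both the raw long and a human-readable instant, so that
            // truncation (trailing zeros) or timezone shifts stand out.
            System.out.println(t + " -> " + Instant.ofEpochMilli(t));
        }
    }
}
```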
Should be easy enough to determine whether the database (or your persistence code) is doing something. Have your persistenceMethod() return the value it persisted and compare it with what you read back. They surely should match.
I wonder whether it's the trustworthiness of currentTimeMillis() that's in question:
Returns the current time in milliseconds. Note that while the unit of
time of the return value is a millisecond, the granularity of the
value depends on the underlying operating system and may be larger.
For example, many operating systems measure time in units of tens of
milliseconds.
Given that you are doing a >= test, I can't quite see how that might manifest, but it makes me wonder exactly what times you are getting.
That is really strange. Java will certainly not rearrange statements and execute them in a different order if those statements might have side effects which affect subsequent statements.
I think this error happens because System.currentTimeMillis is not as precise as you think. The API documentation of that method says:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
It sounds strange, but time might even seem to be going backwards in some cases, so the value that currentTimeMillis returns at one moment can be lower than what it returned an instant earlier. See this question: Will System.currentTimeMillis always return a value >= previous calls?
