How to use an NTP server when scheduling tasks in a Java app? - java

I know NTP servers can be used to synchronize your computer's system clock. But can NTP be used by an application that wants to schedule things in sync with other systems?
Scenario: Developing a java app (perhaps to run in an ESB like Mule) and you won't necessarily be able to control the time of the machine on which it will run. Can your app use an NTP server to obtain the time and schedule tasks to run based on that time?
Let's say you're using Quartz as the scheduler and perhaps joda-time for handling times (if that's useful). The time doesn't have to be super precise, just want to make sure not ahead of remote systems by much.

If you're not too worried about drift, and assuming the machines aren't randomly changing time, then you could ping an NTP server to get what time IT thinks it is, compare that to the time your local machine thinks it is, calculate the differential, and finally schedule your task in local time.
So, for example, say that the NTP server says that it's 12:30, but your local machine says that it is 12:25. And you want your task to go off at 13:00 NTP time.
So, 12:25 - 12:30 = -0:05. 13:00 + (-0:05) = 12:55, therefore you schedule your task for 12:55.
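That offset arithmetic can be sketched in plain Java; the class and method names here are illustrative, not from any library:

```java
import java.time.Duration;
import java.time.LocalTime;

public class NtpOffsetMath {
    // Translate a desired NTP-clock fire time into the equivalent local-clock
    // time, using a single comparison of the two clocks.
    public static LocalTime localFireTime(LocalTime localNow, LocalTime ntpNow,
                                          LocalTime desiredNtpTime) {
        Duration offset = Duration.between(ntpNow, localNow); // local minus NTP
        return desiredNtpTime.plus(offset);
    }

    public static void main(String[] args) {
        // NTP says 12:30, local clock says 12:25, task should fire at 13:00 NTP time.
        LocalTime fireAt = localFireTime(LocalTime.of(12, 25),
                                         LocalTime.of(12, 30),
                                         LocalTime.of(13, 0));
        System.out.println(fireAt); // 12:55
    }
}
```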
Addenda --
I can't speak to the naivety of an implementation, I'm not familiar enough with the protocol.
In the end it comes down to what level of practical accuracy is acceptable to you. NTP is used to synchronize time between systems, and one of the problems it solves, by being continually invoked, is clock creep. If you use the "NTP ping, schedule with offset" technique and the scheduled time is, say, 8 hours in the future, there's a very real possibility of clock creep: although you wanted the task to go off at "12:55", when 12:55 rolls around it could be off from the original NTP server, since the clocks have not been synced (at all) and the job has not been rescheduled to virtually resync.
Obviously, the longer the period between the original schedule and the actual execution, the greater the potential for drift. This is an artifact no matter how good the original NTP ping is. If you do not plan on rescheduling these tasks as they get close to execution time in order to compensate for drift, then odds are any "reasonable" implementation of NTP will suit.
There's the Apache Commons Net library, which has an NTP client. Some complain that it uses System.currentTimeMillis(), which has (had?) resolution issues (10-15 ms) on Windows. System.nanoTime addresses this, and you could easily change the library to use that and rebuild it.
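A rough sketch of querying that Commons Net client for an offset; the server hostname and timeout are placeholder choices, and this needs network access plus the commons-net jar on the classpath:

```java
import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class NtpPing {
    public static void main(String[] args) throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(3000); // don't hang if the server is unreachable
        try {
            // pool.ntp.org is just an example; use a server appropriate for you
            TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
            info.computeDetails(); // derives offset/delay from the packet timestamps
            Long offsetMs = info.getOffset(); // estimated local-clock offset, in ms
            System.out.println("Estimated clock offset: " + offsetMs + " ms");
        } finally {
            client.close();
        }
    }
}
```

You would then feed that offset into the scheduling arithmetic described above.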
I can't speak to how it reflects the "naivety" of the implementation. But in the end it comes down to how close you need to keep the two machines and their jobs (virtually) in sync.

My intuition tells me that NTP requires hardware clock adjustments to keep pace. So if you don't have access to the hardware, you cannot do it.
However, if a few seconds of precision is enough, you could periodically fetch a sample time from a server, calculate the skew against the system clock, and adjust the scheduled time for your jobs.
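A minimal sketch of that periodic-sampling idea; the fetchServerTimeMs() method is a placeholder standing in for a real remote call (NTP, HTTP time endpoint, etc.):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SkewTracker {
    private final AtomicLong skewMs = new AtomicLong(0);

    // Placeholder: in a real app this would query your time server.
    protected long fetchServerTimeMs() {
        return System.currentTimeMillis(); // assumption: replace with a remote call
    }

    // Re-sample the skew periodically so clock creep stays bounded.
    public void startSampling(ScheduledExecutorService scheduler) {
        scheduler.scheduleAtFixedRate(
                () -> skewMs.set(System.currentTimeMillis() - fetchServerTimeMs()),
                0, 10, TimeUnit.MINUTES);
    }

    // Server-clock "now", estimated from the local clock and the last sampled skew.
    public long serverNowMs() {
        return System.currentTimeMillis() - skewMs.get();
    }
}
```

Scheduled jobs can then be anchored to serverNowMs() instead of the raw local clock.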

But can NTP be used by an application that wants to schedule things in sync with other systems?
I've never heard of it being used that way. However, there's nothing to stop you implementing a client for the Network Time Protocol (RFC 1305). A full NTP implementation is probably overkill, but you can also use the protocol in SNTP mode (RFC 2030).
You probably want to set up and use a local NTP server if you want high availability and reasonable accuracy.
A Google search indicates that there are a number of Java NTP clients out there ...

Related

Changing time on device => cheating in the game

In some games people can cheat by changing the time. For example: when they have to wait 30 minutes before a building is built.
Can I prevent this, assuming that the devices have a connection with my server?
I am programming in Java using the libGDX library.
Are there any solutions that work on both iOS and Android?
Store the time they have to wait until on the server (the client tells the server it is performing a task; the server logs that time and when the task can be done again), and make the client ask the server whether it can in fact perform the action. Anytime you store something on the client, it is likely that there will be a way around it.
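A minimal server-side sketch of that idea (the class, method, and field names are hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CooldownService {
    // playerId -> epoch millis at which the build completes, kept on the server
    private final ConcurrentMap<String, Long> readyAtMs = new ConcurrentHashMap<>();

    // Called when the player starts a build; the server records when it finishes.
    public void startBuild(String playerId, long durationMs) {
        readyAtMs.put(playerId, System.currentTimeMillis() + durationMs);
    }

    // The client must ask the server whether the build is done;
    // only the server's clock decides, so changing the device time is useless.
    public boolean isFinished(String playerId) {
        Long readyAt = readyAtMs.get(playerId);
        return readyAt != null && System.currentTimeMillis() >= readyAt;
    }
}
```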
Your best bet would be to use SystemClock.elapsedRealtime() as an assistant to an infrequent server side timecheck.
Returns the time since the system was booted, including deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power saving modes, so it is the recommended basis for general purpose interval timing.
After verifying the time from your server, you can do local checks against SystemClock.elapsedRealtime until the next boot up.
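A sketch of that local check; it uses System.nanoTime as the monotonic source so it runs on plain Java, whereas on Android you would substitute SystemClock.elapsedRealtime(), which also survives deep sleep (the class name is illustrative):

```java
public class MonotonicCooldown {
    private final long readyAtNanos;

    // serverRemainingMs comes from a trusted server-side time check; from then
    // on we only consult the monotonic clock, which the user cannot set.
    public MonotonicCooldown(long serverRemainingMs) {
        this.readyAtNanos = System.nanoTime() + serverRemainingMs * 1_000_000L;
    }

    public boolean isReady() {
        // Subtract-and-compare is safe against nanoTime overflow.
        return System.nanoTime() - readyAtNanos >= 0;
    }
}
```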

Time synchronization

I am creating a web application in Java in which I need to run a reverse timer on client browser. I have planned to send the remaining time from server to client and then tick the timer using javascript.
My questions are:
1. Does the clock tick rate vary across different systems?
2. Is there any better way to do this?
Does the clock tick rate vary across different systems?
Yes, it's the result of really, really small differences in the frequencies of the quartz crystals used in chipsets. So if you do not synchronize your clocks now and then, they will diverge.
However, if you're not designing a satellite, remote control for ballistic missiles, or life supporting devices, you really should not care.
Is there any better way to do this?
Yes, if:
your reverse clock counts down from a year, or at least a month, or
you are running your client on a device with a broken / really inaccurate clock,
you may use the NTP protocol to make sure the client and server clocks are synchronized. There are NTP libraries available for JavaScript and Java.
@npe's solution with NTP will do, but it is theoretically incorrect:
Even if the clocks are perfectly synced, you will send the client the remaining time. But that message needs to travel over the network, so by the time the client receives it, it won't be correct anymore.
A better approach would be to send the client the end time, which is an absolute value and hence not affected by network lag, and do the countdown on the client, calculating the remaining time there.
That said, the other answers about NTP are of course also necessary.
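A minimal sketch of the end-time approach (names are illustrative):

```java
public class Countdown {
    private final long endEpochMs; // absolute end time, sent once by the server

    public Countdown(long endEpochMs) {
        this.endEpochMs = endEpochMs;
    }

    // Remaining time is recomputed locally from the absolute end time, so
    // the lag in delivering endEpochMs does not skew the countdown
    // (assuming the client and server clocks are synchronized, e.g. via NTP).
    public long remainingMs(long nowEpochMs) {
        return Math.max(0, endEpochMs - nowEpochMs);
    }
}
```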

Stable timer independent of system time

I use java.util.Timer to trigger jobs in my app, but I found that it depends on system time: if the system time is adjusted, the timer's triggers are affected.
For example, if the system time goes back by 80 seconds, the timer will stop working for 80 seconds.
Java has a System.nanoTime method which is independent of system time, but it seems that it cannot be used with Timer.
Is there a Timer library that supports what I need? Or do I have to implement it myself?
Note that I don't need a precise current time (date); I need a precise time interval.
You have three options:
Write your own timer; it's relatively trivial. Before anyone says "don't reinvent the wheel": being able to carry out the task yourself is always important, especially when it's around 20 lines of code.
Use java.util.concurrent.ScheduledThreadPoolExecutor with scheduleAtFixedRate; the queue implementation is based on System.nanoTime.
Install an NTP daemon that adjusts the time by tuning (slowing down, speeding up) the system clock in a very slight fashion over a long time span.
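A small sketch of the ScheduledThreadPoolExecutor option; since its period is measured against the monotonic nanoTime clock, setting the wall clock back does not stall the task (the helper method here is just for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedRateDemo {
    // Returns true if `ticks` executions of the task happened within timeoutMs.
    public static boolean awaitTicks(int ticks, long periodMs, long timeoutMs)
            throws InterruptedException {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
        CountDownLatch latch = new CountDownLatch(ticks);
        // The period is tracked internally with System.nanoTime, not wall time.
        executor.scheduleAtFixedRate(latch::countDown, 0, periodMs,
                TimeUnit.MILLISECONDS);
        try {
            return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow();
        }
    }
}
```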
A few other things: a perfect 100 ms is a tall order in a non-real-time GC environment, as GC may stop the world (STW) for seconds sometimes. Sun's GCs can't do that super reliably. IBM's Metronome, running on a modified Linux kernel, is supposed to be able to. You may wish to consider whether your application is truly real-time demanding.
If your computer is isolated and off the Internet, I think there is not much you can do if the user tampers with the clock.
On the other hand, if this is not the case, you will find quite a few APIs and mashups that will allow you to read the correct time. You could read the time from time.gov, for example. Also, twinsun.com will give you lots of additional options.
100 ms seems too low for Internet time access.

Measuring latency

I'm working on a multiplayer project in Java and I am trying to refine how I gather my latency measurement results.
My current setup is to send a batch of UDP packets at regular intervals that get timestamped by the server and returned; latency is then calculated and recorded. I take a number of samples, then work out the average to get the latency.
Does this seem like a reasonable solution to work out the latency on the client side?
I would have the client timestamp the outgoing packet, and have the response preserve the original timestamp. This way you can compute the roundtrip latency while side-stepping any issues caused by the server and client clocks not being exactly synchronized.
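A rough sketch of that echo-and-timestamp idea over loopback UDP (the packet format and method names are made up for illustration):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class RttProbe {
    // Echo one packet back unchanged, preserving the client's timestamp.
    public static void echoOnce(DatagramSocket serverSocket) throws Exception {
        byte[] buf = new byte[Long.BYTES];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        serverSocket.receive(packet);
        serverSocket.send(new DatagramPacket(packet.getData(), packet.getLength(),
                packet.getAddress(), packet.getPort()));
    }

    // Send a nanoTime-stamped packet and compute the round trip on return.
    // Only the client's clock is involved, so clock synchronization doesn't matter.
    public static long measureRttNanos(int serverPort) throws Exception {
        try (DatagramSocket client = new DatagramSocket()) {
            client.setSoTimeout(2000);
            byte[] out = ByteBuffer.allocate(Long.BYTES)
                    .putLong(System.nanoTime()).array();
            client.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), serverPort));
            DatagramPacket reply = new DatagramPacket(new byte[Long.BYTES], Long.BYTES);
            client.receive(reply);
            long sentAt = ByteBuffer.wrap(reply.getData()).getLong();
            return System.nanoTime() - sentAt;
        }
    }
}
```

In a real game you would embed the timestamp in your existing protocol packets rather than run a separate probe.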
You could also timestamp the packets already used in your game protocol, so you will have more data to feed your statistics. (This method also avoids the overhead of an additional burst of data: you simply use the data you are already exchanging to do your stats.)
You could also start to use other metrics (for example variance) in order to make a more accurate estimation of your connection quality.
If you haven't really started your project yet, consider using a networking framework like KryoNet, which has RMI and efficient serialisation and which will automatically send ping requests using UDP. You can get the ping time values easily.
If you are measuring round-trip latency, factors like clock drift and the precision of the hardware clock and OS API will affect your measurement. Without spending money on hardware, the closest you can get is by using the RDTSC instruction. But RDTSC doesn't come without its own problems; you have to be careful how you call it.

Sporadic behavior by the machines in stress

We are doing some Java stress runs (involving network IO). Initially things are all fine and the system responds very fast (avg latency in test 2ms). But hours later when I redo the same test I observe the performance goes down (20 - 60ms). It's the same Jar files, same JVM, and the same LAN over which the stress is running. I am not understanding the reason for this behavior.
The LAN is 1 Gbps, and for the stress requirements I'm sure we are not using all of it.
So my questions:
Can it be because of some switches in the LANs?
Does the machine slow down after some time? (The machines were restarted, say, about 6 months back, well before the stress started; they are RHEL5, Xeon 64-bit quad core.)
What is the general way to debug such issues?
A few questions...
How much of the environment is under your control and are you putting any measures in place to ensure it's consistent for each run? i.e. are you sharing the network with other systems, is the machine you're using being used solely for your stress testing?
The way I'd look at this is to start gathering details on what your machine and code are up to. That means using perfmon (Windows) or sar (Unix) to find out what the OS and hardware are doing, and getting a profiler attached to make sure your code is doing the same thing and to help pinpoint where the bottleneck is occurring from a code perspective.
Nothing terribly detailed but something I hope that will help get you started.
The general way is "measure everything". This, in particular might mean:
Ensure time on all servers is the same (use ntp or something similar);
Measure how long it takes to generate a request (what if the request generator has a bug?);
Measure when a request leaves the client machine(s), or at least how long its I/O takes. Sometimes it is enough to know the average time over many requests.
Measure when the request arrives.
Measure how long it takes to generate a response.
Measure how long it takes to send the response.
You can probably start from the 5th element, as this is (you believe) your critical chain. But it is best to log as much as you can - as according to what you've said yourself, it takes days to produce different results.
If you don't want to modify your code, look for cases where you can sniff data without intervening (e.g. define a servlet filter in your web.xml).