Time synchronization - Java

I am creating a web application in Java in which I need to run a reverse timer in the client browser. I have planned to send the remaining time from the server to the client and then tick the timer using JavaScript.
My questions are:
1. Does the clock tick rate vary between systems?
2. Is there any better way to do this?

Does the clock tick rate vary between systems?
Yes. It's the result of really, really small differences in the frequencies of the quartz crystals used in the chipsets. So if you do not synchronize your clocks now and then, they will diverge.
However, unless you're designing a satellite, a remote control for ballistic missiles, or a life-support device, you really should not care.
Is there any better way to do this?
Yes. If:
your reverse clock counts down from a year, or at least a month, or
you are running your client on a device with a broken or really inaccurate clock,
then you may use the NTP protocol to make sure the client and server clocks stay synchronized. There are NTP libraries available for both JavaScript and Java.

npe's solution with NTP will do, but it is theoretically incorrect:
Even if the clocks are perfectly synced, you will be sending the client the remaining time. That message needs to travel over the network, so by the time the client receives it, the value is no longer correct.
A better approach is to send the client the end time, which is an absolute value and hence not affected by network lag, and then do the countdown on the client, calculating the remaining time there.
That said, the other answers about NTP are of course still necessary.
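
A minimal sketch of that idea in Java, with purely illustrative names (CountdownEndpoint, deadline): the server exposes the absolute end time as epoch milliseconds, and the remaining time is derived from it on the receiving side.

import java.time.Instant;

public class CountdownEndpoint {

    // Hypothetical deadline held on the server, e.g. loaded from the database.
    private final Instant deadline = Instant.now().plusSeconds(600);

    // Send the absolute end time (epoch millis) rather than the remaining time,
    // so network latency does not skew the value the client works with.
    public long endTimeMillis() {
        return deadline.toEpochMilli();
    }

    // What the client conceptually does on every tick (shown in Java for brevity):
    // remaining = endTime - now, clamped at zero.
    public static long remainingMillis(long endTimeMillis, long nowMillis) {
        return Math.max(0, endTimeMillis - nowMillis);
    }
}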

Related

Changing time on device => cheating in the game

In some games people can cheat by changing the time. For example: when they have to wait 30 minutes before a building is built.
Can I prevent this, assuming that the devices have a connection to my server?
I am programming in Java using the libGDX library.
Are there any solutions that work on both iOS and Android?
Store the time they have to wait until on the server (tell the server they performed a task; the server logs that time and when they can do it again) and make the client ask the server whether it can in fact perform the action. Any time you store something on the client, it is likely that there will be a way around it.
Your best bet would be to use SystemClock.elapsedRealtime() as an assistant to an infrequent server-side time check.
It returns the time since the system was booted, including deep sleep. This clock is guaranteed to be monotonic and continues to tick even when the CPU is in power-saving modes, so it is the recommended basis for general-purpose interval timing.
After verifying the time from your server, you can do local checks against SystemClock.elapsedRealtime() until the next boot.
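
A rough sketch of that pattern, assuming you fetch a trusted epoch time from your own server however you like; the ServerAnchoredClock name and its methods are hypothetical:

import android.os.SystemClock;

public class ServerAnchoredClock {

    private long serverTimeAtSync;      // trusted server time (epoch millis) at the last check
    private long elapsedRealtimeAtSync; // SystemClock.elapsedRealtime() at that same moment

    // Call this after an (infrequent) server round trip with whatever time the server returned.
    public void onServerTime(long serverEpochMillis) {
        serverTimeAtSync = serverEpochMillis;
        elapsedRealtimeAtSync = SystemClock.elapsedRealtime();
    }

    // Estimated "real" time now, immune to the user changing the wall clock:
    // elapsedRealtime() is monotonic and keeps ticking through deep sleep.
    public long now() {
        return serverTimeAtSync + (SystemClock.elapsedRealtime() - elapsedRealtimeAtSync);
    }

    // True once a task that should finish at finishAtEpochMillis is done.
    // Note: elapsedRealtime() resets on reboot, so re-verify with the server after a boot.
    public boolean isFinished(long finishAtEpochMillis) {
        return now() >= finishAtEpochMillis;
    }
}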

Best way to synchronize playback over 3G networks

I am trying to figure out a way to synchronize rhythm playback on 2+ mobile android devices.
Achieving good precision on WiFi / LAN is simple (very low latency), but I need a good solution for 3G networks with variable, high latency.
One idea I came up with is sending and timing messages and using the average time spans to compensate for latency, but this idea seems crude and I'm fairly sure there are better ways to solve this.
care to help?
I would first of all try to create a clock that is synchronized as closely as possible across all devices, which you can then use as a reference.
When devices communicate, they always include their local synchronized time in the message. This way you can always work out the difference between when the message was transmitted and when you received it, and you also know that the time at which the message says you should play a beat means the same thing on every device.
The real difficulty here is synchronizing the clocks. I would start by reading this article: http://en.wikipedia.org/wiki/Network_Time_Protocol
There is a Java-based NTP client here:
http://commons.apache.org/net/examples/ntp/NTPClient.java
If you get that to work, there are a number of NTP servers across the world.
http://www.pool.ntp.org/en/use.html
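
For reference, a minimal sketch of querying a pool server with the Apache Commons Net NTP client linked above. It does not touch the system clock; it only computes the offset that your app can then apply to System.currentTimeMillis():

import java.net.InetAddress;

import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class NtpOffset {
    public static void main(String[] args) throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(5000); // don't hang forever on a flaky 3G link
        try {
            // Any server from pool.ntp.org will do; pick a nearby pool in production.
            TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
            info.computeDetails(); // fills in offset and round-trip delay
            Long offsetMillis = info.getOffset(); // how far the local clock is from NTP time
            Long delayMillis = info.getDelay();   // round-trip delay of the query itself
            System.out.println("offset=" + offsetMillis + " ms, delay=" + delayMillis + " ms");
            // "Synchronized time" on this device is then System.currentTimeMillis() + offsetMillis.
        } finally {
            client.close();
        }
    }
}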
I have actually recently tried to solve this very same question myself. The best solution I found was NTP (the Network Time Protocol).
The main drawback is that it can take a long time to synchronise over a high-latency network, so you need an app that has been running for quite a while before the devices need to be synchronized.
The application I'm working on has not yet been tested with strict timing requirements, so I can't promise that this is a viable solution, but it is one worth trying.
If your devices are close enough together, you could see whether you are able to use Bluetooth to speed up peer-to-peer synchronisation, while using NTP to get the global time -- probably by adding Bluetooth-linked phones as extra remotes in your NTP config. This would entail pairing end-user devices, which may be an issue for you.

How to use an NTP server when scheduling tasks in a Java app?

I know NTP servers can be used to synchronize your computer's system clock. But can NTP be used by an application that wants to schedule things in sync with other systems?
Scenario: Developing a java app (perhaps to run in an ESB like Mule) and you won't necessarily be able to control the time of the machine on which it will run. Can your app use an NTP server to obtain the time and schedule tasks to run based on that time?
Let's say you're using Quartz as the scheduler and perhaps joda-time for handling times (if that's useful). The time doesn't have to be super precise; I just want to make sure it isn't ahead of the remote systems by much.
If you're not super worried about drift, and assuming that the machines aren't just randomly changing time, then you could ping an NTP server to get what time IT thinks it is, and compare that to the time your local machine thinks that it is, then calculate the differential and finally schedule your task in local time.
So, for example, say that the NTP server says that it's 12:30, but your local machine says that it is 12:25. And you want your task to go off at 13:00 NTP time.
So, 12:25 - 12:30 = -0:05. 13:00 + (-0:05) = 12:55, therefore you schedule your task for 12:55.
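
A small sketch of that arithmetic, using a plain ScheduledExecutorService rather than Quartz just to keep it short; offsetMillis is assumed to come from an NTP query (e.g. with the Apache Commons Net NTP client), defined as NTP time minus local time:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OffsetScheduler {

    // Schedule a task for an absolute NTP time, given offsetMillis = ntpTime - localTime.
    public static void scheduleAtNtpTime(ScheduledExecutorService executor,
                                         Runnable task,
                                         long ntpTargetMillis,
                                         long offsetMillis) {
        long localTargetMillis = ntpTargetMillis - offsetMillis;        // 13:00 NTP -> 12:55 local in the example
        long delayMillis = localTargetMillis - System.currentTimeMillis();
        executor.schedule(task, Math.max(0, delayMillis), TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        long offsetMillis = 5 * 60 * 1000; // pretend the local clock is 5 minutes behind NTP (12:25 vs 12:30)
        long ntpTargetMillis = System.currentTimeMillis() + offsetMillis + 60_000; // "one minute from now" in NTP time
        scheduleAtNtpTime(executor, () -> System.out.println("fired"), ntpTargetMillis, offsetMillis);
    }
}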
Addenda --
I can't speak to the naivety of an implementation, I'm not familiar enough with the protocol.
In the end it comes down to what level of practical accuracy is acceptable to you. NTP is used to synchronize time between systems; one of the problems it solves, by being continually invoked, is clock creep. If you use the "NTP ping, schedule with offset" technique and that future time is, say, 8 hours away, there is a very real possibility of clock creep: although you wanted the task to go off at "12:55", when 12:55 rolls around the local clock could be off from the original NTP server, since the clocks have not actually been synced and the job has not been rescheduled to virtually resync.
Obviously, the longer the period between original schedule and actual execution, the more the potential for drift. This is an artifact no matter how good the original NTP ping is. If you do not plan on rescheduling these tasks as they get close to execution time in order to compensate for drift, then odds are any "reasonable" implementation of NTP will suit.
There's the Apache Commons Net library, which has an NTP client. Some complain that it uses System.currentTimeMillis(), which has (had?) resolution issues (10-15 ms) on Windows. System.nanoTime() addresses this, and you could easily change the library to use that and rebuild it.
I can't speak to how it reflects the "naivety" of the implementation. But in the end it comes down to how close you need to keep the two machines and their jobs (virtually) in sync.
My intuition tells me that NTP requires hardware clock adjustments to keep pace, so if you don't have access to the hardware, you cannot do it.
However, if a precision of a few seconds is enough, you could periodically fetch a sample time from a server, calculate the skew against the system clock, and adjust the scheduled times of your jobs accordingly.
But can NTP be used by an application that wants to schedule things in sync with other systems?
I've never heard of it being used that way. However, there's nothing to stop you implementing a client for the Network Time Protocol (RFC 1305). A full NTP implementation is probably overkill, but you can also use the protocol in SNTP mode (RFC 2030).
You probably want to set up and use a local NTP server if you want high availability and reasonable accuracy.
A Google search indicates that there are a number of Java NTP clients out there ...

Measuring latency

I'm working on a multiplayer project in Java and I am trying to refine how I gather my latency measurement results.
My current setup is to send a batch of UDP packets at regular intervals; they get timestamped by the server and returned, then the latency is calculated and recorded. I take a number of samples and then work out the average to get the latency.
Does this seem like a reasonable solution to work out the latency on the client side?
I would have the client timestamp the outgoing packet, and have the response preserve the original timestamp. This way you can compute the roundtrip latency while side-stepping any issues caused by the server and client clocks not being exactly synchronized.
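
A self-contained sketch of that approach, with a local UDP echo loop standing in for the game server; only the client's clock is read, so no clock synchronization is needed:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class RttProbe {
    public static void main(String[] args) throws Exception {
        // Tiny echo "server" on localhost so the sketch runs on its own;
        // a real server would just copy the 8-byte timestamp back unchanged.
        DatagramSocket serverSocket = new DatagramSocket(9876);
        Thread echo = new Thread(() -> {
            byte[] buf = new byte[8];
            try {
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    serverSocket.receive(p);
                    serverSocket.send(new DatagramPacket(p.getData(), p.getLength(),
                                                         p.getAddress(), p.getPort()));
                }
            } catch (Exception ignored) {
                // socket closed, shut down quietly
            }
        });
        echo.setDaemon(true);
        echo.start();

        try (DatagramSocket client = new DatagramSocket()) {
            client.setSoTimeout(1000);
            InetAddress server = InetAddress.getByName("localhost");
            long totalNanos = 0;
            int samples = 10;
            for (int i = 0; i < samples; i++) {
                byte[] out = ByteBuffer.allocate(8).putLong(System.nanoTime()).array();
                client.send(new DatagramPacket(out, out.length, server, 9876));

                DatagramPacket reply = new DatagramPacket(new byte[8], 8);
                client.receive(reply);
                long sentNanos = ByteBuffer.wrap(reply.getData()).getLong(); // the timestamp we sent, echoed back
                totalNanos += System.nanoTime() - sentNanos;
            }
            System.out.println("average RTT = " + (totalNanos / samples / 1_000_000.0) + " ms");
        }
        serverSocket.close();
    }
}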
You could also timestamp the packets used in your game protocol, so you will have more data to feed into your statistics. (This method is also useful for avoiding the overhead of an additional burst of data: you simply use the data you are already exchanging to do your stats.)
You could also start to use other metrics (for example variance) in order to make a more accurate estimation of your connection quality.
If you haven't really started your project yet, consider using a networking framework like KryoNet, which has RMI and efficient serialisation and which will automatically send ping requests using UDP. You can get the ping time values easily.
If you are measuring round-trip latency, factors like clock drift, the precision of the hardware clock and the OS API will affect your measurement. Without spending money on hardware, the closest you can get is by using the RDTSC instruction. But RDTSC doesn't come without its own problems; you have to be careful how you call it.

Java: Anyone know of a library that detects the quality of an internet connection?

I know a simple URLConnection to Google can detect whether I am connected to the internet; after all, I'm fairly confident the internet is not all well and fine if I can't connect to Google. But what I am looking for at this juncture is a library that can measure how effective my connection to the internet is in terms of BOTH responsiveness and the bandwidth available. BUT, I do not want to measure how much bandwidth is potentially available, as that is too resource intensive. I really just need to be able to test whether or not I can receive something like X kB in Y amount of time. Does such a library already exist?
It's not really possible to be able to judge this. In today's world of ADSL 2+ with 20-odd Mb/s download speeds, you're largely governed by the speed of everything upstream from you. So if you're connecting to a site in another country, for example, the main bottleneck is probably the international link. If you're connected to a site in the same city as you are, then you're probably limited by that server's uplink speed (e.g. they might be 10MB/s and they'll be serving lots of people at once).
So the answer to the question "can I receive X KB in at most Y seconds" depends entirely on where you're downloading from. And therefore, the best way to answer that question is to actually start downloading from where ever it is you're planning to download, and then time it.
In terms of responsiveness, it's basically the same question. You can do an ICMP ping to the server in question, but many servers have firewalls that drop ICMP packets without replying, so it's not exactly accurate (besides, if the ping is much less than ~100 ms then the biggest contribution to latency probably comes from the server's internal processing, not the actual network, so an ICMP ping would be of little use anyway).
This is true in general of network characteristics - and the internet in particular (because it's so complex) - you can't reliably measure anything about site X and infer anything about site Y. If you want to know how fast site Y will respond, then you just have to connect to site Y and start downloading.
Calculating the user's ability to reliably download a given number of bits in a given period of time might be complex -- but you could start with some of the code found at http://commons.apache.org/net/. That can tell you latency and bandwidth, anyway.
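
As a rough starting point, a sketch that times the download of a known-size resource with plain HttpURLConnection; the probe URL is hypothetical, so point it at something you host yourself:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectionProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical test resource of known size; any stable URL you control will do.
        URL url = new URL("https://example.com/probe-100kb.bin");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        long start = System.nanoTime();
        try (InputStream in = conn.getInputStream()) {           // blocks until the response headers arrive
            long firstResponseNanos = System.nanoTime() - start; // rough responsiveness measure
            byte[] buf = new byte[8192];
            long bytes = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                bytes += n;
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("response after ~%.0f ms, ~%.1f kB/s over %d bytes%n",
                              firstResponseNanos / 1e6, bytes / 1024.0 / seconds, bytes);
        } finally {
            conn.disconnect();
        }
    }
}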
The answer may be wrong a millisecond (substitute any other period) after you've measured it.
Look at any application that gives you a "download time remaining" figure. Notice that it's generally incorrect and/or continually updating, and only becomes accurate at the last second.
Basically, so much change is inevitable over any reasonably complex network, such as the internet, that the only real measure is only available after the fact.
