Timer timer = new Timer(true);
timer.scheduleAtFixedRate(timerTask, 0, 1); // 1 = 1ms delay between each iteration
Each time it triggers it runs a super quick operation which essentially take no time at all: basically it takes the current milliseconds elapsed value and increments it while doing a quick lookup in a Map where the millis is the key.
Do you think 1ms delay would be too fast? Is this going to bog down the system? Are there any dangers in trying to use this super fast timer?
Maybe. A lot can happen in a millisecond on contemporary computers, so it depends on many things. You should probably figure out the slowest acceptable rate and then pick something reasonable between 1 ms and that number. At 1 ms this will execute around 86,400,000 times a day. Does that make sense for what you are trying to accomplish?
EDIT: As some of the comments to the question note, there might be a fundamental flaw in this approach if you assume that the timer will always succeed in executing at the rate you have provided. You can never make this assumption, regardless of the rate. It's hard to tell because there are very few details, but I have a sense you should look into using queues instead of a Map.
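For what it's worth, here is a minimal sketch of what the queue idea could look like, using java.util.concurrent.DelayQueue instead of polling a Map every millisecond; the TimedEvent and EventLoop names are invented for illustration, since the question gives few details about the actual data.
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical event type: becomes available once its target timestamp has passed.
class TimedEvent implements Delayed {
    final long dueAtMillis;   // absolute time this event becomes due
    final String payload;     // whatever you were storing in the Map

    TimedEvent(long dueAtMillis, String payload) {
        this.dueAtMillis = dueAtMillis;
        this.payload = payload;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(dueAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

class EventLoop {
    private final DelayQueue<TimedEvent> queue = new DelayQueue<>();

    void schedule(TimedEvent e) {
        queue.put(e);
    }

    void run() throws InterruptedException {
        while (true) {
            // take() blocks until the earliest event is actually due --
            // no 1 ms polling, no missed Map keys.
            TimedEvent e = queue.take();
            System.out.println("Handling " + e.payload);
        }
    }
}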
That depends.
Ask yourself these questions:
how often do I absolutely need that value to be checked?
how quick would be acceptable?
how much of your system's CPU time are you willing to sacrifice on this task?
One ms can be long (high-end gaming PC) or very, very short (older-generation smartphone), and depending on your CPU architecture you'll end up keeping either one of many cores or the one and only core busy calculating time differences.
As for your data structure: you'll probably need something like a sorted map with the start time as key and the duration as one field of the value. You'd fetch the closest key at or before your time and check whether the caption stored there is still valid, or something along those lines.
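A rough sketch of that lookup, assuming a TreeMap keyed by start time in milliseconds; the Caption value type and the method names here are invented for illustration.
import java.util.Map;
import java.util.TreeMap;

class Caption {
    final long durationMillis;
    final String text;
    Caption(long durationMillis, String text) {
        this.durationMillis = durationMillis;
        this.text = text;
    }
}

class CaptionIndex {
    private final TreeMap<Long, Caption> byStartMillis = new TreeMap<>();

    void add(long startMillis, Caption caption) {
        byStartMillis.put(startMillis, caption);
    }

    // Returns the caption active at nowMillis, or null if the closest
    // earlier entry has already expired.
    Caption activeAt(long nowMillis) {
        Map.Entry<Long, Caption> entry = byStartMillis.floorEntry(nowMillis);
        if (entry == null) {
            return null;
        }
        return (nowMillis - entry.getKey()) <= entry.getValue().durationMillis
                ? entry.getValue() : null;
    }
}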
"Do you think 1ms delay would be too fast?" - Yes, and it will not be able to start task each millisecond (especially on-loading), you can test it yourself (with small example below).
"Is this going to bog down the system?" - actually not very much (I mean mostly CPU time). If you run your operation for example in a loop and will not make any pause, you will get much more load. Your system will spend big amount of time on context switch to start your task, officials it is inevitable overhead, but if you execute very simple task this overhead will be comparable with your useful workload.
"Are there any dangers in trying to use this super fast timer?" - as you wrote above " quick lookup in a Map where the millis is the key" you use millis as a key but you will not run each millisecond so your logic if count to be corrupted.
static int inc = 0;

public static void main(String[] args) throws InterruptedException {
    TimerTask timerTask = new TimerTask() {
        @Override
        public void run() {
            inc++;
        }
    };
    Timer timer = new Timer(true);
    timer.scheduleAtFixedRate(timerTask, 0, 1);
    Thread.sleep(500000);
    timer.cancel();
    timer.purge();
    System.out.println(inc);
}
Related
Currently this is how I get time elapsed for a process to get completed.
long start = System.currentTimeMillis();
process();
long end = System.currentTimeMillis();
long duration = end - start;
System.out.println(duration);
The problem I am facing is that process() gets interrupted (paused) by my PC's hibernation whenever there is no power (my work environment is outdoors, so I have to run on battery), and it continues when I restart the PC once it is plugged in again. I am fully aware that I would get a wrong duration if, for example, I could not restart until about 2 hours after the process was paused by hibernation: those extra 2 hours would still be counted, because the hardware clock (kept alive by the CMOS battery) keeps running and is what is read when the line
long end = System.currentTimeMillis();
is reached. How would it be possible to get the exact duration irrespective of the process being paused several times?
So your question is "how much wall clock time was taken by the process, without time spent in hibernation". Well, wall clock time is easy to get, but you don't really have a chance to know whether hibernation happened or not.
However...
If your process is a set of discrete steps, you could do something like the following
List<Duration> durations = new ArrayList<>();
for (Step step : steps) {
    Instant stepStart = Instant.now();
    process(step);
    durations.add(Duration.between(stepStart, Instant.now()));
}

long totalMillis = durations.stream()
        .mapToLong(Duration::toMillis)
        .filter(ms -> ms < 1000) // cutoff limit, to disregard hibernated steps
        .sum();
This times each step separately, and if the time for a step takes more than 1 second, it's not taken into account in the total. You could also use an "average" time for those steps, so the end result would be a bit more realistic (of course this depends on the number of steps, the assumed runtime of a single step, etc.).
This only works if there is a good limit to what is "too much" time, and it provides a less accurate result. If you're doing something with BigInteger, it's likely that steps with larger values take more time, so a single cutoff value would not work (although you could consider some kind of dynamic cutoff value, based on the input).
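If you do want the "average time" variant instead of simply dropping the long steps, a rough sketch (reusing the durations list from the snippet above, with the same arbitrary 1-second cutoff) could look like this:
// Replace suspiciously long step times (likely hibernation) with the
// average of the normal ones, rather than dropping them entirely.
long cutoffMillis = 1000;
double averageNormal = durations.stream()
        .mapToLong(Duration::toMillis)
        .filter(ms -> ms < cutoffMillis)
        .average()
        .orElse(0);
long adjustedTotal = durations.stream()
        .mapToLong(Duration::toMillis)
        .map(ms -> ms < cutoffMillis ? ms : (long) averageNormal)
        .sum();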
Cheapest, easiest and best solution: run the code on a server.
As an exercise I'm trying to create a metronome in Java, using Thread.sleep as a timer and JMF for sounds. It's working out quite well, but for some reason JMF seems to only play sounds at a maximum of about 207 beats per minute.
From my Metronome-class:
public void play() {
    soundPlayer.play();
    waitPulse();
    play();
}
From my SoundPlayer-class:
public void play() {
    new Thread(new ThreadPlayer()).start();
}

private class ThreadPlayer implements Runnable {
    public void run() {
        System.out.println("Click");
        player.setMediaTime(new Time(0));
        player.start();
    }
}
I've made SoundPlayer.play() run as a thread to test if it would make a difference, but it doesn't. I can easily change tempos up to about 207 bpm, but even if I set my timer to 1000 bpm the sounds aren't played any faster than about 207 bpm.
I've put the System.out.println("Click"); inside my ThreadPlayer.run() to check if my loop is working correctly – it is.
The issue seems to be with my implementation of JMF. I'm pretty sure there is a simple solution, can anybody help me out?
Thank you so much for your help! :)
The answer about Thread.sleep() being unreliable is correct: you can't count on it to return in exactly the amount of time you specify. In fact, I'm rather surprised your metronome is usable at all, especially when your system is under load. Read the docs for Thread.sleep() for more details. Max Beikirch's answer about MIDI is a good suggestion: MIDI handles timing very well.
But you ask how to do this with audio. The trick is to open an audio stream and fill it with silence between the metronome clicks and insert the metronome clicks where you want. When you do that, your soundcard plays back the samples (whether they contain a click or silence) at a constant rate. The key here is to keep the audio stream open and never close it. The clock, then, is the audio hardware, not your system clock -- a subtle but important distinction.
So, let's say you are generating 16 bit mono samples at 44100 Hz. Here is a function that will create a click sound at the requested rate. Keep in mind that this click sound is bad for speakers (and your ears) so if you actually use it, play it at a low volume. (Also, this code is untested -- it's just to demonstrate the concept)
int interval = 44100; // samples per beat; 1 beat per second by default
int count = 0;

void setBPM( float bpm ) {
    // 44100 samples per second * 60 seconds per minute / beats per minute
    interval = (int) ( 44100 * 60 / bpm );
}

void generateMetronomeSamples( short[] s ) {
    for ( int i = 0; i < s.length; ++i ) {
        // One maximum-amplitude sample at the start of each beat, silence otherwise.
        s[i] = ( count == 0 ) ? Short.MAX_VALUE : 0;
        ++count;
        if ( count >= interval ) {
            count = 0;
        }
    }
}
Once you set the tempo with setBPM, you can play back the samples generated by calling the generateMetronomeSamples() function repeatedly, streaming that output to your speakers using JavaSound (see JSResources.org for a good tutorial).
Once you have that working, you could replace the harsh click with a sound you get from a WAV or AIFF or a short tone or whatever.
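For completeness, here is an untested JavaSound sketch of that streaming loop; it is meant to live in the same class as the generator above, and it omits error handling and a stop condition.
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// 16-bit mono PCM at 44100 Hz, signed, little-endian.
private static final AudioFormat FORMAT = new AudioFormat(44100f, 16, 1, true, false);

void streamMetronome() throws LineUnavailableException {
    SourceDataLine line = AudioSystem.getSourceDataLine(FORMAT);
    line.open(FORMAT);
    line.start();

    short[] samples = new short[4410];            // roughly 100 ms of audio per chunk
    byte[] buffer = new byte[samples.length * 2];

    while (true) {
        generateMetronomeSamples(samples);        // the function from the answer above
        for (int i = 0; i < samples.length; i++) {
            buffer[2 * i]     = (byte) (samples[i] & 0xff);        // low byte
            buffer[2 * i + 1] = (byte) ((samples[i] >> 8) & 0xff); // high byte
        }
        line.write(buffer, 0, buffer.length);     // blocks; paced by the audio hardware, not the system clock
    }
}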
Take your time and take a look at MIDI! - http://www.ibm.com/developerworks/library/it/it-0801art38/ or http://docs.oracle.com/javase/tutorial/sound/TOC.html . It's the best solution for everything related to computer made sound.
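For example (untested, and the percussion note and tempo are only illustrative values), a looping four-beat click track with javax.sound.midi could look roughly like this:
import javax.sound.midi.MidiEvent;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Sequence;
import javax.sound.midi.Sequencer;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Track;

public class MidiMetronome {
    public static void main(String[] args) throws Exception {
        int bpm = 240;   // well past the ~207 bpm ceiling seen with JMF

        // 4 ticks per quarter note; one click on every beat.
        Sequence sequence = new Sequence(Sequence.PPQ, 4);
        Track track = sequence.createTrack();

        // Channel 10 (index 9) is percussion; note 37 is a side-stick click.
        for (int beat = 0; beat < 4; beat++) {
            int tick = beat * 4;
            track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 9, 37, 100), tick));
            // Put the last NOTE_OFF exactly on the loop boundary so the loop is 4 full beats long.
            int offTick = (beat == 3) ? 16 : tick + 1;
            track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_OFF, 9, 37, 0), offTick));
        }

        Sequencer sequencer = MidiSystem.getSequencer();
        sequencer.open();
        sequencer.setSequence(sequence);
        sequencer.setTempoInBPM(bpm);
        sequencer.setLoopCount(Sequencer.LOOP_CONTINUOUSLY);  // repeat the four-beat pattern
        sequencer.start();

        Thread.sleep(10_000);   // let it click for ten seconds
        sequencer.close();
    }
}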
My assumption, and maybe someone else can jump in here, is that thread running times are up to the whims of the thread scheduler. You can't guarantee how long it will take for the JVM to get back to that thread. Also, since the JVM itself runs as a process on the machine and is subject to the OS's process scheduler, you are looking at at least two levels of unpredictability.
As Jamie Duby said, just because you tell the thread to sleep for 1 millisecond doesn't mean it will be scheduled again after exactly one millisecond. The ONLY guarantee is that AT LEAST one millisecond has passed since you called Thread.sleep(). In practice the system cannot wake your code up reliably enough to play a sound every millisecond, which is why you see the delay. If you want a dramatic example, write a home-made timer class and try making it count one millisecond at a time for a full minute; you will see that the timer is off by quite a bit.
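If you want to try that experiment, something as small as this shows the drift (the numbers will vary by machine and OS):
public class SleepDrift {
    public static void main(String[] args) throws InterruptedException {
        final int ticks = 60_000;               // nominally one minute of 1 ms sleeps
        long start = System.nanoTime();
        for (int i = 0; i < ticks; i++) {
            Thread.sleep(1);
        }
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Expected ~60000 ms, actually slept " + elapsedMillis + " ms");
    }
}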
The person who really deserves the answer credit here is Max Beikirch; MIDI is the only way you are going to be able to produce the output you are looking for.
I have far more experience as a musician than as a programmer, but I just finished a metronome application I started a while back. I had put the project on hold for a while because I couldn't figure out why I was having the same problem you are. Yes, Thread.sleep() can be unreliable, but I've managed to make a good metronome using that method.
I saw you mentioned trying an ExecutorService; I don't think using concurrent classes will solve your problem. My guess is it's a system resource issue, and I'm pretty certain MIDI is the way to go for a metronome. I force my students to practice with a metronome and I've used many; I have never been too concerned with the audio quality of the ticks, because timing is far more important, and MIDI is going to be much faster than any other audio format. I used the javax.sound.midi library from the Sound API. I suspect that will solve your problem.
You may notice your ticks are uneven even when it otherwise works properly; this is due to the Thread.sleep() method not being very reliable. I solved that by doing all my calculations in nanoseconds using System.nanoTime() instead of System.currentTimeMillis(); just don't forget to convert back to milliseconds before passing the sleep time to Thread.sleep().
I don't want to post the code for my metronome here in case you want to figure it out on your own but if you'd like to see it just send me an e-mail kevin.bigler3#gmail.com and I'd be happy to send it to you. Good luck.
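In general, the nanosecond bookkeeping described above looks something like this rough, untested sketch; click() is just a placeholder for whatever actually plays the tick.
class NanoMetronome {
    private volatile boolean running = true;

    void run(int bpm) throws InterruptedException {
        long intervalNanos = 60_000_000_000L / bpm;   // nanoseconds per beat
        long nextTick = System.nanoTime();
        while (running) {
            click();                                   // placeholder: play the tick sound
            nextTick += intervalNanos;
            long remaining = nextTick - System.nanoTime();
            if (remaining > 0) {
                // Thread.sleep(millis, nanos) keeps the sub-millisecond remainder.
                Thread.sleep(remaining / 1_000_000, (int) (remaining % 1_000_000));
            }
        }
    }

    void click() { /* play the tick via MIDI or sampled audio */ }
}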
When programming animations and little games I've come to know the incredible importance of Thread.sleep(n); I rely on this method to tell the operating system when my application won't need any CPU, and to make my program progress at a predictable speed.
My problem is that the JRE implements this functionality differently on different operating systems. On UNIX-based (or influenced) OSes such as Ubuntu and OS X, the underlying JRE implementation uses a well-functioning and precise system for distributing CPU time to different applications, which makes my 2D game smooth and lag-free. However, on Windows 7 and older Microsoft systems, the CPU-time distribution seems to work differently: you usually get your CPU time back after the given amount of sleep, varying by about 1-2 ms from the target, but you also get occasional bursts of an extra 10-20 ms of sleep time. This causes my game to lag for a moment every few seconds. I've noticed this problem in most Java games I've tried on Windows, Minecraft being a notable example.
Now, I've been looking around on the Internet to find a solution to this problem. I've seen a lot of people using only Thread.yield(); instead of Thread.sleep(n);, which works flawlessly at the cost of the currently used CPU core getting full load, no matter how much CPU your game actually needs. This is not ideal for playing your game on laptops or high energy consumption workstations, and it's an unnecessary trade-off on Macs and Linux systems.
Looking around further I found a commonly used method of correcting sleep time inconsistencies called "spin-sleep", where you only order sleep for 1 ms at a time and check for consistency using the System.nanoTime(); method, which is very accurate even on Microsoft systems. This helps for the normal 1-2 ms of sleep inconsistency, but it won't help against the occasional bursts of +10-20 ms of sleep inconsistency, since this often results in more time spent than one cycle of my loop should take all together.
After tons of looking I found this cryptic article of Andy Malakov, which was very helpful in improving my loop: http://andy-malakov.blogspot.com/2010/06/alternative-to-threadsleep.html
Based on his article I wrote this sleep method:
// Variables for calculating optimal sleep time. In nanoseconds (1 s = 10^9 ns).
private long timeBefore = 0L;
private long timeSleepEnd, timeLeft;
// The estimated game update rate.
private double timeUpdateRate;
// The time one game loop cycle should take in order to reach the max FPS.
private long timeLoop;

private void sleep() throws InterruptedException {
    // Skip the first game loop cycle.
    if (timeBefore != 0L) {
        // Calculate the optimal game loop sleep time.
        timeLeft = timeLoop - (System.nanoTime() - timeBefore);
        // If all necessary calculations took LESS time than one loop cycle,
        // the max update rate was reached and there is time left to sleep.
        if (timeLeft > 0 && isUpdateRateLimited) {
            // Determine when to stop sleeping.
            timeSleepEnd = System.nanoTime() + timeLeft;
            // Sleep, yield or keep the thread busy until there is no time left to sleep.
            do {
                if (timeLeft > SLEEP_PRECISION) {
                    Thread.sleep(1); // Sleep for approximately 1 millisecond.
                } else if (timeLeft > SPIN_YIELD_PRECISION) {
                    Thread.yield(); // Yield the thread.
                }
                if (Thread.interrupted()) {
                    throw new InterruptedException();
                }
                timeLeft = timeSleepEnd - System.nanoTime();
            } while (timeLeft > 0);
        }
        // Save the calculated update rate.
        timeUpdateRate = 1000000000D / (double) (System.nanoTime() - timeBefore);
    }
    // Starting point for time measurement.
    timeBefore = System.nanoTime();
}
I usually set SLEEP_PRECISION to about 2 ms and SPIN_YIELD_PRECISION to about 10,000 ns for best performance on my Windows 7 machine.
After tons of hard work, this is the absolute best I can come up with. So, since I still care about improving the accuracy of this sleep method, and I'm still not satisfied with the performance, I would like to appeal to all of you java game hackers and animators out there for suggestions on a better solution for the Windows platform. Could I use a platform-specific way on Windows to make it better? I don't care about having a little platform specific code in my applications, as long as the majority of the code is OS independent.
I would also like to know if there is anyone who knows about Microsoft and Oracle working out a better implementation of the Thread.sleep(n); method, or what's Oracle's future plans are on improving their environment as the basis of applications requiring high timing accuracy, such as music software and games?
Thank you all for reading my lengthy question/article. I hope some people might find my research helpful!
You could use a cyclic timer associated with a mutex. This is IMHO the most efficient way of doing what you want. But then you should think about skipping frames in case the computer lags (you can do it with another non-blocking mutex in the timer code).
Edit: Some pseudo-code to clarify
Timer code:
while(true):
    if acquireIfPossible(mutexSkipRender):
        release(mutexSkipRender)
        release(mutexRender)
Sleep code:
acquire(mutexSkipRender)
acquire(mutexRender)
release(mutexSkipRender)
Starting values:
mutexSkipRender = 1
mutexRender = 0
Edit: corrected initialization values.
The following code works pretty well on Windows (it loops at exactly 50 fps with millisecond precision):
import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Semaphore;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        final Semaphore mutexRefresh = new Semaphore(0);
        final Semaphore mutexRefreshing = new Semaphore(1);
        int refresh = 0;

        Timer timRefresh = new Timer();
        timRefresh.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                if (mutexRefreshing.tryAcquire()) {
                    mutexRefreshing.release();
                    mutexRefresh.release();
                }
            }
        }, 0, 1000 / 50);

        // The timer is started and configured for 50 fps.
        Date startDate = new Date();

        while (true) { // Refreshing loop
            mutexRefresh.acquire();
            mutexRefreshing.acquire();

            // Refresh
            refresh += 1;
            if (refresh % 50 == 0) {
                Date endDate = new Date();
                System.out.println(String.valueOf(50.0 * 1000 / (endDate.getTime() - startDate.getTime())) + " fps.");
                startDate = new Date();
            }

            mutexRefreshing.release();
        }
    }
}
Your options are limited, and they depend on what exactly you want to do. Your code snippet mentions the max FPS, but the max FPS would require that you never sleep at all, so I'm not entirely sure what you intend with that. None of that sleep or yield checking is going to make any difference in most of the problem situations however - if some other app needs to run now and the OS doesn't want to switch back soon, it doesn't matter which one of those you call, you'll get control back when the OS decides to do so, which will almost certainly be more than 1ms in the future. However, the OS can certainly be coaxed into making switches more often - Win32 has the timeBeginPeriod call for precisely this purpose, which you may be able to use somehow. But there is a good reason for not switching too often - it's less efficient.
The best thing to do, although somewhat more complex, is usually to go for a game loop that doesn't require real-time updates, but instead performs logic updates at fixed intervals (eg. 20x a second) and renders whenever possible (perhaps with arbitrary short sleeps to free up CPU for other apps, if not running in full-screen). By buffering a past logic state as well as the current one you can interpolate between them to make the rendering appear as smooth as if you were doing logic updates each time. For more information on this approach, you can see the Fix Your Timestep article.
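A bare-bones sketch of such a loop, loosely following that article, with update() and render(alpha) as placeholders for your own game logic and drawing:
class GameLoop {
    private static final long STEP_NANOS = 1_000_000_000L / 20;  // 20 logic updates per second
    private volatile boolean running = true;

    void run() throws InterruptedException {
        long previous = System.nanoTime();
        long accumulator = 0;

        while (running) {
            long now = System.nanoTime();
            accumulator += now - previous;
            previous = now;

            // Run as many fixed logic steps as real time demands.
            while (accumulator >= STEP_NANOS) {
                update();                       // placeholder: advance game state by one fixed step
                accumulator -= STEP_NANOS;
            }

            // Interpolate between the previous and current logic state.
            double alpha = (double) accumulator / STEP_NANOS;
            render(alpha);                      // placeholder: draw the blended state

            Thread.sleep(1);                    // give other applications some CPU
        }
    }

    void update() { /* advance game logic by one fixed step */ }
    void render(double alpha) { /* draw, blending previous and current state by alpha */ }
}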
I would also like to know if there is anyone who knows about Microsoft and Oracle working out a better implementation of the Thread.sleep(n); method, or what's Oracle's future plans are on improving their environment as the basis of applications requiring high timing accuracy, such as music software and games?
No, this won't be happening. Remember, sleep is just a method saying how long you want your program to be asleep for. It is not a specification for when it will or should wake up, and never will be. By definition, any system with sleep and yield functionality is a multitasking system, where the requirements of other tasks have to be considered, and the operating system always gets the final call on the scheduling of this. The alternative wouldn't work reliably, because if a program could somehow demand to be reactivated at a precise time of its choosing it could starve other processes of CPU power. (eg. A program that spawned a background thread and had both threads performing 1ms of work and calling sleep(1) at the end could take turns to hog a CPU core.) Thus, for a user-space program, sleep (and functionality like it) will always be a lower bound, never an upper bound. To do better than that requires the OS itself to allow certain apps to pretty much own the scheduling, and this is not a desirable feature in operating systems for consumer hardware (while being a common and useful feature for industrial applications).
Thread.sleep says your app needs no more time right now. This means that in a worst-case scenario you'll have to wait for an entire scheduler time slice (40 ms or so).
In bad cases, when a driver or something else takes up more time, you might have to wait 120 ms (3 * 40 ms), so Thread.sleep is not the way to go. Go another way, like registering a 1 ms callback and starting your draw code every X callbacks.
(This is on Windows; I'd use the multimedia timer tools to get those 1 ms resolution callbacks.)
Timing stuff is notoriously bad on Windows. This article is a good place to start. Not sure if you care, but also note that there can be worse problems (especially with System.nanoTime) on virtual systems as well (when Windows is the guest operating system).
Thread.sleep is inaccurate and makes the animation jittery most of the time.
If you replace it completely with Thread.yield you'll get a solid FPS without lag or jitter, however the CPU usage increases greatly. I moved to Thread.yield a long time ago.
This problem has been discussed on Java Game Development forums for years.
I'm writing an AI-testing Framework for a competition. Participants submit a Bot class which matches a given Interface. Then all the bots play a turn-based game. On every turn, I want to do the following:
For every bot B:
start a thread that runs at most N cycles and does B.getNextMove()
wait for all threads to complete
Make all moves (from each bot).
My difficulty comes in saying "at most N cycles". I can limit all the bots by time (say half a second per turn) but this means that some can get more processor cycles than others and doesn't permit a strict "your bot should be able to make its decision re: a turn in X time" requirement in the competition.
As stated, this is in Java. Any ideas? I've been looking at concurrency and locking, but this doesn't feel like the right direction. Also, it's possible not to run the bots in parallel and then use time for the restriction (given that the computer isn't running anything else at the time), but this would be undesirable as it would significantly slow down how quickly we could get results from the games.
I'd make an interface for the bots that has them do one iteration of their algorithm at a time, and then simply count the iterations.
If you need hard time/CPU limits, there aren't that many (easy) ways to manage that in Java.
You can't measure CPU cycles with Java, but you can measure CPU time, which is a vast improvement over using just wall-clock time.
To get the CPU time of the current thread you'd use (from the standard java.lang.management package):
ThreadMXBean tm = ManagementFactory.getThreadMXBean();
long cpuTime = tm.getCurrentThreadCpuTime(); // CPU time used by the current thread, in nanoseconds
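The same bean can also report another thread's CPU time by id, so a supervising framework thread could poll each bot's budget with something like this (the class and method names are only illustrative):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

class CpuBudget {
    private final ThreadMXBean tm = ManagementFactory.getThreadMXBean();

    // Returns true if the bot thread has used more CPU time than allowed.
    boolean overBudget(Thread botThread, long budgetNanos) {
        long used = tm.getThreadCpuTime(botThread.getId());   // -1 if unsupported or thread not alive
        return used > budgetNanos;
    }
}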
Since you're controlling the bot execution and explicitly calling next yourself, why not just count the iterations? e.g.
public class Botcaller extends Thread
{
    private Bot bot;
    private int cycles_completed;

    public static final int MAX_ALLOWED_CYCLES = ...;

    public void run()
    {
        while (cycles_completed < MAX_ALLOWED_CYCLES)
        {
            bot.move();
            cycles_completed++;
            yield();
        }
    }
}
I think it will be very difficult to control threads at this low a level from Java. I know of no way to safely stop a thread that is running.
Another option is to let each bot take whatever time it wants. Have one thread per bot, but poll the location from a master thread every second. If a bot hasn't updated its location in that "round", it misses out and has to wait for the next one. Make sure that the world state each bot sees is the one controlled by the master thread.
The nice thing about this approach is that it lets bots sacrifice rounds if they want to do something more complicated.
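A rough outline of that master/poll arrangement, with all class and method names invented for illustration:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Arena {
    // Each bot thread writes its latest move here whenever it finishes thinking.
    private final Map<String, Move> latestMoves = new ConcurrentHashMap<>();

    void submitMove(String botName, Move move) {
        latestMoves.put(botName, move);
    }

    // The master thread calls this once per round (e.g. every second).
    void playRound(Iterable<String> botNames) {
        for (String bot : botNames) {
            Move move = latestMoves.remove(bot);   // null => the bot missed this round
            if (move != null) {
                apply(move);
            }
        }
    }

    void apply(Move move) { /* update the authoritative world state */ }
}

class Move { /* whatever a bot's move looks like */ }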
I think Robocode does something similar... you may want to look there.