I'm making a Java game and I'm stuck on how to do timing easily. I know there's the delta time thing, but wouldn't that mean I have to pass that delta variable into almost every movement/update function?
I was wondering if I can just time the main while loop of the game so it runs/loops every 50 milliseconds, rather than as fast as it can.
Changes in position are functions of time, so it makes sense to have to pass the time delta into most functions that update game state. It's the natural thing to do; I wouldn't be apprehensive about doing so.
While it is possible to insert delays every frame (and you almost certainly want to do so, as there's no point in having a game run at 1000 fps), it is usually detrimental to have your update logic assume that each frame will have taken a preset amount of time. Doing so results in choppiness when one frame happens to take longer than others.
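For example, a delta-based update might look like this (a minimal sketch; Ball and its fields are hypothetical names, not from any particular engine):

```java
// A minimal sketch of delta-based movement.
class Ball {
    double x, y;      // position in pixels
    double vx, vy;    // velocity in pixels per second

    // dt is the time elapsed since the last update, in seconds.
    void update(double dt) {
        x += vx * dt; // movement scales with real elapsed time,
        y += vy * dt; // so speed is the same at any frame rate
    }
}
```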
To give your game a certain framerate, the general approach for each frame is as follows (a code sketch follows the steps):
Record the current time as startTime.
Do the state updates.
Redraw.
Record the current time as endTime.
Sleep for (1 / framerate) - (endTime - startTime), with both terms in the same units (e.g. milliseconds).
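A minimal sketch of that loop, assuming a 20 FPS target (50 ms per frame); updateGameState() and redraw() stand in for your own game's methods:

```java
// Fixed-framerate loop: measure how long the frame took and sleep off the rest.
class FixedRateLoop {
    private static final long FRAME_MILLIS = 1000 / 20; // 20 FPS target
    private volatile boolean running = true;

    void run() throws InterruptedException {
        while (running) {
            long startTime = System.currentTimeMillis(); // record start time
            updateGameState();                           // do the state updates
            redraw();                                    // redraw
            long endTime = System.currentTimeMillis();   // record end time
            long sleepTime = FRAME_MILLIS - (endTime - startTime);
            if (sleepTime > 0) {
                Thread.sleep(sleepTime); // only sleep if the frame finished early
            }
        }
    }

    private void updateGameState() { /* ... */ }
    private void redraw() { /* ... */ }
}
```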
With the languages and libraries I've worked with so far, there was always an option to sync the main loop of the program (a game or anything with a constantly changing context) to the current display's refresh rate. So I had the option to either switch on VSYNC or just let the loop execute as many times per second as it could. I'm referring to SDL2, OpenGL 3.0 with GLFW, the HTML5 canvas, etc.
I'm looking for something similar on Android now in OpenGL ES 2.0, but so far all the example code I can find simply uses a variation of sleep and sets the framerate to 60 or 30. So they basically count the amount of time passed since the last iteration, and only advance further and call the requestRender() function if a given amount of time has passed (about 16.7 ms in the case of 60 frames per second, etc.).
I'm simply wondering if there's a better option than this or not. I'm just concerned that not every phone has the same screen refresh rate, so hard-coding any value doesn't seem desirable. As far as I understand, it is not that simple to figure out a given phone's refresh rate, or at least it is not possible with "pure" Java and OpenGL.
What you need to do is match the display's frame rate, and advance game state according to how much time has elapsed since the previous frame. There are two ways to go about this:
Stuff the BufferQueue full and rely on the "swap buffers" back-pressure.
This is very easy to implement: just swap buffers as fast as you can. In early versions of Android this could actually result in a penalty where SurfaceView#lockCanvas() would put you to sleep for 100ms. Now it's paced by the BufferQueue, and the BufferQueue is emptied as quickly as SurfaceFlinger is able.
Use Choreographer.
Choreographer allows you to set a callback that fires on the next VSYNC. The actual VSYNC time is passed in as an argument. So even if your app doesn't wake up right away, you still have an accurate picture of when the display refresh period began. Using this value, rather than the current time, yields a consistent time source for your game state update logic.
source: https://source.android.com/devices/graphics/arch-gameloops
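For illustration, a Choreographer-driven loop might look like this (a minimal sketch; update() and render() are placeholders for your own game methods, not framework calls):

```java
// A minimal sketch of a Choreographer-driven game loop (API 16+).
import android.view.Choreographer;

class ChoreographerLoop implements Choreographer.FrameCallback {
    private long lastFrameTimeNanos = -1;

    // Call from a thread that has a Looper (e.g. the UI thread).
    void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos > 0) {
            // Use the VSYNC timestamp, not the current time, as the time source.
            float dt = (frameTimeNanos - lastFrameTimeNanos) / 1_000_000_000f;
            update(dt);
        }
        lastFrameTimeNanos = frameTimeNanos;
        render();                                            // e.g. GLSurfaceView.requestRender()
        Choreographer.getInstance().postFrameCallback(this); // re-arm for the next VSYNC
    }

    private void update(float dtSeconds) { /* advance game state */ }
    private void render() { /* draw the frame */ }
}
```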
To give some background information, I'm currently working on a pinball game coded in Java. I'm keeping it in an MVC design model. It has a fairly realistic physics system that handles collisions, gravity, friction, etc. The system currently runs at 20 FPS.
The problem I'm having is that the physics loop that checks for collisions works by running a method that, using the ball's current velocity, calculates the time until the next collision. The most effective way for this to work would obviously be to keep re-running the check, so as to account for the movement of the ball between checks and keep it as accurate as possible, and to carry out the collision if the time until collision is less than the time until the next check.
However, right now the system I am working with can only run the loop 20 times per second, which does not provide results as accurate as I would like, particularly during periods of high acceleration, such as at ball launch.
The timer loop that I use lives in the controller section of the MVC and places a call to the physics section, located within the model. I can pass in the time remaining when the method is called in the controller, which the physics system can use; however, I don't know how to run the loop multiple times while still tracking the remaining time before the next screen refresh.
Ideally I would like to run this at least 10 times per screen refresh. If anybody needs any more information please just ask.
Thanks for any help.
So the actual problem is that you do not know when the collision will happen and when the next frame update is?
Shouldn't these be separate running tasks? One thread that manages the collision detection and one that does the updating? Each thread can run on its own interval (e.g. Timer.schedule(...)), and they should probably be synchronized so collision/location updates are not performed while the render thread is executing.
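A minimal sketch of that two-task idea using java.util.Timer; PhysicsModel and repaintView() are hypothetical stand-ins for your model and view:

```java
import java.util.Timer;
import java.util.TimerTask;

class PinballLoop {
    private final Object lock = new Object(); // guards shared game state

    void start(final PhysicsModel model) {
        // Physics task: 10 sub-steps per 50 ms frame -> one step every 5 ms.
        new Timer("physics").scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                synchronized (lock) { model.step(0.005); } // 5 ms expressed in seconds
            }
        }, 0, 5);

        // Render task: 20 FPS -> one repaint every 50 ms.
        new Timer("render").scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                synchronized (lock) { repaintView(); }
            }
        }, 0, 50);
    }

    void repaintView() { /* trigger the view's repaint */ }

    interface PhysicsModel {
        void step(double dtSeconds); // advance the ball, resolve collisions
    }
}
```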
Hope this answers your question.
Regards, Rob.
I've been working on a little proof-of-concept game that has a rolling 2D environment in the background. I have a custom class called roadTile, which is basically a 200px-tall block with a picture to paint on and some physics. I'm storing the tiles in a LinkedList, 5 at a time. I have a main loop that is responsible for moving everything and checking for collisions. The loop is supposed to run every 50 ms (controlled by a timer), and it keeps the rate fixed by measuring the length of the last run and deducting it from the sleep time.
This usually runs fine, and the loops complete in less than a millisecond, but when I start the program it "chokes" during the first 2-7 iterations. I remove the old tile and put a new one at the bottom of the list. On calling add(new roadTile()), the program halts for 20-400 ms, which is a millennium in computer time; to top it off, the behavior isn't consistent. Sometimes it works fine, sometimes not.
I'm pretty clueless how to eliminate this, any thoughts?
Make sure that you cache everything before you start working with graphics; loading resources (such as the tile images) inside the loop might be the cause of your delay.
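For example, the tile image could be loaded once up front and reused (a minimal sketch, assuming javax.imageio and a hypothetical /road.png resource):

```java
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;

class TileCache {
    private static BufferedImage roadImage;

    // Call once at startup, before the game loop begins.
    static void preload() throws IOException {
        roadImage = ImageIO.read(TileCache.class.getResource("/road.png"));
    }

    // Constructing a roadTile can now reuse the cached image instead of hitting disk.
    static BufferedImage getRoadImage() {
        return roadImage;
    }
}
```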
SystemClock.uptimeMillis()
This call from the Android SystemClock will reset back to zero before it 'maxes out'.
Right now I use this as the basis for animations, movement, etc. Below is an example of where a reset would essentially freeze my application.
if (currentTime > frameTime + sequenceTime) // has the current sequence run its course?
{
    frameTime = currentTime;                // restart the sequence from "now"
}
Here lies the problem: say currentTime is 50; then frameTime is set to 50, right? Ideally currentTime would keep increasing with SystemClock.uptimeMillis(), but what if it's reset? Then currentTime becomes very small compared to frameTime. How would I go about fixing this, or resetting the currentTime for all objects?
This is just a small example; I have different objects with a similar dilemma.
From my reading of the documentation, the "uptime" clock only gets reset when the device is rebooted. Unless your application ... somehow ... manages to keep running over a reboot, you shouldn't need to worry about the clock resetting.
(On the other hand, if your application does need to do animations that keep going after a reboot, then maybe you should use the 'currentTimeMillis' clock. The Android documentation for SystemClock describes the alternatives.)
The documentation does say this: "Note: This value may get reset occasionally (before it would otherwise wrap around)."
That doesn't make a lot of sense to me. The clock is a millisecond clock and is returned as a long, so you would not expect it to ever wrap around. (2^63 milliseconds, the most a signed long can represent, is roughly 292 million years.) The only explanation I can think of is that some devices use a 32-bit hardware timer to implement this clock ... which is kind of lame.
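If you want to be defensive about the documented "occasional reset" anyway, one option is to accumulate per-frame deltas instead of comparing absolute timestamps, so a reset costs at most one frame. A minimal sketch (not the poster's code); SequenceTimer and onSequenceFinished() are hypothetical names:

```java
import android.os.SystemClock;

class SequenceTimer {
    private long lastTime = SystemClock.uptimeMillis();
    private long elapsed = 0;
    private final long sequenceTime;

    SequenceTimer(long sequenceTimeMillis) {
        this.sequenceTime = sequenceTimeMillis;
    }

    // Call once per frame.
    void onFrame() {
        long now = SystemClock.uptimeMillis();
        long delta = now - lastTime;
        lastTime = now;
        if (delta > 0) {        // a clock reset makes delta negative; skip it
            elapsed += delta;
        }
        if (elapsed > sequenceTime) {
            elapsed = 0;        // plays the role of frameTime = currentTime
            onSequenceFinished();
        }
    }

    void onSequenceFinished() { /* advance the animation */ }
}
```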
I am creating an android game where enemies are generated randomly and there can be multiple at once.
Is it better to create the enemies at a random time from a timer (after 5 s, then 4 s, then 6 s, etc.), or through the game loop (count to 50 iterations, create an enemy; count to 64, create another)?
If the player's phone was slow at rendering, the timer could create too many enemies, but with the game loop they would not get enemies very quickly. There appear to be pros and cons to each.
Also, which approach is better for saving processing power, so the game can render images faster?
Thanks in advance
Tom
ALSO, if I used a timer for each "group" of enemies, there would be 3 timers running.
I recommend a combination: the engine should be driven by "ticks" that don't in themselves represent a specific duration. All engine decisions should be made based on time calculations independent of the ticks (e.g. System.currentTimeMillis() subtractions). This way, when there is high load on the machine you get fewer frames per second, but the distance of movements is not affected; when there is lower load, you get smoother graphics and movement.
You should also check the FPS: if it gets too high, you can put the thread to sleep or even generate more enemies; if it gets too low, you can lower graphic detail or stop generating new enemies to adapt to the situation.
So I wouldn't start timers. Instead, store the times at which you precalculate events to occur in the future, and check in the game loop whether it is time for them to happen (not with an exact comparison, of course, but with eventtime < now).
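A minimal sketch of that last idea, precomputing the next spawn time and checking it each tick; SpawnScheduler and spawnEnemy() are hypothetical names:

```java
import java.util.Random;

class SpawnScheduler {
    private final Random random = new Random();
    private long nextSpawnTime = System.currentTimeMillis() + nextDelay();

    private long nextDelay() {
        return 4000 + random.nextInt(3000); // 4-7 s, as in the question
    }

    // Called once per tick from the game loop.
    void tick() {
        long now = System.currentTimeMillis();
        if (nextSpawnTime < now) {          // "eventtime < now", not exact equality
            spawnEnemy();
            nextSpawnTime = now + nextDelay();
        }
    }

    private void spawnEnemy() { /* add an enemy to the world */ }
}
```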