With the languages and libraries I've worked with so far, there was always an option to sync the program's main loop (a game, or anything with a constantly changing context) to the current display's refresh rate. So I had the option to either switch on VSYNC or just let the loop execute as many times per second as it could. I'm referring to SDL2, OpenGL 3.0 with GLFW, the HTML5 canvas, etc.
I'm looking for something similar on Android now in OpenGL ES 2.0, but so far all the example code I can find simply uses a variation of sleep and sets the framerate to 60 or 30. They basically count the amount of time passed since the last iteration, and only advance and call the requestRender() function if a given amount of time has elapsed (about 16.7 ms in the case of 60 frames per second, etc.).
I'm simply wondering whether there's a better option than this. I'm concerned that not every phone has the same screen refresh rate, so hard-coding any amount doesn't seem desirable. As far as I understand, it is not that simple to figure out a given phone's refresh rate, or at least it is not possible with "pure" Java and OpenGL.
What you need to do is match the display's frame rate, and advance game state according to how much time has elapsed since the previous frame. There are two ways to go about this:
Stuff the BufferQueue full and rely on the "swap buffers" back-pressure.
This is very easy to implement: just swap buffers as fast as you can. In early versions of Android this could actually result in a penalty where SurfaceView#lockCanvas() would put you to sleep for 100ms. Now it's paced by the BufferQueue, and the BufferQueue is emptied as quickly as SurfaceFlinger is able.
Use Choreographer.
Choreographer allows you to set a callback that fires on the next VSYNC. The actual VSYNC time is passed in as an argument. So even if your app doesn't wake up right away, you still have an accurate picture of when the display refresh period began. Using this value, rather than the current time, yields a consistent time source for your game state update logic.
source: https://source.android.com/devices/graphics/arch-gameloops
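The platform-independent part of that idea can be sketched in plain Java as below. The Choreographer wiring itself (postFrameCallback / doFrame(frameTimeNanos)) only exists on Android and is indicated in comments; the GameClock class and its toy position/velocity state are my own illustration, not from the linked article:

```java
// On Android the wiring would be:
//   Choreographer.getInstance().postFrameCallback(this::doFrame);
// and doFrame(long frameTimeNanos) would re-post itself every frame.
// Below is only the platform-independent part: advancing game state
// from the VSYNC timestamp instead of "the current time".
public class GameClock {
    private long lastFrameNanos = -1;
    public double position = 0.0;         // toy game state
    public double velocityPerSec = 10.0;  // units per second

    // Called once per display frame with the VSYNC timestamp.
    public void doFrame(long frameTimeNanos) {
        if (lastFrameNanos >= 0) {
            double dtSeconds = (frameTimeNanos - lastFrameNanos) / 1e9;
            position += velocityPerSec * dtSeconds;  // advance by elapsed time
        }
        lastFrameNanos = frameTimeNanos;
        // On Android, re-register here:
        // Choreographer.getInstance().postFrameCallback(this::doFrame);
    }
}
```

Because the update is driven by the timestamps the display actually produced, the game advances by the same total amount per wall-clock second whether the device refreshes at 60 Hz or 90 Hz, which addresses the "not every phone has the same refresh rate" concern.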
Related
I have searched a lot on SO to find an answer for this, but none of the posts can explain the performance phenomenon I measure in my app. The standard answer is to log the time at the beginning and end of the onDraw method; the delta is then the time consumed. Of course I have done this, but it does not explain the entire time consumed by the application before the screen is refreshed. So I wonder whether there is some system-level activity I am not aware of.
Now more details:
The app works on a Nexus 7 with Android Version 5.1.1.
Purpose of the app is to show graphically the layout of a golf course while the player moves over the fairway. The player's position is retrieved from GPS signals (onLocationChange). Whenever the position of the player has changed (which happens almost every second while walking), the graphical layout of the area has to be redrawn, since distances, orientation etc. have changed with the new viewpoint.
(I think it would not help to copy the code here because it would be too much to study. Furthermore this is more a principle question about understanding the architecture correctly.)
To simplify it we can say that the app has two main tasks:
A) retrieving new location information via GPS in a short interval of 1 or 2 seconds (depending on the movement).
B) Drawing the golf course's hole layout again, based on the new position given by A.
The phenomenon I measure from my logging over a period of 15 minutes, in which many redraws happen naturally, is that there is obviously a third big unknown time consumer besides the computations of A) and B). Let's call it X. The expected sequence in my logs would be:
A
B
A
B
...
What I notice is this:
A
B
A
X
B
...
A
B
X
A
A
A
A
B
It means that the duration of X, which can grow to 5 seconds (while B performs in an average of 200 ms), is so big that once it is done, a number of queued A events (new GPS positions) obviously arrive within milliseconds of each other.
Furthermore, I notice that the duration of X grows the longer the app runs, until eventually the app cannot respond anymore.
I have read about the rendering thread and wonder whether this is the candidate for X. But what I understood from those documents is that the rendering thread's time is covered by the time consumed in onDraw (B in my example). The impression I get, however, is that the program updates the canvas (B) and that the device (or the OS?) then needs considerable time to make it visible. I could not find any documentation or SO case supporting this. So if somebody could explain conceptually how the system mechanism works, and ideally how it explains the phenomenon measured above, it would be highly appreciated.
Last remark: of course I have read a lot about how to keep the UI thread responsive, using AsyncTask etc. I also intend to implement this, but based on the phenomenon measured above I cannot expect it to cure my problem. It might reduce the time slice for B on the main thread, but as long as I cannot identify the main consumer X, it is unlikely that AsyncTask will improve the overall performance of the program.
Systrace will allow you to profile your app and see what is taking up the time. https://developer.android.com/studio/profile/systrace.html
I'm writing an Android app in Java, trying to make a simple rhythm game where you just tap a button on the beat. I was using a Timer object with scheduleAtFixedRate to make the button flash, but then I discovered that the timing varies by a few milliseconds.
Obviously a rhythm game needs particular timing to come out right, so is it possible to make this more precise and accurate or am I barking up the wrong tree with using this method for precise timing?
I don't know for Android but here is what happens for "real" Java...
A Timer uses System.currentTimeMillis() to keep track of time; this method is sensitive to system time changes (for instance, an NTP synchronization adjusting the clock).
That is why, if you want better precision, you should use a ScheduledExecutorService; it relies on System.nanoTime(), a nanosecond-precision counter that keeps increasing for the life of the process, even if the system time changes.
So --> try a ScheduledExecutorService instead.
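A minimal sketch of that suggestion (the BeatScheduler class and runBeats helper are my own names; in the real game, the beat callback would flash the button instead of counting down a latch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BeatScheduler {
    // Fires a "beat" n times at a fixed rate, then shuts down.
    // Returns how many beats fired (== n once the latch is released).
    static int runBeats(int n, long periodMs) {
        ScheduledExecutorService exec =
                Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(n);
        // Paced internally via System.nanoTime(), so wall-clock
        // adjustments (NTP sync etc.) do not disturb the rhythm.
        exec.scheduleAtFixedRate(done::countDown, 0, periodMs,
                TimeUnit.MILLISECONDS);
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            exec.shutdownNow();
        }
        return n - (int) done.getCount();
    }

    public static void main(String[] args) {
        System.out.println("beats fired: " + runBeats(4, 50));
    }
}
```

Note that scheduleAtFixedRate still cannot promise the task wakes up at the exact instant; for audio-tight rhythm judging, compare the tap timestamp against the beat grid computed from System.nanoTime() rather than trusting when the callback ran.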
I suggest you don't use scheduleAtFixedRate!! Use a Looper and Handler.sendMessageDelayed() instead.
I want to use Android to take multiple images in one second. The basic idea is to use a Timer at a certain FPS that triggers the camera to capture images.
The problem is that when I trigger the camera more than once per second, say every 500 ms, startPreview fails with: java.lang.RuntimeException: startPreview failed
How can I fix this? Thanks.
You should call startPreview() in your onPictureTaken() callback, and nothing guarantees that this callback will be invoked at the frame rate you expect. Many cameras provide a burst-shot mode, but there is no common API for it yet. Hopefully that API will arrive soon.
I got the same error when trying to take pictures while the camera was not ready.
So you should define a boolean, e.g. isItSafeToTakePicture, to track whether the previous picture-taking action has finished.
Using a boolean like this should solve the issue; even though you may not be able to hit an exact 500 ms interval between photos, the boolean enforces the minimum safe interval.
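A sketch of that guard-flag idea, with the actual Camera calls left as comments since they only exist on Android (the CaptureGate class name and the counter are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CaptureGate {
    private final AtomicBoolean safeToTakePicture = new AtomicBoolean(true);
    int captures = 0;  // just for illustration

    // Called by the timer at the desired FPS; returns false if skipped.
    public boolean tryCapture() {
        // Proceed only if the previous capture has fully finished.
        if (!safeToTakePicture.compareAndSet(true, false)) {
            return false;  // camera still busy; skip this tick
        }
        // camera.takePicture(null, null, pictureCallback);  // Android call
        captures++;
        return true;
    }

    // Call this from onPictureTaken(), after camera.startPreview() succeeds.
    public void onCaptureFinished() {
        safeToTakePicture.set(true);
    }
}
```

Using compareAndSet makes the check-and-clear atomic, so even if the timer fires on a different thread than the camera callback, two captures can never overlap.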
To give some background information, I'm currently working on a pinball game coded in Java. I'm keeping it in an MVC design model. It has a fairly realistic physics system that handles collisions, gravity, friction etc. The system currently runs at 20 FPS.
The problem I'm having is that the physics loop that checks for collisions works by running a method that, using the current velocity of the ball, calculates the time until the next collision. The most effective approach would obviously be to keep re-running the check, to account for the movement of the ball between checks and get the result as accurate as possible, and if the time until collision is less than the time until the next check, to carry out the collision.
However, right now the system I am working with can only run the loop 20 times per second, which does not provide as accurate results as I would like, particularly during times of high acceleration, such as at ball launch.
The timer loop that I use is in the controller section of the MVC, and places a call to the physics section, located within the model. I can pass in the time remaining at the point the method is called in the controller, which the physics system can use; however, I don't know how to run the loop multiple times while still tracking the remaining time before the next screen refresh.
Ideally I would like to run this at least 10 times per screen refresh. If anybody needs any more information please just ask.
Thanks for any help.
So the actual problem is that you do not know when the collision will happen and when the next frame update is?
Shouldn't these be separate tasks? One thread that manages the collision detection and one that does the updating? Each thread can run on its own interval (Timer.schedule(...)), and they should probably be synchronized so collision/location updates are not performed while the render thread is executing.
Hope this answers your question.
Regards, Rob.
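An alternative to separate threads, closer to what the question describes, is fixed sub-stepping: split each 20 FPS frame into N smaller physics updates inside the existing timer tick. A toy sketch (the Physics class, the 1-D ball state, and the numbers are illustrative, not from the question's code):

```java
public class Physics {
    double y = 100.0;             // ball height (toy 1-D state)
    double vy = 0.0;              // vertical velocity
    static final double G = -9.8; // gravity, units per second squared

    // Advance one rendered frame of frameSeconds, split into
    // subSteps physics updates (semi-implicit Euler).
    public void stepFrame(double frameSeconds, int subSteps) {
        double dt = frameSeconds / subSteps;
        for (int i = 0; i < subSteps; i++) {
            // collision checks would run here, once per sub-step,
            // so the ball is tested 10x per screen refresh
            vy += G * dt;
            y += vy * dt;
        }
    }
}
```

The controller still ticks 20 times per second and the view still redraws once per tick; only the model's inner loop runs more often, so no extra threads or synchronization are needed.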
I want to make a program that will make a pop-up appear at a certain time in the future, eg. 5:00 tonight. Basically, the program is a reminder/notification system for appointments, meetings, etc.
My first instinct was to create a sort of "Clock Listener" that would check the computer's time every minute or so and see if currentTime == alarmTime. However, I don't know if this takes up too many resources, or if it is just bad practice to have your program constantly doing things like that. Also, for the alarm to be accurate, I think it would need to check every second rather than every minute (since if it isn't checking the seconds and is due to go off at 5:00:xx, it could go off as late as 5:00:59, which may be too late for some people's liking). Is checking the clock every second too much?
My second thought was when the program starts running, calculate how long it is until the alarm is set to go off (say, in five hours). Then, wait five hours, and then sound the alarm. But then I thought, though unlikely, it would be possible for the user to change the computer's time, and then the alarm would go off at the wrong time. So this doesn't really work.
I've seen some solutions that use threads, but I'm not familiar with those yet, so I'd rather not use them.
I'm leaning towards the first solution, but I want to make sure it's efficient and won't slow down other programs. Perhaps I'm overthinking it and checking the clock is a trivial operation, but I just wanted to make sure I'm not doing anything wrong.
The sleep solution is very straightforward, but using java.util.Timer is not much harder, and gives you a clear way to extend your utility for multiple alarms, etc. Assuming you are going to use Swing to display the notification, note that your TimerTask will need to perform some of its work on the Swing event thread. SwingUtilities.invokeAndWait(...) will help you with that.
The first solution is OK. Waking up, checking the time, and going back to sleep should not be a problem at all. You can check every second if you want, but if you only need 1-minute resolution perhaps it is enough to check e.g. every 30 seconds.
The second approach has the problem you have outlined already. If you just go to sleep for the time remaining, and the user changes the time (or the time is changed by some other means, e.g. synchronisation with a time server), the alarm would go off at the wrong time. You could avoid this if you could register some sort of hook so that your program is called back when the system time changes, but you cannot easily do this in Java.
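A sketch of the first approach's check, written so the current time is passed in from the once-per-second task; checking "at or after" rather than "equal to" also makes it robust against waking up a little late, the 5:00:59 concern (the AlarmCheck class name is my own):

```java
import java.time.LocalTime;

public class AlarmCheck {
    private final LocalTime alarmTime;
    private boolean fired = false;

    public AlarmCheck(LocalTime alarmTime) {
        this.alarmTime = alarmTime;
    }

    // Call once a second from a TimerTask, passing LocalTime.now().
    // Returns true exactly once, on the first check at or after the
    // alarm time, even if the task woke up late (e.g. at 5:00:03).
    public boolean check(LocalTime now) {
        if (!fired && !now.isBefore(alarmTime)) {
            fired = true;   // the caller shows the pop-up here
            return true;
        }
        return false;
    }
}
```

Reading the clock once a second is a trivial amount of work, and because each check reads the real current time, a user changing the system clock is handled automatically.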