Graphics spoilt by intermittent jumps - java

I have written a game app with bitmaps moving around the screen. It employs a separate thread that writes directly to a canvas. On my Samsung Galaxy Y the animation seems smooth throughout the game; however, on a "Tabtech m7" tablet the smooth graphics are interrupted by intermittent freezes of about half a second, spaced three or four seconds apart. Is it possible that this is just a feature of the (cheap) tablet hardware, or is it more likely some aspect of my programming? And if it's me, how could I go about diagnosing the cause?

Have a look in your log to see whether the garbage collector runs at roughly the times you get the freezes. If so, you could try to find out whether it is your code or the system that is allocating memory in an inappropriate way.
In DDMS you can also look at the Allocation Tracker, which could tell you what's going on.

Yes, echoing erbsman. To avoid GC, make sure you're not allocating any new objects in your game loop. GC can also be kicked off if you do a lot of string conversions (e.g., updating the score with Integer.toString(10)-style calls).
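A minimal sketch of that caching idea (the ScoreDisplay class and its "Score: " label are hypothetical): rebuild the score string only when the value actually changes, so the steady-state game loop allocates nothing.

```java
// Sketch: avoid per-frame string allocation when drawing the score.
// Instead of calling Integer.toString(score) every frame, cache the
// string and rebuild it only when the score changes.
class ScoreDisplay {
    private int lastScore = -1;
    private String scoreText = "";

    // Call every frame; allocates only on score changes.
    String text(int score) {
        if (score != lastScore) {
            lastScore = score;
            scoreText = "Score: " + score; // allocation happens here, rarely
        }
        return scoreText;
    }
}
```

While the score is unchanged, the same cached String instance is returned, so no garbage is produced per frame.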

Related

LWJGL - Reason for cyclic freezes?

I am currently working on a 2D Game that uses LWJGL, but I have stumbled across some serious performance issues.
When I render more than ~100 sprites, the window freezes for a very small amount of time. I did some tests and I found out the following:
The problem occurs whether Vsync is enabled or disabled
The problem occurs even if I cap the frames at 60
The program is not just rendering fewer frames for a short time; the rendering seems to actually pause
There are no other operations, such as matrix calculations, slowing the program down
I have already implemented batch rendering, but it does not seem to improve performance
The frequency of the freezes increases with the number of sprites
My graphics card driver is up to date
The problem occurs although the framerate seems quite acceptable: with 100 sprites rendered at the same time I get ~1500 fps, with 1000 sprites ~200 fps
I use a very basic shader; the transformation matrices are passed to the shader via uniform variables on each rendering call (once per sprite per frame). The size of the CPU/GPU bus shouldn't be an issue.
I have found a very similar issue here, but none of the suggested solutions work for me.
This is my first question here, please let me know if I am missing some important information.
It's probably GC.
Java is sadly not the best language for games because of GC and the lack of structures that can be allocated on the stack. Among similar languages, C# is often a better choice thanks to its much finer control over memory, e.g. stackalloc and value-type structs in general.
So when writing a game in a language with GC, you should make sure your game loop does not allocate too many objects; in other languages people often aim for zero or near-zero allocations in the loop.
You can create object pools for your entities/sprites, so you don't allocate new ones but re-use existing ones.
And if it's a simple 2D game, then just avoiding allocations where there is no need for them should be enough (e.g., passing two ints instead of an object holding a location on the 2D map).
You should use a profiler to confirm which changes are worth it.
There are also trickier solutions, such as storing some data in manually allocated off-heap memory to avoid object overhead, but I don't think a simple game will need those. Typical game-dev techniques like pooling and avoiding unneeded objects should be enough.
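A minimal pool sketch in plain Java, assuming a hypothetical Sprite class: obtain() re-uses a freed instance when one is available and allocates only when the pool is empty.

```java
import java.util.ArrayDeque;

// Minimal object-pool sketch: re-use sprite objects instead of
// allocating new ones inside the game loop.
class SpritePool {
    // Hypothetical Sprite class, for illustration only.
    static class Sprite {
        float x, y;
        void reset() { x = 0; y = 0; }
    }

    private final ArrayDeque<Sprite> free = new ArrayDeque<>();

    Sprite obtain() {
        Sprite s = free.poll();
        return (s != null) ? s : new Sprite(); // allocate only when the pool is empty
    }

    void release(Sprite s) {
        s.reset();     // clear state before re-use
        free.push(s);
    }
}
```

Once the pool has warmed up to the peak number of live sprites, the loop runs allocation-free.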

libGDX: FPS on Android device is low

In my game I have created many loops and methods inside render(). On my laptop the FPS ranges from 56 to 60, which is OK. However, when I run it on Android on a Galaxy Note 4, the FPS ranges from 24 to 45, which is not OK.
Now I need a new render thread, to render synchronously with:
Gdx.app.getApplicationListener().render();
Can anyone help me solve this problem?
Even a low-end laptop usually has much more processing power than a high-end smartphone, so what performs smoothly on your laptop can lag terribly on your Galaxy Note.
You've provided almost no information in the question, so the things I can suggest are general approaches:
Profile the game on your phone and find possible bottlenecks so you can deal with them.
http://developer.android.com/tools/debugging/debugging-tracing.html
OpenGL profiling will also be very helpful; you can monitor context switches and more with it.
https://github.com/libgdx/libgdx/wiki/Profiling
Also, as a general rule of thumb, do not create new objects in the render loop, or do it as little as possible if absolutely necessary. Initializing a pool of reusable objects at startup will help you a lot.
https://github.com/libgdx/libgdx/wiki/Memory-management
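When a full profiler is awkward to attach on-device, a crude timing helper can at least show which phase of the render loop eats the frame budget. This is a hypothetical FrameTimer sketch, not a libGDX API:

```java
// Poor man's profiler sketch: time individual phases of the render
// loop with System.nanoTime() to see where the frame budget goes.
class FrameTimer {
    private long start;

    void begin() { start = System.nanoTime(); }

    // Elapsed milliseconds since the last begin().
    double endMs() { return (System.nanoTime() - start) / 1_000_000.0; }
}
```

Typical use: call begin() before a phase (physics, sprite batching, draw calls), endMs() after it, and log any phase that exceeds its share of the ~16.6 ms frame budget.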

Android GPU profiling - OpenGL Live Wallpaper is slow

I'm developing a Live Wallpaper using OpenGL ES 3.0. I've set up according to the excellent tutorial at http://www.learnopengles.com/how-to-use-opengl-es-2-in-an-android-live-wallpaper/, adapting GLSurfaceView and using it inside the Live Wallpaper.
I have a decent knowledge of OpenGL/GLSL best practices, and I've set up a simple rendering pipeline where the draw loop is as tight as possible: no re-allocations, one static VBO for non-changing data, a dynamic VBO for updates, only one draw call, no branching in the shaders, et cetera. I usually get very good performance, but at seemingly random yet recurring times, the framerate drops.
Profiling with the on-screen bars gives me intervals where the yellow bar ("waiting for commands to complete") shoots away and takes everything above the critical 60fps threshold.
I've read all the resources on profiling and interpreting those numbers that I could get my hands on, including the nice in-depth SO question here. The main takeaway from that question seems to be that the yellow bar indicates time spent waiting for blocking operations to complete and for frame dependencies. I don't believe I have any of those; I just draw everything every frame. No reading back.
My question is broad - but I'd like to know what things can cause this type of framerate drop, and how to move forward in pinning down the issue.
Here are some details that may or may not have impact:
I'm rendering on demand, onOffsetsChanged is the trigger (render when dirty).
There is one single texture (created and bound only once), 1024x1024 RGBA. Replacing the one texture2D call with a plain vec4 seems to help remove some of the framerate drops. Reducing the texture size to 512x512 does nothing for performance.
The shaders are not complex, and as stated before, contain no branching.
There is not much data in the scene. There are only ~300 vertices and the one texture.
A systrace shows no suspicious methods - the GL related methods such as buffer population and state calls are not on top of the list.
Update:
As an experiment, I tried rendering only every other frame, i.e., not requesting a render on every onOffsetsChanged (swipe left/right). This was horrible for the look and feel, but it got rid of the yellow lag spikes almost completely. This seems to tell me that 60 render requests per second is too much, but I can't figure out why.
My question is broad - but I'd like to know what things can cause this type of framerate drop, and how to move forward in pinning down the issue.
(1) Accumulation of render state. Make sure you "glClear" the color/depth/stencil buffers before you start each render pass (although if you are rendering directly to the window surface this is unlikely to be the problem, as state is guaranteed to be cleared every frame unless you set EGL_BUFFER_PRESERVE).
(2) Buffer/texture ghosting. Rendering is deeply pipelined, but OpenGL ES tries to present a synchronous programming abstraction. If you try to write to a buffer (SubBuffer update, SubTexture update, MapBuffer, etc) which is still "pending" use in a GPU operation still queued in the pipeline then you either have to block and wait, or you force a copy of that resource to be created. This copy process can be "really expensive" for large resources.
(3) Device DVFS (dynamic voltage and frequency scaling) can be quite sensitive on some devices, especially for content which happens to sit just around a decision point between two frequency levels. If the GPU or CPU frequency drops, then you may well get a spike in the amount of time a frame takes to process. For debug purposes some devices provide a means to fix the frequency via sysfs - although there is no standard mechanism.
(4) Thermal limitations - most modern mobile devices can produce more heat than they can dissipate if everything is running at high frequency, so the maximum performance point cannot be sustained. If your content is particularly heavy then you may find that thermal management kicks in after a "while" (1-10 minutes depending on device, in my experience) and forcefully drops the frequency until thermal levels drop within safe margins. This shows up as somewhat random increases in frame processing time, and is normally unpredictable once a device hits the "warm" state.
If it is possible to share an API sequence which reproduces the issue it would be easier to provide more targeted advice - the question is really rather general and OpenGL ES is a very wide API ;)
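Point (2) above is commonly avoided by rotating through two or more buffers, so the CPU never writes into a buffer the GPU may still be reading. A sketch of just the index bookkeeping (all GL calls omitted; RingIndex is a hypothetical helper):

```java
// Sketch of N-buffering index bookkeeping to avoid buffer ghosting:
// each frame the CPU writes into the next slot in the ring, so the
// buffer being updated is one the GPU has long since finished with.
class RingIndex {
    private final int count;
    private int current = 0;

    RingIndex(int count) { this.count = count; }

    // Advance to the next buffer slot and return its index. With 3+
    // buffers, the slot being written is at least two frames behind
    // the one currently being rendered.
    int next() {
        current = (current + 1) % count;
        return current;
    }
}
```

Each slot would correspond to a separately allocated VBO (or texture); binding the slot returned by next() before each update avoids stalling on or ghosting an in-flight resource.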

Java (JavaSound): Is "clip.play()" an expensive call?

I've read here on StackOverflow that every time you play a clip in JavaSound, behind the scenes it creates a thread to play it. If it is true (and if it isn't please tell me, as I have not found any documentation/source on that), would it be considered as an expensive call, since creating threads is an expensive task in any OS/JVM? I am not sure yet, but I may need to play 10 to 20 clips concurrently, so I was wondering if that would be a problem.
PS: If it is an expensive call for other reasons besides creating threads, please let me know.
Threads are NOT particularly expensive. I've personally made a program that has over 500 running; server programs can spawn considerably more than that.
Sound processing is not inexpensive, but I don't know that it is much more CPU-intensive than many graphics effects, like lighting in 3D. I made a program that both played a sound and drew a "glow ball" that grew and faded while the sound was playing. The "glow ball" continually updated a RadialGradientPaint to achieve this effect. I ran into a ceiling of about 10 balls and sounds, and the graphical balls were the bigger processing load.
Still, you might not be able to do a whole lot else with 17 Clips playing. You'll have to test it, and will hear dropouts if the cpu can't keep up.
Your 17 Clips may take up a huge amount of RAM. You know that they are all loaded into memory, yes? At 44,100 samples for each second, and typically 4 bytes per sample (stereo, 16-bit PCM), that starts to add up quickly.
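The arithmetic can be made concrete; assuming CD-quality stereo 16-bit PCM as described above (the ClipMemory helper is hypothetical):

```java
// Worked example of the memory arithmetic above: bytes used by a
// clip loaded fully into RAM at 44.1 kHz, stereo, 16-bit PCM.
class ClipMemory {
    // bytes = seconds * sampleRate * channels * bytesPerSample
    static long bytes(double seconds) {
        return (long) (seconds * 44_100 * 2 * 2);
    }
}
```

A single 30-second clip is 30 × 44,100 × 4 = 5,292,000 bytes (about 5 MB), so 17 such clips held in memory together approach 86 MB.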
So, there may be reasons to consider using SourceDataLines instead, especially for the longer sounds.
Also, some operating systems don't handle multiple sounds very well. I've had problems come up with Linux in particular. I ended up writing a program that mixes all the playing sounds into one output SourceDataLine as a way to handle this.
Another way I get some efficiency is by loading my own custom-made Clip. I've given this Clip multiple cursors (pointers) that can move independently through the audio data. This way, I can play a Clip multiple times (and at varying speeds), overlapping. To do this with a standard Java Clip, you have to load it into RAM multiple times. So, you might consider writing something like that. The output from the multiple cursors can be summed and played via a SourceDataLine.
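The multi-cursor idea can be sketched as follows (the Mixer class is hypothetical; real code would also handle stereo interleaving, cursor advancement, and playback speed):

```java
// Sketch of mixing several independent read cursors over one shared
// sample array into a single output buffer, as described above. Each
// cursor is just a read position into the same in-memory audio data.
class Mixer {
    static short[] mix(short[] samples, int[] cursors, int frameLen) {
        short[] out = new short[frameLen];
        for (int i = 0; i < frameLen; i++) {
            int sum = 0;
            for (int pos0 : cursors) {
                int pos = pos0 + i;
                if (pos < samples.length) sum += samples[pos];
            }
            // Clamp to the 16-bit range to avoid wrap-around distortion.
            out[i] = (short) Math.max(Short.MIN_VALUE,
                                      Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```

The summed buffer would then be written to a single SourceDataLine, so any number of overlapping playbacks costs one output line rather than one Clip copy each.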

Android touch event list and garbage collector

My game has been stuttering because of the GC and it ranges from 40ms to 140ms.
My game is not creating new objects or anything in the update or render threads, so I'm pretty sure my project is clean EXCEPT for one thing.
In the update method I have a List<TouchEvents> touchEvents = getTouchEvents();
I am pretty sure this is what is causing the GC to kick in, as it only GCs while I'm moving around, which requires touching the screen (using the ACTION_MOVE event).
How would I optimize or prevent this?
EDIT:
Now I'm starting to think it has to do with my FPS limit method.
I'm assuming that since I am limiting the FPS to 30, the GC does not have enough idle time to run without interfering with my game.
I came up with this theory after I took the limiter off and ran my game at full 60FPS.
The game goes PERFECTLY SMOOTH when running at 60FPS but not at 30FPS.
Any ideas?
Personally, I wouldn't recommend capping the FPS. Instead, let the game run as fast as it can and scale movement and physics by the elapsed (delta) time.
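A minimal sketch of that approach (the Mover class is hypothetical): advance position by velocity times elapsed time, so the distance covered is the same whether the game runs at 30 or 60 FPS.

```java
// Frame-rate-independent movement sketch: position advances by
// velocity * elapsed time, so two 1/60 s updates cover exactly the
// same distance as one 1/30 s update.
class Mover {
    float x = 0;
    final float speedPerSecond;

    Mover(float speedPerSecond) { this.speedPerSecond = speedPerSecond; }

    void update(float deltaSeconds) {
        x += speedPerSecond * deltaSeconds;
    }
}
```

In a real loop, deltaSeconds would come from measuring the time between frames (e.g. with System.nanoTime()).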
