I created a game app in libGDX. It usually works well, but sometimes it goes into slow motion, and in my opinion it is a RAM problem. I have a Main class that extends Game; in it I create a PlayScreen instance, and when the player dies I create the PlayScreen again. I believe the memory of the old screen is never released, so after many deaths it accumulates. In fact, if I run the application with the task manager open, memory usage grows with every death.
This is the code:
public class MyGdxGame extends Game {
    private PlayScreen play_screen;
    private SpriteBatch batch;

    @Override
    public void create() {
        batch = new SpriteBatch();
        play_screen = new PlayScreen(this);
        setScreen(play_screen);
    }

    @Override
    public void render() {
        if (play_screen.death) {
            play_screen = new PlayScreen(this);
            setScreen(play_screen);
        }
        super.render(); // delegate rendering to the current screen
    }
}
So I did a test:
public void render() {
    do {
        play_screen = null;
        play_screen = new PlayScreen(this);
        setScreen(play_screen);
    } while (1 != 2); // intentionally loops forever
}
I ran the app with the task manager open: memory increases rapidly until the app crashes. So how can I clean up the memory?
Many LibGDX objects have to be disposed of manually, see this. They implement the interface Disposable, which has the method dispose().
I don't see you disposing any of your used resources, your SpriteBatch is one of these objects.
When you're done with it, call batch.dispose(). Setting it to null afterwards is optional but recommended, because using a disposed resource might lead to unintended behavior. The code will look something like this and should run when you no longer need the batch:
if (batch != null) {
    batch.dispose();
    batch = null;
}
LibGDX memory leaks are almost always caused by one or more resources not being disposed.
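A likely fix for the screen-recreation leak above is to dispose the old screen before replacing it. Here is a plain-Java sketch of that pattern; the Disposable interface and PlayScreen class below are simplified stand-ins for illustration, not the real LibGDX types:

```java
// Minimal sketch of the "dispose before replace" pattern.
interface Disposable {
    void dispose();
}

class PlayScreen implements Disposable {
    boolean disposed = false;

    @Override
    public void dispose() {
        disposed = true; // in a real screen: free textures, fonts, etc. here
    }
}

public class ScreenSwap {
    static PlayScreen current;

    // Replace the current screen, releasing the old one's resources first.
    static PlayScreen restart() {
        if (current != null) {
            current.dispose(); // without this call the old screen leaks
        }
        current = new PlayScreen();
        return current;
    }

    public static void main(String[] args) {
        PlayScreen first = restart();
        PlayScreen second = restart();
        System.out.println(first.disposed);  // old screen was disposed
        System.out.println(second.disposed); // new screen is still live
    }
}
```

In the real game, PlayScreen.dispose() would dispose everything the screen itself owns; the shared SpriteBatch should only be disposed once, when the whole game shuts down.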
I am writing a JavaFX application that receives data points on a socket and visualizes them in real time. The problem is that the JavaFX rendering is too slow. I have a Swing implementation that runs fast enough but I need to use JavaFX instead.
The constraints that I am working within are:
The control for the visualization must only be updated by the JavaFX application thread (I believe this is required for all JavaFX and Swing applications).
The visualization should be updated smoothly from the perspective of the human eye. Around 10 updates per second would be sufficient. Once every second would not be sufficient.
The incoming data rate is high enough (about 50 events per second which is not that high in other contexts) and the per event processing is expensive enough that the incoming data must be received and processed in a thread other than the JavaFX application thread so that the GUI doesn't block (I believe this is a somewhat common requirement for many GUI applications).
My approach so far has been to use a Canvas JavaFX node as the visualization control and for the reception thread to schedule updates to the Canvas to run later in the JavaFX application thread, like this.
public void onEvent(Event event) {
    // ... do processing ...
    Platform.runLater(new Runnable() {
        @Override
        public void run() {
            graphics.setFill(...);
            graphics.fillRect(...);
        }
    });
}
I have thought of a couple of approaches that might speed this up:
Use a WritableImage instead of a Canvas for the visualization. The downside is that WritableImage/PixelWriter doesn't seem to have many drawing methods, for example it doesn't even have fillRect. I think I would have to implement my own versions and my versions would probably be slower.
Have a Canvas object owned by the thread that processes incoming data, and copy from that canvas to the canvas that is a node in the scene graph on the JavaFX application thread. The copy would probably be done with code along these lines: sceneCanvas.getGraphicsContext2D().drawImage(processingCanvas.snapshot(new SnapshotParameters(), null), 0, 0);. The downside is that I think this isn't thread safe, and the snapshot call seems relatively expensive.
Render to an AWT BufferedImage in the thread that processes incoming data and then copy from the BufferedImage to the Canvas using SwingFXUtils.toFXImage(). The downside of this is that the threading semantics seem unclear and it seems a little silly to use AWT.
Would you be able to suggest some potential approaches?
Thank you!
I assume the main problem is that your code pushes too many drawing tasks into the queue of the FX Application Thread. Usually 60 drawing operations per second is sufficient, which matches the refresh rate of your monitor. If you get more "incoming data" events than that, you will draw more often than necessary and waste CPU. So you must decouple data processing from painting.
One solution is to use an AnimationTimer. Its handle method will be called in every animation frame, so usually 60 times per second. The animation timer handles redrawing in case new data has been processed.
// Generic task that redraws the canvas when new data arrives,
// but not more often than once per animation frame (usually 60 times per second).
public abstract class CanvasRedrawTask<T> extends AnimationTimer {
    private final AtomicReference<T> data = new AtomicReference<T>(null);
    private final Canvas canvas;

    public CanvasRedrawTask(Canvas canvas) {
        this.canvas = canvas;
    }

    public void requestRedraw(T dataToDraw) {
        data.set(dataToDraw);
        start(); // in case it is not already started
    }

    @Override
    public void handle(long now) {
        // check if new data is available
        T dataToDraw = data.getAndSet(null);
        if (dataToDraw != null) {
            redraw(canvas.getGraphicsContext2D(), dataToDraw);
        }
    }

    protected abstract void redraw(GraphicsContext context, T data);
}
// somewhere else, in your concrete canvas implementation
private final CanvasRedrawTask<MyData> task = new CanvasRedrawTask<MyData>(this) {
    @Override
    protected void redraw(GraphicsContext context, MyData data) {
        // TODO: redraw canvas using context and data
    }
};

// may be called by a different thread
public void onDataReceived(...) {
    // process data / prepare it for the redraw task
    // ...
    // hand the data over to the redraw task
    task.requestRedraw(dataToDraw);
}
I have a GameScreen, and at the end of a level I set the screen back to a fresh GameScreen when the user taps the restart button. I do this with this.setScreen(new GameScreen(game));, and before that line I dispose the screen itself, all the textures used in the screen, the font files, and so on: everything except the Box2D World (disposing it gives me a native error and makes the game crash). But even though I dispose the assets before setting the screen, the game still crashes after 15-20 restarts.
I have analyzed the memory usage by printing the Java heap size, and found that memory usage increases on every restart up to a certain point and then drops back to a low point, like this:
- Restart1: 10MB
- Restart2: 13MB
- Restart3: 15MB
- Restart4: 10MB
- Restart5: 11MB
- Restart6: 14MB
- Restart7: 9MB
I have read about memory usage and found that this kind of sawtooth behavior is normal. But my game still crashes after a few restarts, without even giving an error message.
What could be causing this?
EDIT: I tested the game on a ZTE Blade and found that the game gets slower with every reset, and it still crashes after around 15-20 resets.
The memory up-and-down pattern is standard for garbage collection; you only have to worry if the heap becomes unable to reach the previous low point after a collection, as that would indicate a memory leak. It sounds like there might be something you aren't disposing of, but why dispose anything at all if you are just going to reload the same assets?
Switch to using the AssetManager. If you call AssetManager.load in your Screen constructor, AssetManager.finishLoading in your Screen.show method, and AssetManager.unload in your Screen.hide method, your GameScreen assets should never actually be unloaded, because AssetManager does reference counting; the assets would only really be unloaded if you navigated to a different screen. Don't forget to call AssetManager.update in your render method.
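The reference counting this relies on can be illustrated with a small plain-Java sketch. This is not the real AssetManager API; the class and methods below are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy reference-counting cache: an asset is only truly freed when its
// count drops to zero, which is what lets overlapping load/unload calls
// during a screen restart keep the asset alive.
public class RefCountingCache {
    private final Map<String, Integer> counts = new HashMap<>();

    public void load(String name) {
        counts.merge(name, 1, Integer::sum);
    }

    // Returns true if the asset was actually freed.
    public boolean unload(String name) {
        int c = counts.getOrDefault(name, 0) - 1;
        if (c <= 0) {
            counts.remove(name);
            return true; // last reference gone: dispose the real resource here
        }
        counts.put(name, c);
        return false; // still referenced elsewhere: keep it loaded
    }

    public boolean isLoaded(String name) {
        return counts.containsKey(name);
    }
}
```

On a restart, the new screen's load runs before the old screen's unload, so the count goes 1 → 2 → 1 and the underlying texture is never actually destroyed and reloaded from disk.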
You have probably solved your problem by now, since it has been almost a year, but I will put this here anyway; hopefully it helps people who are looking for solutions to this kind of error.
I have a similar mechanism in one of my applications, and I experienced a native crash with the Box2D World too. It happened when I used setScreen(new GameScreen(game)) after disposing the original one.
I used to have the following kind of initialization:
public class GameScreen implements Screen {
    /* Global variables */
    ...
    private static final World world = new World(new Vector2(0, 0), true); // create a world with no gravity
    ...
    public GameScreen(SGM game) { ... }
}
It turned out that I had to initialize the world in the constructor (which also means the field can no longer be static final). Now I have the following, which works without any crash no matter how many times I dispose and recreate it:
public class GameScreen implements Screen {
    /* Global variables */
    ...
    private World world; // no longer static final, so each new screen gets a fresh world
    ...
    public GameScreen(SGM game) {
        this.game = game;
        world = new World(new Vector2(0, 0), true); // create a world with no gravity
        ...
    }
}
I'm not sure if that was also the cause in your case but well, it's just another suggestion.
I am creating a simple pet simulator; it is my first project, created for an assignment. Most of the functionality is working fine, and I have rewritten it many times as I have gotten better at laying out projects. However, while adding a timer I have encountered a massive flaw.
After running the project my game seems to work fine: images are being rendered (perhaps not in the most efficient way) and my timer/FPS counter works well. However, ever since I added this timing/FPS code, the FPS has been slowly dropping until the game freezes up and crashes.
I followed Ninja Cave's timing tutorial for LWJGL. http://ninjacave.com/lwjglbasics4
Here is my source code. Not all classes are included, as there are quite a few, but I can add them if need be. I have tried to include just the rendering-focused ones.
Main Class
http://pastebin.com/BpkHHnnj
Rendering Class
http://pastebin.com/QtJeYw1a
Texture Loader Class
http://pastebin.com/RX5iDXQm
Main Game State Class
http://pastebin.com/pvgDLkeM
The Pet Class
http://pastebin.com/VF6cq9S4
Thanks
I'm currently working on fixing your issue, but your renderer.readyTexture() just spins wildly out of control and is essentially leaking memory, fast, which explains the drop in speed.
Edit: I got the memory usage to stabilize.
Add public Map<String, Texture> loadedTextures = new HashMap<String, Texture>(); to your renderer class in render.java, and change your renderer.readyTexture() method to this:
public void readyTexture(String textureDir) {
    if (!loadedTextures.containsKey(textureDir) || loadedTextures.get(textureDir) == null) {
        texture = lt.loadTexture(textureDir);
        loadedTextures.put(textureDir, texture);
    } else {
        texture = loadedTextures.get(textureDir);
    }
    textureDirString = textureDir;
    texture.bind();
    texLoaded = true;
    System.out.println("Loaded: " + textureDirString);
}
Now that you have the code: the Map/HashMap stores the already-loaded textures. In readyTexture(), I check whether the Map contains the key textureDir and, if it does, whether the stored value is null. If the entry is not in the Map or the contained Texture is null, we load the texture and store it in the Map; otherwise we pull it out of the Map and bind it.
Before, you were loading the image every time and the garbage collector was not removing it; this is possibly an issue with Slick, but if you cache everything properly it works just fine.
I hope this helps.
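The same load-once idiom can be written more compactly with Map.computeIfAbsent. A self-contained sketch, where a String stands in for a loaded Texture and loadTexture is a hypothetical loader, not the real Slick/LWJGL one:

```java
import java.util.HashMap;
import java.util.Map;

public class TextureCache {
    private final Map<String, String> loaded = new HashMap<>();
    int diskLoads = 0; // counts how often the expensive load actually runs

    // Hypothetical stand-in for the real texture loader.
    private String loadTexture(String path) {
        diskLoads++;
        return "texture:" + path;
    }

    public String readyTexture(String path) {
        // computeIfAbsent only invokes the loader on a cache miss,
        // so repeated frames reuse the same texture object.
        return loaded.computeIfAbsent(path, this::loadTexture);
    }
}
```

Calling readyTexture("pet.png") once per frame now performs exactly one disk load for the lifetime of the cache, instead of one per frame.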
One part of my code looks like this:
public void goToMainMenu() {
    Assets.LoadMenuTexutres();
    Assets.unloadGameTexutres();
    game.setScreen(new MainMenuScreen(game));
}
It works, but when I call the method I get around a 0.5 second delay (because loading textures is heavy in OpenGL) before I get to the MainMenuScreen, and then all the animations are choppy for around 0.3 seconds. Why do I get this choppy lag after loading the assets/textures, and how do I prevent it?
Cheers!
You can use the built-in AssetManager.
My guess is that your delays are because of:
All the loading of textures
All the unloading of textures, and then having the garbage collector run on the now free Bitmap objects
To avoid this delay spoiling the user experience, I suggest you move the loading and unloading into an AsyncTask and show a ProgressDialog saying "Loading..." or something similar.
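The move-work-off-the-render-thread idea can also be sketched with a plain-Java ExecutorService (AsyncTask is Android-specific). In this self-contained sketch the Thread.sleep calls stand in for the expensive texture loading and for drawing loading-screen frames:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundLoad {
    // Loads "assets" on a worker thread while the caller stays responsive.
    static String loadAssets() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Kick off the heavy load off the UI/render thread.
        Future<String> menuAssets = pool.submit(() -> {
            Thread.sleep(100); // stand-in for loading textures
            return "menu textures ready";
        });

        // The render thread keeps running; in a real game it would draw a
        // "Loading..." indicator each frame until the future completes.
        while (!menuAssets.isDone()) {
            Thread.sleep(10); // stand-in for drawing one loading frame
        }

        String result = menuAssets.get();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadAssets());
    }
}
```

The point is the same as with AsyncTask or AssetManager.update(): the blocking work happens elsewhere, and the UI thread only polls for completion.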
I'm curious whether anyone knows if this is an actual optimization or just unnecessary bloat.
I have some screens that are pushed and popped from the stack via user interaction and all of them have the same background image.
Instead of loading the image on each screen, I have implemented a static method that loads the image from disk the first time it is accessed, then keeps the bitmap in a static variable for future use.
Is there some way to profile this or is anyone aware of a downside to this?
public final class App {
    private static Bitmap _bgBitmap = null;

    /*
     * Get a standard background for screens.
     * Caches the background in memory for less disk access.
     */
    public static Bitmap getScreenBackground() {
        if (_bgBitmap == null) {
            try {
                _bgBitmap = Bitmap.getBitmapResource("ScreenBG.jpg");
            } catch (Exception e) {
                // ignored: caller receives null and can fall back to a plain background
            }
        }
        return _bgBitmap;
    }
}
I suppose the only reason for having a Bitmap as a static field somewhere is to speed up creating another screen that uses the same bitmap. IMHO this is a nice approach; however, the answer to your question may differ depending on how exactly you use the bitmap:
Do you draw it directly on Graphics instance in some paint()?
Do you resize it before drawing?
Do you create a Background instance from the bitmap? In this case you'll need to investigate whether the Background instance creates a copy of the bitmap for its internal usage (if so, RAM consumption may be doubled (2 bitmaps), so it would be better to share the Background instance across screens rather than the bitmap).
Another point: it sounds like there may be a case when no screen instance uses the bitmap. If so, you could detect that case and nullify _bgBitmap, so that if the OS decides to free some RAM it can GC the bitmap instance. However, if the app workflow implies such a screen will be created again soon, it may be cheaper to leave the bitmap alive.
Also, how large is the bitmap? If it is relatively small, then don't bother with further optimization (your current lazy loading is good enough). You can compute the RAM it consumes in bytes from its width and height: int size = 4 * width * height. You can also log or pop up the time taken to load the bitmap from resources; if it is small, maybe you don't even need the lazy loading. Note that timings should be taken on real devices only, since BB simulators are many times faster than real devices.
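For a concrete number, the 4-bytes-per-pixel estimate above (assuming an uncompressed 32-bit ARGB bitmap) gives, for a hypothetical 360x480 full-screen background:

```java
public class BitmapSize {
    // RAM consumed by an uncompressed 32-bit ARGB bitmap: 4 bytes per pixel.
    static int sizeInBytes(int width, int height) {
        return 4 * width * height;
    }

    public static void main(String[] args) {
        int bytes = sizeInBytes(360, 480); // hypothetical screen-sized background
        System.out.println(bytes);         // 691200
        System.out.println(bytes / 1024);  // 675 (KB)
    }
}
```

So a single full-screen background costs well under a megabyte of RAM, which supports the "don't bother with further optimization" advice for one cached bitmap.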