I'm working on an Android game. The concept is sound, but I'm constantly frustrated by OutOfMemoryError crashes, and I find myself spending the majority of my time trying to avoid them.
As of now, all animation is done on a Canvas using PNG image resources. I'm guessing the bigger apps don't use this method, since games far more graphically impressive than mine exist and run much better.
The problems start when I first load the image resources and assign them to variables. I load all the images up front in a constructor, and I often hit the OutOfMemoryError at that point, especially on devices running Android 2.1.
So my question is: how can I avoid this, or change the way I'm doing the animation to be easier on memory?
Re-scaling helps temporarily, but it's unacceptable as a long-term solution; I want the images to look good. And as I said, other apps look good and have much more animation, so I'm sure this is possible.
Another suggestion I've received is to control object lifecycles more closely and only allocate images when I need them. That doesn't seem ideal either, because due to the structure of my app, most of the images need to be drawn most of the time, or at least may be needed at any moment.
My last idea is to add code that de-allocates memory when an image is not immediately in use. Again, this doesn't seem ideal: when, for instance, there is a single enemy on the canvas, every frame change would force a re-allocation of its image, killing performance.
Does anyone have any general ideas I should look into? I know I'm missing something big. Can I manually enlarge the available memory? Is there an image format with a smaller memory footprint?
I'm trying to get a better grasp of the basics of what's at work here.
My entire drawable folder is only 864 KB; that seems ridiculous to me. What is happening that makes 864 KB worth of images exceed the memory limit? I'd like to know specifically where the problem occurs. I need a solution, not a workaround.
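For reference, here is my rough back-of-the-envelope arithmetic for why decoded bitmaps dwarf the on-disk file sizes (the image dimensions below are hypothetical, just to illustrate the multiplication):

```java
// Sketch: why small PNG files balloon in memory once decoded.
// The on-disk size of a PNG is irrelevant; a decoded bitmap costs
// width * height * bytesPerPixel. The dimensions below are made up.
public class BitmapMemory {
    static long decodedBytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        // A 480x320 PNG might be ~40 KB on disk, but decoded as
        // ARGB_8888 (4 bytes per pixel) it needs:
        long base = decodedBytes(480, 320, 4);   // 614,400 bytes
        // And an mdpi asset loaded on an hdpi device can be scaled up
        // 1.5x in each dimension at decode time:
        long scaled = decodedBytes(720, 480, 4); // 1,382,400 bytes
        System.out.println(base + " -> " + scaled);
    }
}
```

A handful of full-screen frames at 4 bytes per pixel can therefore exceed the per-app heap limit of older devices (around 16-24 MB) even though the compressed files total well under 1 MB.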
Edit: I've also noticed that OpenGL seems to be the alternative to the Canvas I've been using. I've never used OpenGL. Is this what I'm looking for? If so, how difficult would it be to switch from drawing to a Canvas to the equivalent in OpenGL?
Related
It seems that in Android with OpenGL, when you rotate the screen, the activity gets recreated. Does this cause all the OpenGL programs to be unloaded from memory? When I call GLES20.glUseProgram(savedProgramId); it says there is no such program. What am I doing wrong? (By the way, I keep my program id in a static field.)
You can make changes to your manifest to indicate that you will handle screen orientation changes yourself.
See android:configChanges and the orientation value here: http://developer.android.com/guide/topics/manifest/activity-element.html
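A minimal sketch of that manifest entry (the activity class name is a placeholder, not from your project):

```xml
<!-- Hypothetical activity entry; ".GameActivity" is a placeholder. -->
<activity
    android:name=".GameActivity"
    android:configChanges="orientation|keyboardHidden">
</activity>
```

With this in place the activity is not destroyed on rotation; instead onConfigurationChanged() is called. (On newer API levels you would also list screenSize, since rotation reports a size change there.)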
However, you'll still have the problem that your OpenGL context will be lost when the user switches between apps.
The most correct thing to do is to fully handle loss and recreation of the OpenGL context and all associated resources. In a large and complex project this can be very difficult.
A reasonable alternative is to use setPreserveEGLContextOnPause (http://developer.android.com/reference/android/opengl/GLSurfaceView.html#setPreserveEGLContextOnPause%28boolean%29) which is available on Android 4.0 and above.
The documentation states that the OpenGL context might not always be preserved, but my opinion is that it works well enough to ship with and avoids a lot of complicated code. When your app is in the background, it might get terminated due to memory pressure anyway, so if it's terminated occasionally due to a device's limit on EGL contexts then that seems acceptable to me.
My team is working on a graphing project related to market trading. We are hitting a roadblock with SWT's performance when we have to draw thousands of data points at a time. Are there any alternatives, either within SWT or outside it (such as OpenGL), that would give us a performance boost?
Extra information: We are designing within Eclipse RCP.
Edit: To clarify, these are dynamic charts, not static.
I ended up manipulating ImageData to get the speed I needed. Building an image off-screen and then drawing it afterwards allows for a significant speed increase. I hope this helps anyone else who runs into this issue.
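The idea, sketched in plain Java with AWT's BufferedImage rather than SWT's ImageData/GC (the class names differ, but the off-screen buffering principle is the same): render all the data points into an image once, then each repaint only has to copy the finished image instead of re-drawing thousands of primitives.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.Random;

// Sketch of off-screen buffering: draw all data points into an image
// once; subsequent repaints only need to blit the finished image.
public class OffscreenChart {
    static BufferedImage renderPoints(int w, int h, int[][] points) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w, h);
        g.setColor(Color.BLUE);
        for (int[] p : points) {
            g.fillRect(p[0], p[1], 2, 2); // one small mark per data point
        }
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[][] pts = new int[10_000][2];
        for (int[] p : pts) { p[0] = rnd.nextInt(800); p[1] = rnd.nextInt(600); }
        BufferedImage chart = renderPoints(800, 600, pts);
        // A paint callback would now draw `chart` with a single image blit.
        System.out.println(chart.getWidth() + "x" + chart.getHeight());
    }
}
```

In SWT the equivalent is drawing into an Image via a GC (or writing pixels into an ImageData directly, as I did), then handing the finished Image to the paint listener.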
I am creating an app that requires a sound (or sounds) to be played potentially every ~25 ms (300 beats per minute, with up to 8 "plays" per beat).
At first I used SoundPool to accomplish this. I have three threads: one updates the SurfaceView animation, one updates the time using System.nanoTime(), and one plays the sounds (MP3s) using SoundPool.
This works, but it seems to use a lot of processor power: any time a background process runs, such as a Wi-Fi rescan or GC, it starts skipping beats here and there, which is unacceptable.
I am looking for an alternative solution. I've looked at mixing and also the JET engine.
The JET engine doesn't seem like a solution, as it only plays MIDI. My app requires high-quality sounds (recordings of actual instruments). (Correct me if I'm wrong about MIDI not being high quality.)
Mixing seems very complicated on Android: it appears you must first get the raw PCM data (which takes up a lot of memory) and also generate silence between sounds. I'm not sure this is the most elegant solution, since my app has a variable speed (BPM) controlled by the user.
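To illustrate what I understand mixing to involve (the sample values are made up): each output sample is the sum of the corresponding input samples, clamped so it stays in the signed 16-bit range. On Android the mixed buffer would then be streamed to an AudioTrack; the mix itself is plain arithmetic.

```java
// Sketch of PCM mixing: sum 16-bit samples and clamp ("clip") the
// result into the signed 16-bit range. Sample values are made up.
public class PcmMixer {
    static short[] mix(short[] a, short[] b) {
        int n = Math.max(a.length, b.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            int sa = i < a.length ? a[i] : 0; // missing samples = silence
            int sb = i < b.length ? b[i] : 0;
            int sum = sa + sb;
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE; // clip high
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE; // clip low
            out[i] = (short) sum;
        }
        return out;
    }

    public static void main(String[] args) {
        short[] kick = {1000, 2000, -3000};
        short[] hat  = {500, 30000, 30000, 100};
        System.out.println(java.util.Arrays.toString(mix(kick, hat)));
    }
}
```

The variable BPM then only affects *where* in the output buffer each sound's samples start; the "silence" between hits is just zero-valued samples rather than something that has to be created explicitly.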
If anyone is experienced in this area, I would GREATLY appreciate any advice.
Thank you
After downloading and using Xuggler, my initial impressions are very good: it supports a whole host of codecs, it was relatively hassle-free to get going, and the getting-started tutorial videos explained all the necessary concepts very clearly.
However, after playing around with it for a couple of days, I'm really tearing my hair out over getting the audio and video to sync up nicely. It's fine when playing normally, but once I add pausing, seeking, and handling of the occasional six-second pause while my external hard drive spins up, it becomes an absolute nightmare.
I've partly implemented something already, but it's nowhere near perfect: you can seek around a few times, but after a while it still drifts.
I can't help thinking this is a common use case for Xuggler, and that someone must have done this sort of thing already, much better than I have. Alas, I can't find any examples beyond the ones on the website. Is there a higher-level API that manages all the audio/video sync issues and just provides high-level controls (play, pause, stop, etc.)? I have no problem doing it myself if nothing exists already, but I've never been a fan of reinventing the wheel (especially when my new wheel would in all likelihood be worse than the old one!).
This is really a two-part answer. The first part is: yes, there is a higher-level "player" framework here. It's in its early stages, but it's much better than anything I could have cobbled together quickly, and I'm sure the person running it would be open to improvements to the code.
Secondly, I didn't actually go with the above at all; I looked at VLCJ instead, which uses libVLC, which in turn has all the synchronisation handled nicely. To embed multiple players in the application reliably you need to use out-of-process players (see here for how I went about doing it), but once that framework is in place it works reliably, fast, and overall very well.
I have been developing Android applications for three to four months. I am a novice, but fairly well exposed to the fundamentals of Android application development. However, I have found it really painful to develop applications with lots of images. One of my applications has around 10 to 13 images (small enough to fit the screen size), and the problem is that I have to make different copies of each:
HDPI - high-density screen support
MDPI - medium-density screen support
LDPI - low-density screen support
I have come up with an idea.
Idea: Keep only the MDPI images in the drawable folder. When my application is installed for the first time, it detects which density the device supports. If the handset is MDPI, a built-in method uses the bundled images directly; otherwise it scales the images up or down and stores the results in internal storage for future use. When the user uninstalls my application, I will remove these images from internal storage.
Now this idea raises a few questions.
Questions:
Is this idea feasible, and is it programmatically possible?
If it is, should I really be concerned about the one-time computational overhead?
Is there any third-party mechanism that could ease my problem? (I hate Photoshop and scaling all those images up and down.)
Any expert help or guidance would be a big favour!
Thanks in advance!
Krio
I don't really understand why you would do this: the system already does it for you. You don't have to supply different images for different display densities; the system just gives you the opportunity so you can make your app look its best. If you supply only a single image, the system will scale it appropriately based on the density of the handset.
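The scaling the framework applies automatically is simple arithmetic: asset pixel sizes are multiplied by densityDpi / 160, with mdpi (160 dpi) as the baseline. A sketch of that calculation (the 100 px asset size is hypothetical):

```java
// Sketch of the density scaling the Android framework applies
// automatically: pixels scale by densityDpi / 160 (mdpi baseline).
public class DensityScale {
    static int scaledSize(int mdpiPixels, int densityDpi) {
        // Round to the nearest pixel.
        return Math.round(mdpiPixels * densityDpi / 160f);
    }

    public static void main(String[] args) {
        int base = 100;                            // a 100px mdpi asset
        System.out.println(scaledSize(base, 120)); // ldpi -> 75
        System.out.println(scaledSize(base, 160)); // mdpi -> 100
        System.out.println(scaledSize(base, 240)); // hdpi -> 150
    }
}
```

This is exactly the up/down scaling your idea would reimplement by hand, which is why supplying density-specific images is an optional quality improvement rather than a requirement.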
As for help with scaling the images yourself for packaging, you could look at ImageMagick. It is a powerful, scriptable image-manipulation tool. You might need to spend a bit of time getting up to speed with it, but I am sure you could write a reusable script that converts your high-DPI images to lower-DPI ones by scaling down.
Take a look at this article. It describes how Android handles directory names for resources. Also take a look at this - how Android chooses the best match for a directory name. If you want to use the same resource for all DPIs, just place it in the default drawable folder.