Out of memory error, my app's fault? - Java

I have an application on the Android Market, in which exceptions and errors are caught and sent to me by ACRA.
But I receive quite a lot of out-of-memory errors,
in different kinds of classes: some in my app, some in general Java classes.
Does this always mean there is a problem in my app, or can it also be that the phone ran out of memory due to another process?
Will users also get a force-close dialog?
Additional Information
There is nothing memory-intensive in my app:
no images, no big chunks of data,
only a simple view, and, at its most intensive, a Mobclix ad.
I'm new to Java, so I may have a leak somewhere, but I find that hard to debug.
At this point I'm not even sure there is something wrong:
I get about 25-50 OOM errors daily, but compared to the 60,000 ads it shows a day
(I show only one or two ads each time it's started) that is not too much.
I receive errors like:
"java.lang.OutOfMemoryError
at org.apache.http.impl.io.AbstractSessionInputBuffer.init(AbstractSessionInputBuffer.java:79)
at org.apache.http.impl.io.SocketInputBuffer.<init>(SocketInputBuffer.java:93)
at android.net.http.AndroidHttpClientConnection.bind(AndroidHttpClientConnection.java:114)
at android.net.http.HttpConnection.openConnection(HttpConnection.java:61)
at android.net.http.Connection.openHttpConnection(Connection.java:378)
at android.net.http.Connection.processRequests(Connection.java:237)
at android.net.http.ConnectionThread.run(ConnectionThread.java:125)
"
"java.lang.OutOfMemoryError
at java.io.BufferedReader.<init>(BufferedReader.java:102)
at com.mobclix.android.sdk.Mobclix$FetchResponseThread.run(Mobclix.java:1422)
at com.mobclix.android.sdk.MobclixAdView$FetchAdResponseThread.run(MobclixAdView.java:390)
at java.util.Timer$TimerImpl.run(Timer.java:290)
"
"java.lang.OutOfMemoryError
at org.apache.http.util.ByteArrayBuffer.<init>(ByteArrayBuffer.java:53)
at org.apache.http.impl.io.AbstractSessionOutputBuffer.init(AbstractSessionOutputBuffer.java:77)
at org.apache.http.impl.io.SocketOutputBuffer.<init>(SocketOutputBuffer.java:76)
at android.net.http.AndroidHttpClientConnection.bind(AndroidHttpClientConnection.java:115)
at android.net.http.HttpConnection.openConnection(HttpConnection.java:61)
at android.net.http.Connection.openHttpConnection(Connection.java:378)
at android.net.http.Connection.processRequests(Connection.java:237)
at android.net.http.ConnectionThread.run(ConnectionThread.java:125)
"
So the main question is: am I leaking somewhere,
or can this be considered normal, because in a small percentage of cases the phone may be out of memory due to other applications running on it?

A common JVM problem is that only unreachable objects can be removed by the garbage collector. If you have large, long-lived objects, it's important to set fields that are no longer needed to null so that the objects they point to can be reclaimed. A classic problem is keeping something like a HashMap with a lot of values in it around when you don't need it any more, since every entry in the HashMap is chewing up memory.
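For illustration, here's a minimal sketch of the idea; the class and field names are made up:

import java.util.HashMap;
import java.util.Map;

class SessionCache {
    // A large map kept in a field: every entry stays reachable for as long
    // as this object does, so the garbage collector cannot reclaim it.
    private Map<String, byte[]> cache = new HashMap<>();

    void load() {
        for (int i = 0; i < 10_000; i++) {
            cache.put("key" + i, new byte[1024]);
        }
    }

    void done() {
        // Dropping the reference (or calling cache.clear()) makes all the
        // entries eligible for garbage collection.
        cache = null;
    }
}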

Have you used the allocation tracker in DDMS? It could help you find unexpected memory leaks.
http://developer.android.com/resources/articles/track-mem.html
(I haven't used it myself so far, though.)

As Thomas suggested, you really want to use DDMS to look at your memory usage.
Also, a very common source of leaks is the use of static variables - use them only if you know what you're doing.
Handling bitmaps can also get very expensive on Android. What does your app do? Also, do you have lots of references to UI elements? Any defined as static?
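To illustrate the static-reference pitfall mentioned above, here is a hedged sketch; the class and field names are invented, only the pattern matters:

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class LeakyActivity extends Activity {
    // BAD: a static field holding a View keeps the whole Activity and its
    // view hierarchy reachable after rotation or finish(), so the garbage
    // collector can never reclaim them.
    private static TextView sTitle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sTitle = new TextView(this);   // captures 'this' (the Activity)
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        sTitle = null;                 // clearing the reference avoids the leak
    }
}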

There are things that may be out of your control (memory on the phone is an example) but nonetheless you're responsible for the behavior of your application.
How you handle memory issues will influence how users view your application. If it plays well with other applications, users will be more likely to use it. If it doesn't, they won't.

What do you mean by "general Java" exceptions, and if these are unrelated to your piece of software, then why are you receiving them?
As you probably know, the Dalvik virtual machine only has a small amount of memory allotted to itself (and to your application). It is implemented this way to avoid the possibility of a process growing out of control and draining all of the available resources, which would make the phone unusable. So if your application performs many memory-intensive operations (like loading pictures) and you are not careful with your allocations (releasing them as soon as they are no longer needed), bizarre outcomes may be observed.
About the force close: since you are catching these exceptions, they should not crash your application, unless you have forgotten to re-instantiate something after catching an exception.
Maybe inspecting your code and eliminating unneeded memory allocations will prove helpful. Also, you can test as my boss does - he just freaks out pushing buttons at random until something crashes :D
EDIT
Since you say that there is nothing memory-expensive in your code (except perhaps the ads), you can do a simple check to see whether the whole system is low on memory when the error occurs, or whether it is your application that causes it. Have a look at the onLowMemory callback. It is called when the whole phone is low on memory.
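A minimal sketch of that check; the class name is made up, but onLowMemory() is the callback referred to above:

import android.app.Application;
import android.util.Log;

public class MyApplication extends Application {
    @Override
    public void onLowMemory() {
        super.onLowMemory();
        // Fired when the whole device, not just this process, is low on memory.
        Log.w("MyApp", "System-wide low-memory condition reported");
    }
}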

When you get an OutOfMemoryError, you can be sure it is your application, and not another one, that causes it. Each Android app runs in its own Dalvik VM with a maximum memory allocation of 16 MB.
If you do not use bitmaps (which are a frequent source of memory leaks), you also have to check that you handle orientation changes correctly, that is, without keeping in memory any reference to UI-related objects.

Related

Garbage Collector not freeing "trash memory" as it should in an Android application

Hello!
I'm a beginner Java and Android developer and I've been having trouble lately dealing with my app's memory management. I will break this text into sections, in order to make it clearer and readable.
A brief description of my app
It's a game that consists of several stages (levels). Each stage has a starting point for the player and an exit, which leads the player to the next stage. Each stage has its own set of obstacles. Currently, when the player reaches the final stage (I've only created 4 so far) he/she automatically goes back to the first stage (level 1).
An abstract class called GameObject (which extends android.view.View) defines the base structure and behaviour for the player and all the other objects (obstacles, etc.) present in the game. All the objects (which are, essentially, views) are drawn in a custom view I created (it extends FrameLayout). The game logic and the game loop are handled by a side thread (gameThread). The stages are created by retrieving metadata from XML files.
The problem
Besides all the possible memory leaks in my code (all of which I've been working hard to find and solve), there is a strange phenomenon related to the garbage collector happening. Instead of describing it with words and risking getting you confused, I will use images. As Confucius said, "An image is worth a thousand words". Well, in this case, I've just saved you from reading 150,000 words, since my GIF below has 150 frames.
Description: the first image represents my app's memory usage when "stage 1" is first loaded. The second image (GIF) first shows my app's memory usage timeline when "stage 1" is loaded for the second time (this happens, as described earlier, when the player beats the last stage), followed by four garbage collections forcefully initiated by me.
As you might have noticed, there is a huge difference (almost 50 MB) in memory usage between the two situations. When "Stage 1" is first loaded, when the game starts, the app is using 85 MB of memory. When the same stage is loaded for the second time, a little bit later, the memory usage is already at 130 MB! That's probably due to some bad coding on my part, and I'm not here because of this. Have you noticed how, after I forcefully performed 2 (actually 4, but only the first 2 mattered) garbage collections, the memory usage went back to its "normal state" (the same memory usage as when the stage was first loaded)? That's the weird phenomenon I was talking about.
The question
If the garbage collector is supposed to remove from memory objects that are no longer being referenced (or, at least, have only weak references), why is the "trash memory" that you saw above being removed only when I forcefully call the GC and not during the GC's normal executions? I mean, if the garbage collections manually initiated by me could remove this "trash", then the normal GC executions should be able to remove it as well. Why isn't that happening?
I've even tried to call System.gc() when the stages are being switched, but, even though the garbage collection happens, this "trash" memory isn't removed like it is when I manually perform the GC. Am I missing something important about how the garbage collector works or about how Android implements it?
Final considerations
I've spent days searching, studying and making modifications on my code but I could not find out why this is happening. StackOverflow is my last resort. Thank you!
NOTE: I was going to post some possibly relevant part of my app's source code, but since the question is already too long I will stop here. If you feel the need to check some of the code, just let me know and I will edit this question.
What I have already read:
How to force garbage collection in Java?
Garbage collector in Android
Java Garbage Collection Basics by Oracle
Android Memory Overview
Memory Leak Patterns in Android
Avoiding Memory Leaks in Android
Manage your app's memory
What you need to know about Android app memory leaks
View the Java heap and memory allocations with Memory Profiler
LeakCanary (memory leak detection library for Android and Java)
Android Memory Leak and Garbage Collection
Generic Android Garbage Collection
How to clear dynamically created view from memory?
How References Work in Android and Java
Java Garbage Collector - Not running normally at regular intervals
Garbage Collection in android (Done manually)
... and more I couldn't find again.
Garbage collection is complicated, and different platforms implement it differently. Indeed, different versions of the same platform implement garbage collection differently. (And more ... )
A typical modern collector is based on the observation that most objects die young; i.e. they become unreachable soon after they are created. The heap is then divided into two or more "spaces"; e.g. a "young" space and an "old" space.
The "young" space is where new objects are created, and it is collected frequently. The "young" space tends to be smaller, and a "young" collection happens quickly.
The "old" space is where long-lived objects end up, and it is collected infrequently. On "old" space collection tends to be more expensive. (For various reasons.)
Object that survive a number of GC cycles in the "new" space get "tenured"; i.e they are moved to the "old" space.
Occasionally we may find that we need to collect the new and old spaces at the same time. This is called a full collection. A full GC is the most expensive, and typically "stops the world" for a relatively long time.
(There are all sorts of other clever and complex things ... which I won't go into.)
Your question is why the space usage doesn't drop significantly until you call System.gc().
The answer is basically that this is the efficient way to do things.
The real goal of collection is not to free as much memory all of the time. Rather, the goal is to ensure that there is enough free memory when it is needed, and to do this either with minimum CPU overheads or a minimum of GC pauses.
So in normal operation, the GC will behave as above: do frequent "young" space collections and less frequent "old" space collections, and the collections will run "as required".
But when you call System.gc() the JVM will typically try to get back as much memory as possible. That means it does a "full gc".
Now, I think you said it takes a couple of System.gc() calls to make a real difference; that could be related to the use of finalize methods or Reference objects or similar. It turns out that finalizable objects and References are processed after the main GC has finished, by a background thread. The objects are only actually in a state where they can be collected and deleted after that. So another GC is needed to finally get rid of them.
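As an illustration (not a recommendation for production code), that is why a sequence like the following is sometimes needed before such objects actually disappear:

System.gc();                 // first pass: finalizable objects are discovered
System.runFinalization();    // let the finalizer thread process them
System.gc();                 // second pass: the now-finalized objects can be reclaimed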
Finally, there is the issue of the overall heap size. Most VMs request memory from the host operating system when the heap is too small, but are reluctant to give it back. The Oracle collectors note the free space ratio at the end of successive "full" collections. They only reduce the overall size of the heap if the free space ratio is "too high" after a number of GC cycles. There are a number of reasons that the Oracle GCs take this approach:
Typical modern GCs work most efficiently when the ratio of garbage to non-garbage objects is high. So keeping the heap large aids efficiency.
There is a good chance that the application's memory requirement will grow again. But the GC needs to run to detect that.
A JVM repeatedly giving memory back to the OS and re-requesting it is potentially disruptive for the OS virtual memory algorithms.
It is problematic if the OS is short of memory resources; e.g. JVM: "I don't need this memory. Have it back", OS: "Thanks", JVM: "Oh ... I need it again!", OS: "Nope", JVM: "OOME".
Assuming that the Android collector works the same way, that is another explanation for why you had to run System.gc() multiple times to get the heap size to shrink.
And before you start adding System.gc() calls to your code, read Why is it bad practice to call System.gc()?.
I had the same problem in my app. I see you have understood the GC; try watching this video on why the GC is needed. Try adding this code to your app's Application class (the class for the app as a whole, just as each Activity has its own class), inside the override of onCreate (the code is in Kotlin).
Here is the whole class:
import android.app.Application
import kotlin.concurrent.thread

open class _appName_() : Application() {

    private var appKilled = false

    override fun onCreate() {
        super.onCreate()
        // Background thread that requests a GC every 6 seconds
        // until the app is terminated.
        thread {
            while (!appKilled) {
                Thread.sleep(6000)
                System.runFinalization()
                Runtime.getRuntime().gc()
                System.gc()
            }
        }
    }

    override fun onTerminate() {
        super.onTerminate()
        appKilled = true
    }
}
This bit of code makes the GC get called every 6 seconds.

Java Used Heap RAM usage peaks, how can I avoid them?

First of all, I can't really show code; I am sorry, but this software belongs to the company I work for, not me. I will try to explain my problem the best I can.
I am developing a little application based on JavaFX that shows values in LineCharts. These are refreshed every 800-1000 ms (0.8-1 seconds), and I call System.gc() every time I refresh (around once every 0.8-1 seconds).
I am having RAM usage peaks every 10-20 seconds:
In this specific example, this doesn't look like a problem, but in some cases it goes up to 700-750 MB (Making the Heap Size go up to 1.2-1.3 GB, and taking a long time to release it back to the OS).
I know about (and currently use, without noticing any huge improvement) heap tuning parameters, but I don't think they can fix the problem here; they help at specific points and slightly reduce memory consumption, but they do not solve the problem.
Any ideas on how I can design my code so as not to have these RAM peaks? I don't have a process that uses memory and releases it every 10-20 seconds, so I assume something else is allocating and releasing that amount of RAM (maybe JavaFX?). JVisualVM only says int[], byte[] and char[], and I am not even using Integer values in my code (I work with Double values in this software).
Thank you all.
Sorry, but the only reasonable answer here: you have to do profiling in order to understand where those peaks are coming from. You have to identify the root cause of this problem, and that is nothing we can help with.
This program runs in your setup, with your data, and shows behavior that needs to be analyzed over time.
My guess would be that your program is creating large amounts of objects that are thrown away quickly afterwards (I guess you have those calls to System.gc() in there for a reason). And guess what: creating garbage at a high rate is a bad idea. It keeps your GC constantly spinning, and it (obviously?!) contributes to high memory load.
So, as said: you have to identify the root cause and fix that. In that sense: you have to study the tooling you are using. An alternative to profiling might be to have the GC log its activities and analyze that output. See here for some information on that.
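For example, GC logging can be switched on with standard HotSpot flags; the exact flags depend on the JVM version, and yourapp.jar is just a placeholder:

java -XX:+PrintGCDetails -Xloggc:gc.log -jar yourapp.jar     (Java 8 and earlier)
java -Xlog:gc*:file=gc.log -jar yourapp.jar                  (Java 9 and later, unified logging)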
I found the solution:
Both MrSmith42 and GhostCat pointed out that calling System.gc() doesn't really help me here. They were right, in fact, that was the problem.
Removing System.gc() solved the problem for me
Thank you, MrSmith42 and GhostCat.
System.gc() does not trigger a garbage collection directly; it is more like a hint to the VM that you think performing a garbage collection would be a good idea. What the VM does is its own decision, based on the implementation.
Only if the VM runs out of memory will it be sure to perform a garbage collection, but it does that even without you calling System.gc().
A quite long discussion about this topic can be found here:
When does System.gc() do anything

Check if there is enough memory before allocating byte array

I need to load a file into memory. Before I do that, I want to make sure there is enough memory left in my VM. If not, I would like to show an error message. I want to avoid the OutOfMemoryError.
Approach:
Get filesize of my file
Use Runtime.getRuntime().freeMemory()
Check if it fits
Would this work or do you have any other suggestions?
The problem with any "check first then do" strategy is that there may be changes between the "check" and the "do" that render the entire thing useless.
A "try then recover" strategy is almost always a better idea and, unfortunately, that means trying to allocate the memory and handling the exception. Even if you do the "check first" option, you should still code for the possibility that the allocation may fail.
A classic example of that would be checking that a file exists before opening it. If someone were to delete the file between your check and open, you'll get an exception regardless of the fact the file was there very recently.
Now, I don't know why you have an aversion to catching the exception, but I'd urge you to rethink it. Since Java relies heavily on exceptions, they're generally accepted as a good way to do things when you don't actually control what it is you're attempting (such as opening files or allocating memory).
If, as it seems from your comments, you're worried about the out-of-memory affecting other threads, that shouldn't be the case if you try to allocate one big area for the file. If you only have 400M left and you ask for 600, your request will fail but you should still have that 400M left.
It's only if you nickel-and-dime your way up to the limit (say, trying 600 separate 1M allocations) that other threads would start to feel the pinch, after you'd done about 400. And that would only happen if you didn't release those 400 in a hurry.
So perhaps a possibility would be to work out how much space you need and make sure you allocate it in one hit. Either it'll work or it won't. If it doesn't, your other threads will be no worse off.
I suppose you could use your suggested method to try and make sure the allocation left a bit of space for the other threads (say 100M or 10% or something like that), if you're really concerned. But I'd just go ahead and try anyway. If your other threads can't do their work because of low memory, there's ample precedent to tell the user to provide more memory for the VM.
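A minimal sketch of that "try then recover" approach; the class and method names are invented:

class FileLoader {
    // Attempt the single big allocation and handle failure, instead of
    // trying to predict free memory beforehand.
    static byte[] allocateOrNull(int fileSize) {
        try {
            return new byte[fileSize];   // all or nothing
        } catch (OutOfMemoryError e) {
            System.err.println("Not enough memory to load the file");
            return null;                 // caller can show an error dialog
        }
    }
}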
Personally, I would advise against loading a massive file directly into memory; rather, try to load it in chunks or use some sort of temp file to store intermediate data.
You may want to look at the FileChannel.map(FileChannel.MapMode, long, long) method. This allows mapping a file (think POSIX mmap) without filling the heap. The operating system will (hopefully successfully) take care of the memory for you.
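A short sketch of that approach using the NIO API (the class and method names here are illustrative):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class MappedFileExample {
    // Maps the whole file read-only; the OS pages the contents in and out,
    // so they never have to fit on the Java heap.
    static MappedByteBuffer mapReadOnly(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}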

Java - programmatically reduce application load when runs out of memory

No, really, that's what I'm trying to do. The server is holding onto 1600 users - it's a long-running back-end process, not a web server - but sometimes the users generate more activity than usual, so it needs to cut its load down, specifically when it runs out of "resources", which pretty much means heap memory. This is a big design question - how do you design this?
This might likely involve preventing OOM instead of recovering from them. Ideally
if(nearlyOutOfMemory()) throw new MyRecoverableOOMException();
might happen.
But I don't really know what that nearlyOutOfMemory() function might look like.
Split the server into shards, each holding fewer users but residing in different physical machines.
If you have lots of caches, try to use soft references, which get cleared out when the VM runs out of heap.
In any case, profile, profile, profile first to see where CPU time is consumed and memory is allocated and held onto.
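A minimal sketch of the soft-reference cache idea mentioned above (the class name is made up):

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

class SoftCache<K, V> {
    // Values are held only softly, so the GC may discard them under memory
    // pressure instead of letting the process run out of heap.
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get();   // null if already collected
    }
}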
I have actually asked a similar question about handling OOM, and it turns out that there are not too many options to recover from it. Basically you can:
1) Invoke an external shell script (-XX:OnOutOfMemoryError="cmd args;cmd args") which would trigger some action. The problem is that if the OOM happened in a thread which doesn't have a decent recovery strategy, you're doomed.
2) Define a threshold for the old generation which technically isn't OOM but a few steps ahead of it, say 80%, and act if the threshold has been reached. More details here.
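A sketch of option 2 using the standard java.lang.management API; the class name, the callback, and the 80% figure are just for illustration:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

class MemoryWatcher {
    // Registers a listener that fires when any heap pool crosses 80% of its
    // maximum, well before an actual OutOfMemoryError.
    static void install(Runnable onLowMemory) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.8));
                }
            }
        }
        NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(notification.getType())) {
                onLowMemory.run();   // e.g. shed load, reject new work
            }
        }, null, null);
    }
}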
You could use Runtime.getRuntime() and the following methods:
freeMemory()
totalMemory()
maxMemory()
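For instance, a rough availability estimate combines these three (only approximate, since other threads keep allocating; the class name is invented):

class HeapHeadroom {
    static long approxAvailableBytes() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();  // heap currently occupied
        return rt.maxMemory() - used;                    // free now + room the heap can still grow
    }
}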
But I agree with the other posters: using SoftReference, WeakReference or a WeakHashMap will probably save you the trouble of manually recovering from that condition.
A throttling, resource-regulating servlet filter may be of use too. I did encounter the DoSFilter of Jetty/Eclipse.

How to memory profile in Java?

I'm still learning the ropes of Java, so sorry if there's an obvious answer to this. I have a program that is taking a ton of memory and I want to figure out a way to reduce its usage, but after reading many SO questions I have the idea that I need to prove where the problem is before I start optimizing it.
So here's what I did: I added a breakpoint to the start of my program and ran it, then I started VisualVM and had it profile the memory (I also did the same thing in NetBeans just to compare the results, and they are the same). My problem is that I don't know how to read them: the biggest area just says char[] and I can't see any code or anything (which makes sense, because VisualVM is connecting to the JVM and can't see my source, but NetBeans also does not show me the source as it does when doing CPU profiling).
Basically what I want to know is which variable (and hopefully more details, like in which method) all the memory is being used in, so I can focus on working there. Is there an easy way to do this? Right now I am using Eclipse and Java to develop (and installed VisualVM and NetBeans specifically for profiling, but am willing to install anything else that you feel gets this job done).
EDIT: Ideally, I'm looking for something that will take all my objects and sort them by size (so I can see which one is hogging memory). Currently it returns generic information such as String[] or int[], but I want to know which object it's referring to, so I can work on getting its size more optimized.
Strings are problematic
Basically, in Java, String references (things that use char[] behind the scenes) dominate most business applications memory-wise. How they are created determines how much memory they consume in the JVM.
They are fundamental to most business applications as a data type, and they are also one of the most memory-hungry. This isn't just a Java thing: String data types take up lots of memory in pretty much every language and runtime library, because at the least they are arrays of one byte per character or, at worst (Unicode), arrays of multiple bytes per character.
Once when profiling CPU usage on a web app that also had an Oracle JDBC dependency I discovered that StringBuffer.append() dominated the CPU cycles by many orders of magnitude over all other method calls combined, much less any other single method call. The JDBC driver did lots and lots of String manipulation, kind of the trade off of using PreparedStatements for everything.
What you are concerned about you can't control, not directly anyway
What you should focus on is what is in your control, which is making sure you don't hold on to references longer than you need to, and that you are not duplicating things unnecessarily. The garbage collection routines in Java are highly optimized, and if you learn how their algorithms work, you can make sure your program behaves in the optimal way for those algorithms to work.
Java Heap Memory isn't like manually managed memory in other languages, those rules don't apply
What are considered memory leaks in other languages aren't the same thing/root cause as in Java with its garbage collection system.
Most likely in Java memory isn't consumed by one single uber-object that is leaking ( dangling reference in other environments ).
It is most likely lots of smaller allocations, caused by StringBuffer/StringBuilder objects that are not sized appropriately on first instantiation and then have to automatically grow their char[] arrays to hold subsequent append() calls.
These intermediate objects may be held around longer than expected by the garbage collector because of the scope they are in and lots of other things that can vary at run time.
EXAMPLE: the garbage collector may decide that there are candidates, but because it considers that there is plenty of memory still to be had that it might be too expensive time wise to flush them out at that point in time, and it will wait until memory pressure gets higher.
The garbage collector is really good now, but it isn't magic, if you are doing degenerate things, it will cause it to not work optimally. There is lots of documentation on the internet about the garbage collector settings for all the versions of the JVMs.
These unreferenced objects may simply not yet have reached the point at which the garbage collector decides to expunge them from memory, or there could be references to them held by some other object (a List, for example) that you don't realize still points to them. This is what is most commonly referred to as a leak in Java - a reference leak, more specifically.
EXAMPLE: If you know you need to build a 4K String using a StringBuilder, create it with new StringBuilder(4096), not the default, which is only 16 characters and will immediately start creating garbage that can represent many times what you think the object should be size-wise.
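For instance (sizes and loop are illustrative):

// Growing from the default capacity forces repeated char[] reallocations and
// copies; pre-sizing does the allocation once.
StringBuilder sb = new StringBuilder(4096);
for (int i = 0; i < 100; i++) {
    sb.append("some ~40-character chunk of generated output\n");
}
String result = sb.toString();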
You can discover how many objects of which types are instantiated with VisualVM; this will tell you what you need to know. There isn't going to be one big flashing light that points at a single instance of a single class and says "This is the big memory consumer!" - unless there is only one instance of some char[] that you are reading some massive file into, and that is unlikely, because lots of other classes use char[] internally; and in that case you would pretty much have known it already.
I don't see any mention of OutOfMemoryError
You probably don't have a problem in your code; the garbage collection system just might not be under enough pressure to kick in and deallocate objects that you think it should be cleaning up. What you think is a problem probably isn't, unless your program is crashing with an OutOfMemoryError. This isn't C, C++, Objective-C, or any other manually memory-managed language/runtime. You don't get to decide what is in memory at the level of detail you are expecting to.
In JProfiler, you can go to the heap walker and activate the biggest-objects view. You will see the objects that retain the most memory. "Retained" memory is the memory that would be freed by the garbage collector if you removed the object.
You can then open the object nodes to see the reference tree of the retained objects. Here's a screenshot of the biggest-objects view:
Disclaimer: My company develops JProfiler
I would recommend capturing heap dumps and using a tool like Eclipse MAT that lets you analyze them. There are many tutorials available. It provides a view of the dominator tree to provide insight into the relationships between the objects on the heap. Specifically for what you mentioned, the "path to GC roots" feature of MAT will tell you where the majority of those char[], String[] and int[] objects are being referenced. JVisualVM can also be useful in identifying leaks and allocations, particularly by using snapshots with allocation stack traces. There are quite a few walk-throughs of the process of getting the snapshots and comparing them to find the allocation point.
The Java JDK comes with JVisualVM in its bin folder. Once your application (an application server, for example) is running, you can start VisualVM and connect it to your local process; it will show you memory allocations and let you perform a heap dump.
If you use VisualVM to check your memory usage, it focuses on the data, not the methods. Maybe your big char[] data is caused by many String values? Unless you are using recursion, the data will not be from local variables. So you can focus on the methods that insert elements into large data structures. To find out which precise statements cause your "memory leakage", I suggest you additionally:
read Josh Bloch's Effective Java, Item 6 (Eliminate obsolete object references), and
use a logging framework and log instance creations at the highest verbosity level.
There are generally two distinct approaches to analyse Java code to gain an understanding of its memory allocation profile. If you're trying to measure the impact of a specific, small section of code – say you want to compare two alternative implementations in order to decide which one gives better runtime performance – you would use a microbenchmarking tool such as JMH.
While you can pause the running program, the JVM is a sophisticated runtime that performs a variety of housekeeping tasks and it's really hard to get a "point in time" snapshot and an accurate reading of the "level of memory usage". It might allocate/free memory at a rate that does not directly reflect the behaviour of the running Java program. Similarly, performing a Java object heap dump does not fully capture the low-level machine specific memory layout that dictates the actual memory footprint, as this could depend on the machine architecture, JVM version, and other runtime factors.
Tools like JMH get around this by repeatedly running a small section of code, and observing a long-running average of memory allocations across a number of invocations. E.g. in the GC profiling sample JMH benchmark the derived *·gc.alloc.rate.norm metric gives a reasonably accurate per-invocation normalised memory cost.
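To make that concrete, a JMH benchmark run with the GC profiler attached might look roughly like this; the benchmark body is a made-up example, while the annotations and GCProfiler come from JMH itself:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.profile.GCProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class AllocationBenchmark {

    @Benchmark
    public String buildString() {
        // The code whose per-invocation allocation cost you want to measure.
        return new StringBuilder(64).append("hello, ").append("world").toString();
    }

    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                .include(AllocationBenchmark.class.getSimpleName())
                .addProfiler(GCProfiler.class)   // reports gc.alloc.rate.norm etc.
                .build();
        new Runner(opts).run();
    }
}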
In the more general case, you can attach a profiler to a running application and get JVM-level metrics, or perform a heap dump for offline analysis. Some commonly used tools for profiling full applications are Async Profiler and the newly open-sourced Java Flight Recorder in conjunction with Java Mission Control to visualise results.
