How good is a try/catch at catching out-of-memory exceptions? I don't have any experience writing software that manages its own memory at a low level, but I could imagine an approach to doing that.
I don't know how Java actually handles memory exceptions. Is it possible that the program managing memory itself runs out of memory? When could I try to catch an out-of-memory exception and fail to catch it?
Thanks!
You do not have to worry about any implicit allocations that happen as part of catching the throwable, per se. It is always possible to catch them. The JVM even keeps pre-allocated instances of OOM errors available for when trouble happens, just so that they themselves never fail to allocate.
However, there may well be secondary concerns:
Any allocation could be the proverbial straw that breaks the camel's back, so you likely won't know just where your code is going to throw an OOM error. It could even happen in a completely different thread from the one you're doing your memory-consuming work in, thus crashing completely different parts of the JVM.
Depending on just what you're going to do when you catch it, you may be allocating more memory (such as a LogRecord or StringBuilder, the latter of which could even happen implicitly as part of syntactical string concatenation), which could run out of memory again.
These concerns only apply if you're running out of memory "the normal way", however; that is, by allocating lots of "normal" objects. In contrast, if the operation running out of memory is, for instance, the single allocation of, say, a 10 GB array, then they do not pose a problem.
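To make that last case concrete, here is a minimal sketch (the class name and sizes are illustrative, not from the discussion above) of the single-oversized-allocation scenario, where catching the error is comparatively safe because the failed allocation simply never happened and no other state was touched:

```java
public class HugeAllocDemo {
    // Attempt one very large allocation; on failure, fall back gracefully.
    // Nothing else was in flight, so the OOME here has no collateral damage.
    static long[] tryAllocate(int elements) {
        try {
            return new long[elements];
        } catch (OutOfMemoryError e) {
            return null; // caller can retry with a smaller size
        }
    }

    public static void main(String[] args) {
        // Close to the maximum array length: roughly 17 GB of longs,
        // which will fail on any typical default heap.
        long[] big = tryAllocate(Integer.MAX_VALUE - 8);
        System.out.println(big == null ? "allocation failed, recovered" : "allocated");
    }
}
```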
How good is a try catch at catching out of memory exceptions?
It works just fine at one level ... but at another level it can be risky, and / or futile. See below.
Is it possible that the program managing memory runs out of memory?
The "program" managing memory is the garbage collector. AFAIK, it always has enough memory for its own purposes; if it didn't, it would have no alternative but to hard-crash the JVM.
When could I try to catch an out of memory exception and it would fail to catch the exception?
Only if the JVM crashes. Or, if the OOME is thrown on a different thread stack to the one where you are trying to catch it.
OK, so why is catching OOME risky and/or futile?
The first reason is that an OOME can be thrown on any thread stack without any warning. When you catch the exception, the handler (typically) has no way to know what was happening "up stack", and exactly how it failed. Hence, it has no way of knowing if the application's execution state has been damaged or compromised. (Was the thread in the middle of updating something important? Was it about to notify some other thread ... and the thread will now never get the notification?)
The second reason is that OOMEs are often the result of a Java storage leak ... caused by something hanging onto objects when it shouldn't. If you catch an OOME and attempt to resume ... when the problem was caused by a leak ... the chances are that the offending objects will still be reachable, and another OOME will follow pretty soon. In other words, your application could get stuck in a state where it is continually throwing and recovering from OOMEs. At best this is going to kill performance ... because the last thing that the JVM typically does before an OOME is to perform a full (stop-the-world) garbage collection, and that takes a significant time.
Note that this is not to say that you should never catch OOMEs. Indeed, catching an OOME, reporting it, and then shutting down is often a good strategy.
No, the thing that is risky / futile is to catch the OOME and then attempt to recover and continue running.
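A minimal sketch of that report-and-shut-down strategy (the class and message are illustrative); the error message is built ahead of time so the handler itself does not need a fresh allocation:

```java
public class OomeGuard {
    // Pre-built message: the handler must avoid allocating on the OOME path.
    private static final String MSG = "fatal: out of memory, shutting down";

    public static void runGuarded(Runnable task) {
        try {
            task.run();
        } catch (OutOfMemoryError e) {
            System.err.println(MSG);
            // halt() skips shutdown hooks and finalizers, which might
            // themselves need memory we no longer have.
            Runtime.getRuntime().halt(1);
        }
    }
}
```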
In Java, out of memory is not an exception, it is an error. There is little you can do about it at the program level, since it is a system limitation, but you can stave it off by increasing the default heap size:
export JVM_ARGS="-Xmx1024m -XX:MaxPermSize=256m"
An OutOfMemoryError is an error, not an exception. Errors are specifically designed to distinguish conditions that your program can catch, and hopefully recover from, from low-level errors that are not recoverable.
You only catch something if you think something can be done to recover from that exception/error, and there is essentially nothing you can do to recover from an OutOfMemoryError.
First of all, out of memory is an Error in Java, not an Exception. We normally handle only exceptions in Java, using a try-catch construct or a throws clause.
Both the Error and Exception classes extend Throwable, but an Error represents an irrecoverable condition while an Exception is handleable.
For your situation, read up on garbage collection in Java.
Related
Suppose a thread in my program is reading a file from disk, and it encounters an Error (OutOfMemoryError) and the thread is killed without a chance to execute the stream-closing code in finally. Will that stream stay open even after the thread is killed?
The finally block will still be executed. However, if the JVM is out of memory, there's a chance that there will be a problem closing the stream, resulting in another out of memory error thrown from within the finally block. If that happens, the stream will likely not be closed until the JVM exits.
In most cases it should be closed, but it mostly depends on the memory left when hitting the close method and on any reference retention you have on the stream.
OOMs are raised when trying to use more heap memory than the JVM is allowed to, but that doesn't mean you have no memory available at all. After an OOM is raised, a lot of memory can be available, for many reasons: the process may just have tried to allocate one BIG array that doesn't fit into memory, many intermediate allocated objects may have been discarded due to the raised exception, the GC may have run a deeper collection than the usual incremental ones, stack memory can be used to process the stream closing, etc.
Then, most streams are closed when garbage collected. Generally, you open and close a stream in the scope of a method, so when it exits there is no more reference to the stream. The reference becomes eligible for garbage collection and the stream may close automatically (though you have to wait for the GC to collect it).
Most software good practice is based on "best effort". Don't try to think/do too much. Make the "best effort" to clean up and let it crash.
What are you supposed to do about a non-closed stream while your entire JVM is going away?
In your case ("stream handling"), "best effort" is done through the try-with-resources statement.
If you are worried about the overhead of non-closed streams, you just have to use the try-with-resources statement (the "best effort" applied) and MUST focus on reference retention, which is the real cause of "memory leaks" in Java (as most streams are closed when garbage collected).
The real problem with "non-closed streams" is the limitation the OS applies to the number of file descriptors/handles a process can have open at a given time.
Threads aren't supposed to be "killed", and if one is, you may quickly run into trouble as monitors aren't freed (which will cause more damage throughout your JVM).
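The best-effort stream handling described above might look like this sketch (the class name is illustrative); close() runs whether the body exits normally or an OutOfMemoryError propagates out of it:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BestEffortClose {
    // try-with-resources: the reader is closed on normal exit and on any
    // Throwable, including OutOfMemoryError, with no explicit finally block.
    static String firstLine(String path) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            return in.readLine();
        }
    }
}
```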
As there is little guarantee about when and even if finalizers run and finalizers are almost considered a smell nowadays - is there any way to persuade the JVM to completely skip all finalization processes?
I ask because we have a mammoth application which, when moved to a newer JVM (not sure which at this stage) is brought to its knees by what looks very much like the known problems with finalisers (exceptions being thrown and therefore very slow GC).
Added
There is some discussion on Troubleshooting a java memory leak: finalization? where it is suggested that the primary problem arises when exceptions are thrown within finalizers because that slows down the finalization process dramatically.
My issue shows as a dramatic slow-down when memory becomes low and analysis of heap dumps show a large number of Finalizer objects (over 10,000,000) - suggesting to me that the slowdown could be their fault because they are delaying the GC. Obviously I may be wrong.
I do not have the power to demand a refactor.
Is there any way to persuade the JVM to completely skip all finalization processes?
In a word: no.
But unless a large proportion of your objects have finalize methods and/or the finalize methods are particularly expensive, I think that they are unlikely to make GC "very slow". I expect the problem is something else.
I suggest that you turn on GC logging to try and get a better picture of what is actually happening.
But I also agree, that refactoring the code to get rid of the finalize() methods would probably be a good thing in the long run. (There are very few situations where using finalize is genuinely the best solution.)
UPDATE - your new evidence is pretty convincing, though not a proof!
I do not have the power to demand a refactor.
Then, I suggest you place the evidence at the feet of the people who do :-).
Alternatively, you could add an exception handler to the suspect finalize methods to see if they are throwing exceptions. (And if they are, then change them to avoid the exceptions being thrown ...)
But the bottom line is that if finalization is the real cause of your performance problems then the best (and probably the only) way to cure them is to change the code.
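Guarding a suspect finalize method, as suggested above, might look like this sketch (the class and its release() method are hypothetical); an exception escaping finalize() is discarded by the JVM anyway, but producing it can make finalization dramatically slower:

```java
public class NoisyResource {
    static volatile boolean cleaned = false;

    // Hypothetical cleanup that may throw at an awkward time.
    void release() throws Exception {
        throw new Exception("device already gone");
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            release();
        } catch (Throwable t) {
            // Swallow it here rather than letting it slow the finalizer thread.
            cleaned = true;
        } finally {
            super.finalize();
        }
    }
}
```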
It is possible to suppress finalization on certain objects. It does not require bytecode manipulation as one commenter noted.
The procedure is described here and source code is provided. The class java.lang.ref.Finalizer is responsible for maintaining a list of objects that have not yet been finalized. To suppress finalization of your objects of interest, it suffices to use reflection APIs to acquire a lock on the field lock, iterate over the linked list unfinalized that the Finalizer class maintains, and remove your object from this list.
I've tried this method to safely instantiate custom-serialized objects while bypassing constructor invocation, and avoiding problems when the finalizer would be invoked with normal GC. Before applying this method my JVM would break with hard faults in the finalizer thread; since applying this method the faults do not occur.
You cannot switch off finalizers in Java; all you can do is write code that facilitates GC :-).
I have monitored my Java application with a profiler to find a memory leak, and I found the class taking almost 80% of memory, which is
java.lang.ref.Finalizer
I then googled the above class and found a great article:
http://www.fasterj.com/articles/finalizer1.shtml
Now, can anyone suggest how I can increase the FinalizerThread's priority so those objects get collected in GC?
One more thing: I am facing this issue on Linux with kernel versions Linux 2.6.9-5.ELsmp (i386) and Linux 2.6.18-194.17.4.el5 (i386), but it works fine (without an OOM error) on Linux 2.6.18-128.el5PAE (i386).
Is this issue because of the Linux kernels?
Is there any JVM variable to raise the FinalizerThread's priority?
Thanks in advance.
To answer the question literally, you can do this. However, as outlined below, it is likely to be pointless, especially as the thread already has a high priority.
for (Thread t : Thread.getAllStackTraces().keySet()) {
    if (t.getName().equals("Finalizer")) {
        System.out.println(t);
        t.setPriority(Thread.MAX_PRIORITY);
        System.out.println(t);
    }
}
prints
Thread[Finalizer,8,system]
Thread[Finalizer,10,system]
Unless you are using 100% of all your cores, or close to it, the priority doesn't matter, because even the lowest-priority thread will get as much CPU as it wants.
Note: On Linux, it will ignore raised priorities unless you are root.
Instead you should reduce the work the finalizer is doing. Ideally it shouldn't have anything to do. One cause of high load in the finalizer is creating resources which should be closed but are being discarded. (leaving the finalizer to close the resource instead)
In short, you should try and determine what resources are being finalized and make sure they don't need to do anything when finalize() is called, ideally don't use this method at all.
It is possible that resources take slightly longer to close on the older Linux kernels. I would check that the hardware is identical, as slower hardware could mean it takes longer to clean up resources. (But the real fix is to ensure the finalizer doesn't need to do this work at all.)
The finaliser thread should rarely be doing much work. The resources should already have been released. Make sure your code is handling resources in the standard way.
Java SE 7:
try (final Resource resource = acquire()) {
    use(resource);
}
Pre-Java SE 7:
final Resource resource = acquire();
try {
    use(resource);
} finally {
    resource.release();
}
As Tom Hawtin points out, you can get hold of the FinalizerThread by creating a finalize() method that calls Thread.currentThread() and then stashes it in a static. You can probably change its priority too.
But there's a good chance it won't do any good. There is likely to only be one finalizer thread. And the problem is likely to be either that:
the thread can't keep up because there's simply too much work to be done, or
the thread is being blocked by something you are doing in the finalizer methods.
(And, I'd expect the finalizer thread to already be marked as high priority.)
But either way, I think that a better solution is to get rid of the finalize() methods. I bet that they are doing something that is either unnecessary ... or dodgy. In particular, using finalizers to reclaim resources from dropped objects is a poor way to solve that particular problem. (See Tom Hawtin's answer.)
I have an application on the Android Market, in which exceptions and errors are caught and sent to me by ACRA.
But I receive quite a lot of out-of-memory errors,
in different kinds of classes... some in my app, some in general Java code.
Does this always mean there is a problem in my app, or can it also be that the phone ran out of memory due to another process?
Will users also get a force-close dialog?
Additional Information
There is nothing memory-intensive in my app:
no images, no big chunks of data,
only a simple view, and at most intensive a Mobclix ad.
I'm new to Java, so I may have a leak somewhere, but I do find it hard to debug that.
But at this point I'm not even sure there is something wrong.
I get about 25-50 OOM errors daily, but compared to the 60,000 ads it shows a day
(I show only 1 or 2 ads each time it's started), that is not too much.
I receive errors like:
"java.lang.OutOfMemoryError
at org.apache.http.impl.io.AbstractSessionInputBuffer.init(AbstractSessionInputBuffer.java:79)
at org.apache.http.impl.io.SocketInputBuffer.<init>(SocketInputBuffer.java:93)
at android.net.http.AndroidHttpClientConnection.bind(AndroidHttpClientConnection.java:114)
at android.net.http.HttpConnection.openConnection(HttpConnection.java:61)
at android.net.http.Connection.openHttpConnection(Connection.java:378)
at android.net.http.Connection.processRequests(Connection.java:237)
at android.net.http.ConnectionThread.run(ConnectionThread.java:125)
"
"java.lang.OutOfMemoryError
at java.io.BufferedReader.<init>(BufferedReader.java:102)
at com.mobclix.android.sdk.Mobclix$FetchResponseThread.run(Mobclix.java:1422)
at com.mobclix.android.sdk.MobclixAdView$FetchAdResponseThread.run(MobclixAdView.java:390)
at java.util.Timer$TimerImpl.run(Timer.java:290)
"
"java.lang.OutOfMemoryError
at org.apache.http.util.ByteArrayBuffer.<init>(ByteArrayBuffer.java:53)
at org.apache.http.impl.io.AbstractSessionOutputBuffer.init(AbstractSessionOutputBuffer.java:77)
at org.apache.http.impl.io.SocketOutputBuffer.<init>(SocketOutputBuffer.java:76)
at android.net.http.AndroidHttpClientConnection.bind(AndroidHttpClientConnection.java:115)
at android.net.http.HttpConnection.openConnection(HttpConnection.java:61)
at android.net.http.Connection.openHttpConnection(Connection.java:378)
at android.net.http.Connection.processRequests(Connection.java:237)
at android.net.http.ConnectionThread.run(ConnectionThread.java:125)
"
So the main question is: am I leaking somewhere,
or can this be considered normal because in a small percentage of cases the phone may be out of memory due to other applications running on it?
A common JVM constraint is that only unreferenced objects can be removed by the garbage collector. If you have large persistent objects, it's important to set unused fields in those objects to null so that the data they point to is no longer referenced. A classic problem is keeping something like a HashMap around with a lot of values in it when you don't need it, since every entry in the HashMap is chewing up memory.
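A minimal sketch of that pattern (class and sizes are illustrative): dropping the single reference to the map makes every entry unreachable at once:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheHolder {
    // A long-lived object holding a potentially large map.
    private Map<String, byte[]> cache = new HashMap<>();

    int fill(int entries) {
        for (int i = 0; i < entries; i++) {
            cache.put("key" + i, new byte[1024]);
        }
        return cache.size();
    }

    // When the data is no longer needed, null the field: the map and all of
    // its entries become eligible for collection in one step.
    void done() {
        cache = null;
    }
}
```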
Have you used allocation tracker in DDMS? Could help you find unexpected memory leaks.
http://developer.android.com/resources/articles/track-mem.html
(I haven't used it myself so far though)
As Thomas suggested, you really want to use the DDMS to look at your memory usage.
Also, a very common problem for leaks is use of static variables - use them only if you know what you're doing.
Handling bitmaps can also get very expensive on Android. What does your app do? Also, do you have lots references to any UI elements? Any ones defined as static?
There are things that may be out of your control (memory on the phone is an example) but nonetheless you're responsible for the behavior of your application.
How you handle memory issues will influence how users view your application. If it plays well with other applications, users will be more likely to use it. If it doesn't, they won't.
What do you mean by "general java" exceptions? And if these are unrelated to your piece of software, then why are you receiving them?
As you probably know, the Dalvik virtual machine only has a small amount of memory allotted to itself (and to your application). It is implemented this way to avoid the possibility of a process growing out of control and draining all of the available resources, making the phone unusable. So if your application performs many memory-intensive operations (like loading pictures) and you are not careful with your allocations (and clearing them as soon as they are unneeded), then bizarre outcomes may be observed.
About the force close: since you are catching these exceptions, they should not cause a crash of your application, unless you have failed to re-instantiate something after catching an exception.
Maybe inspection of your code and elimination of unneeded memory allocations will prove helpful. Also, you can test as my boss does - he just freaks out pushing buttons at random until something crashes :D
EDIT
Since you say that there is nothing memory-expensive in your code (except the ads, probably), you can add a simple check to see whether the whole system is low on memory when the error occurs, or whether it is your application that causes it. Have a look at the onLowMemory callback. It is called when the whole phone is low on memory.
When you get an OutOfMemoryError, you can be sure it is your application, and not another one, that causes it. Each Android app runs in its own Dalvik VM with 16 MB of maximum memory allocation.
If you do not use bitmaps (which are a frequent source of memory leaks), you also have to check if you handle orientation changes correctly, that is without keeping in memory any reference to an object relative to the UI.
I'm developing a program that requires a huge amount of memory, and I want to catch it when an out-of-memory exception happens. I had heard this is not possible to do, but I'm curious whether there has been any development on this front.
It's not an exception; it's an error: java.lang.OutOfMemoryError
You can catch it as it descends from Throwable:
try {
    // create lots of objects here and stash them somewhere
} catch (OutOfMemoryError e) {
    // release some (all) of the above objects
}
However, unless you're doing some rather specific stuff (allocating tons of things within a specific code section, for example) you likely won't be able to catch it as you won't know where it's going to be thrown from.
It's possible:
try {
    // tragic logic created an OOME, but we can blame it on lack of memory
} catch (OutOfMemoryError e) {
    // but what the hell will you do here :)
} finally {
    // get ready to be fired by your boss
}
You can catch an OutOfMemoryError (OOM) and attempt to recover from it, BUT IT IS PROBABLY A BAD IDEA ... especially if your aim is for the application to "keep going".
There are a number of reasons for this:
As others have pointed out, there are better ways to manage memory resources than explicitly freeing things; i.e. using SoftReference and WeakReference for objects that could be freed if memory is short.
If you wait until you actually run out of memory before freeing things, your application is likely to spend more time running the garbage collector. Depending on your JVM version and on your GC tuning parameters, the JVM can end up running the GC more and more frequently as it approaches the point at which will throw an OOM. The slowdown (in terms of the application doing useful work) can be significant. You probably want to avoid this.
If the root cause of your problem is a memory leak, then the chances are that catching and recovering from the OOM will not reclaim the leaked memory. Your application will keep going for a bit, then OOM again, and again, and again at ever-decreasing intervals.
So my advice is NOT attempt to keep going from an OOM ... unless you know:
where and why the OOM happened,
that there won't have been any "collateral damage", and
that your recovery will release enough memory to continue.
Just throwing this out there for all those who ponder why someone might be running out of memory: I'm working on a project that runs out of memory frequently, and I have had to implement a solution for this.
The project is a component of a forensics and investigation app. After collecting data in the field (using a very low memory footprint, btw), the data is opened in our investigation app. One of the features is to perform a CFG traversal of any arbitrary binary image that was captured in the field (applications from physical memory). These traversals can take a long time, but produce very helpful visual representations of the binary that was traversed.
To speed up the traversal process, we try to keep as much data in physical memory as possible, but the data structures grow as the binary grows and we cannot keep it ALL in memory (the goal is to use a Java heap of less than 256m). So what do I do?
I created disk-backed versions of LinkedLists, Hashtables, etc. These are drop-in replacements for their counterparts and implement all the same interfaces, so they look identical from the outside world.
The difference? These replacement structures cooperate with each other, catching out-of-memory errors and requesting that the least recently used elements from the least recently used collection be freed from memory. Freeing an element dumps it to disk in a temporary file (in the system-provided temp directory) and marks a placeholder object as "paged out" in the proper collection.
There are PLENTY of reasons you might run out of memory in a Java app; the root of most of them is one or both of:
1. The app runs on a resource-constrained machine (or attempts to limit resource usage by limiting heap size).
2. The app simply requires large amounts of memory (image editing was suggested, but how about audio and video? What about compilers, like in my case? How about long-term data collectors without non-volatile storage?)
-bit
It is possible to catch an OutOfMemoryError (it's an Error, not an Exception), but you should be aware that there is no way to get defined behaviour.
You may even get another OutOfMemoryError while trying to catch it.
So the better way is to create/use memory-aware caches. There are some frameworks out there (example: JCS), but you can easily build your own using SoftReference. There is a small article about how to use it here. Follow the links in the article for more information.
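A bare-bones memory-aware cache along those lines might look like this sketch (class and method names are illustrative, not the JCS API); the GC is free to clear the SoftReferences under memory pressure, and the cache simply recomputes:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    // Return the cached value, recomputing it if the GC cleared the
    // soft reference under memory pressure (or it was never cached).
    public V get(K key, Function<K, V> compute) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {
            value = compute.apply(key);
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```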
There is probably at least one good time to catch an OutOfMemoryError, when you are specifically allocating something that might be way too big:
public static int[] decode(InputStream in, int len) throws IOException {
    int[] result;
    try {
        result = new int[len];
    } catch (OutOfMemoryError e) {
        throw new IOException("Result too long to read into memory: " + len);
    } catch (NegativeArraySizeException e) {
        throw new IOException("Cannot read negative length: " + len);
    }
    ...
}
It is possible, but if you run out of heap it's not very useful. If there are resources which can be freed, you are better off using SoftReference or WeakReference to such resources, and their clean-up will be automatic.
I have found it useful for direct memory, because running out of direct memory doesn't trigger a GC automatically for some reason. So I have had cause to force a GC when I fail to allocate a direct buffer.
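That direct-buffer case might be handled like this sketch (the class name is illustrative); System.gc() is only a hint, so this is best-effort:

```java
import java.nio.ByteBuffer;

public class DirectAllocator {
    // Retry a direct-buffer allocation once after suggesting a GC, since
    // exhausting direct memory may not trigger a collection on its own.
    static ByteBuffer allocateDirect(int capacity) {
        try {
            return ByteBuffer.allocateDirect(capacity);
        } catch (OutOfMemoryError e) {
            System.gc(); // a hint only; gives unreachable buffers a chance to free
            try {
                Thread.sleep(100); // crude pause for reference processing
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            return ByteBuffer.allocateDirect(capacity); // second attempt
        }
    }
}
```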
It is possible to catch any exception. Just write:
try {
    // code which you think might throw an exception
} catch (java.lang.Throwable t) {
    // you got the exception. Now what??
}
Ideally you are not supposed to catch java.lang.Error exceptions. Not catching such Errors, and letting the application terminate, might be the best solution when they occur. If you think that you can handle such Errors well, then go ahead.
Sure, catching OutOfMemoryError is allowed. Make sure you have a plan for what to do when it happens. You will need to free up some memory (by dropping references to objects) before allocating any more objects, or you will just run out of memory again. Sometimes the mere act of unwinding the stack a few frames will do that for you, some times you need to do something more explicit.
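An explicit version of that "drop references, then allocate again" recovery might look like this sketch (names and sizes are illustrative):

```java
import java.util.List;

public class RetryAfterDrop {
    // On OOME, release what we can by dropping references, then retry the
    // allocation once. If the hoard was the only thing pinning the memory,
    // the second attempt can succeed.
    static byte[] allocateWithFallback(List<byte[]> hoard, int size) {
        try {
            return new byte[size];
        } catch (OutOfMemoryError e) {
            hoard.clear(); // make the hoarded buffers unreachable first
            return new byte[size]; // then try again
        }
    }
}
```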