I start my Java code (1.6.0_16 on Vista) with the following params (among others): -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs. When I run the code I can see in the logs that there are two OOMs.
I know about the first one because I can see in stdout that the hprof file is being created:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to ../logs\java_pid4604.hprof ...
Heap dump file created [37351818 bytes in 1.635 secs]
Then, towards the end of the run, I get another OOM. I catch this one, but I don't get a second hprof file. Does anybody know why that is? Is it because I have caught the OOM exception?
I wouldn't try to recover from an OutOfMemoryError, as some objects might end up in an undefined state (just think of an ArrayList that couldn't allocate the array it uses to store data, for instance).
Regarding your question, I'd suspect that -XX:+HeapDumpOnOutOfMemoryError is only creating a single dump intentionally to prevent multiple heap dumps: just think about several threads throwing an OOME at the same time, causing a heap dump for each thrown exception.
As a summary: don't try to recover from an OOME, and don't expect the JVM to write more than a single heap dump. However, if you still feel the need to generate a heap dump, you could try to manually handle an OOME exception and call jmap to create a dump, or use "-XX:+HeapDumpOnCtrlBreak" (not sure, though, how to simulate Ctrl-Break programmatically).
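As a rough illustration of the "handle an OOME and call jmap yourself" idea, here is a minimal sketch. It assumes a HotSpot JVM (where the runtime MXBean name starts with the pid), that jmap is on the PATH, and that the JVM is still healthy enough to fork a process after the error; none of that is guaranteed, and the class and file names are made up for the example.

import java.lang.management.ManagementFactory;

public class DumpOnOome {
    public static void main(String[] args) throws Exception {
        try {
            runApplication(); // hypothetical application entry point
        } catch (OutOfMemoryError oome) {
            // On HotSpot the runtime name is "<pid>@<hostname>"; this is not guaranteed by the spec.
            String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
            // Best effort: ask an external jmap to write a dump of this process.
            new ProcessBuilder("jmap", "-dump:format=b,file=../logs/manual_" + pid + ".hprof", pid)
                    .start()
                    .waitFor();
            throw oome; // re-throw: do not try to keep running after an OOME
        }
    }

    private static void runApplication() {
        // the real work that may exhaust the heap goes here
    }
}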
An out-of-memory error generates only one dump file, on the first occurrence. If you want more, you can try jmap, or keep jconsole attached to the JVM (version 6); then, after everything has crashed (e.g. in the morning), you can create your own dump from jconsole (or your analyser tool of choice).
More on the subject of dumps can be found under Eclipse Memory Analyzer.
Related
In some circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, jmap causes it to stop responding, which creates problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB within 6 hours. In an attempt to find the memory usage trend, we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we have to restart it, and so we are not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap to collect heap dumps?
Thoughts on this are highly appreciated, since without the trend of memory usage it is very difficult to fix the issue.
Note: Our application runs in CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! The dump files are also much smaller, but they are enough to give you an idea of where the leaks are.
At the time of writing this answer, there are bugs in Memory Analyzer and IBM HeapAnalyzer that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get a heap dump without running jmap on the target VM; however, this will still hang the application for the amount of time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), for a 12 GB heap this is still about 2 minutes of downtime.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow, let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look for a problem.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram filename=histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
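For the Oracle/HotSpot case linked above, the usual approach is the com.sun.management.HotSpotDiagnosticMXBean. A minimal sketch (the class name and output path are just placeholders; the IBM JVM exposes a different, vendor-specific API):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public final class HeapDumper {
    private static final String HOTSPOT_BEAN_NAME = "com.sun.management:type=HotSpotDiagnostic";

    // live = true dumps only objects that are still reachable
    public static void dumpHeap(String outputFile, boolean live) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(), HOTSPOT_BEAN_NAME, HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(outputFile, live);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("/tmp/manual.hprof", true);
    }
}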
Not getting a Java heap dump on OutOfMemoryError:
I tried this (one.exe is my Java RCP app):
one.exe -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\Temp
but it didn't help; the folder path is accessible. I even tried giving a name like heap.hprof, but with no result. Can someone guide me here?
Some reported heap overflows are not real overflows: the error is thrown directly after a size calculation, and the heap itself never actually fills up, for example with DirectByteBuffer (which uses allocateMemory, i.e. native memory).
-XX:+HeapDumpOnOutOfMemoryError command-line option: if you specify the -XX:+HeapDumpOnOutOfMemoryError command-line option and an OutOfMemoryError is thrown, the VM generates a heap dump.
The above argument says the VM will generate a heap dump when an OOM is thrown. If the argument is passed to the JVM and no heap dump is generated, it implies the application has not yet suffered an OutOfMemory condition. If you need confirmation, check the application's native stderr log file to see whether it suffered any OOM, or include the -verbose:gc JVM argument, collect the verbose logs, and check the last GC cycle for the free bytes of the Java heap.
In my Java application, a heap dump file gets generated when I read from the OutputStream of a script. I am sure there is a memory leak in my application. But even after the heap dump is generated, the thread that is causing the memory leak does not exit. I am not catching Throwable, Exception, Error etc. in the run method.
I want to know when the Heap Dump file will get generated when I have not specified any special VM argument like
-XX:+HeapDumpOnOutOfMemoryError
AFAIK, heap dumps are only generated automatically if you specify that option, at least in Oracle's JVM (I don't know about the others, but I doubt they do it automatically).
In most cases you have to trigger heap dump generation manually.
There are also ways to programmatically create a heap dump, but those are JVM specific and depend on how and when the programmer calls them. If that option is used then you'd have to look for that as it could be anywhere.
My application is deployed in a cluster environment. Recently the server went down with the following stack trace. It doesn't seem to be coming from the code. It was running all right until recently, when this error popped up. No major changes were made to the server. Can someone advise?
java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:44)
at java.lang.StringBuilder.<init>(StringBuilder.java:69)
at java.io.ObjectStreamClass$FieldReflectorKey.<init>(ObjectStreamClass.java:2106)
at java.io.ObjectStreamClass.getReflector(ObjectStreamClass.java:2039)
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:586)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1552)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1591)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at weblogic.utils.io.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:195)
at weblogic.rjvm.MsgAbbrevInputStream.readObject(MsgAbbrevInputStream.java:565)
at weblogic.utils.io.ChunkedObjectInputStream.readObject(ChunkedObjectInputStream.java:191)
at weblogic.rmi.internal.dgc.DGCServerImpl_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:589)
at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:479)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:475)
at weblogic.rmi.internal.BasicServerRef.access$300(BasicServerRef.java:59)
at weblogic.rmi.internal.BasicServerRef$BasicExecuteRequest.run(BasicServerRef.java:1016)
at weblogic.work.SelfTuningWorkManagerImpl.schedule(SelfTuningWorkManagerImpl.java:126)
at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:321)
at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:918)
at weblogic.rjvm.RJVMImpl.dispatchRequest(RJVMImpl.java:1084)
at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:1001)
at weblogic.rjvm.ConnectionManagerServer.handleRJVM(ConnectionManagerServer.java:240)
at weblogic.rjvm.ConnectionManager.dispatch(ConnectionManager.java:877)
at weblogic.rjvm.MsgAbbrevJVMConnection.dispatch(MsgAbbrevJVMConnection.java:446)
at weblogic.rjvm.t3.MuxableSocketT3.dispatch(MuxableSocketT3.java:368)
at weblogic.socket.AbstractMuxableSocket.dispatch(AbstractMuxableSocket.java:383)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:872)
You are running out of memory, which suggests one of the following:
you need to give your process more memory (with the -Xmx java command line option); or
you have a memory leak
Without more information, it's hard to say which is the case. The stack trace for an OutOfMemoryError is rarely useful, as it only shows the point at which heap was exhausted; it doesn't show you why your heap is being filled up.
The answer by Simon Nickerson is correct.
Just to add: your stack trace originates from weblogic.socket.SocketMuxer.readReadySocketOnce, which is the internal WebLogic class that accepts incoming requests. So this means the server does not even have enough memory to accept requests.
Are you using the JRockit JVM? If you are, you can use JRockit Mission Control to monitor the Java heap usage. You can also use the JRockit Flight Recorder to record JVM events for offline analysis. There is an Oracle webcast on this here: http://www.vimeo.com/22109838. You can skip to 4:54, which is where the overview of JRockit, WLDF and JRF starts.
Keep in mind that when the heap is full it is the NEXT operation that fails with the OutOfMemoryError, and therefore this stack trace may not indicate the cause of the failure at all. It simply indicates that when this code ran there wasn't enough heap, not that this code caused the heap to fill up.
** Edits...
Clearly the server is out of memory - at the time of this specific operation. The question is... why? This stack trace doesn't tell you -why- it just indicates that whatever was happening at the time could not complete because there was not enough memory available at that time. This does not mean that it is the cause of the problem.
Sure, you can add more memory but that may not fix the problem - it may only take longer for it to appear.
Set this in catalina.sh/catalina.bat:
Find the line that sets JAVA_OPTS=%JAVA_OPTS%.
Whatever your RAM is, adjust the values accordingly, but don't give it more than half of the RAM:
set JAVA_OPTS=%JAVA_OPTS% %LOGGING_CONFIG% -server -Xms512M -Xmx512M -XX:MaxPermSize=256M
It means that the JVM has run out of all the memory that has been allocated to it. You can change the amount of memory allocated to your JVM using the -Xms and -Xmx command-line parameters. Check the root cause here:
OutOfMemoryError in Java is a subclass of java.lang.VirtualMachineError; the JVM throws java.lang.OutOfMemoryError when it runs out of memory in the heap. An OutOfMemoryError can occur at any time, mostly when you try to create an object and there is not enough space in the heap to allocate it.
What is the best way to debug java.lang.OutOfMemoryError exceptions?
When this happens to our application, our app server (Weblogic) generates a heap dump file. Should we use the heap dump file? Should we generate a Java thread dump? What exactly is the difference?
Update: What is the best way to generate thread dumps? Is kill -3 (our app runs on Solaris) the best way to kill the app and generate a thread dump? Is there a way to generate the thread dump but not kill the app?
Analyzing and fixing out-of-memory errors in Java is very simple.
In Java the objects that occupy memory are all linked to some other objects, forming a giant tree. The idea is to find the largest branches of the tree, which will usually point to a memory leak situation (in Java, you leak memory not when you forget to delete an object, but when you forget to forget the object, i.e. you keep a reference to it somewhere).
Step 1. Enable heap dumps at run time
Run your process with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp
(It is safe to have these options always enabled. Adjust the path as needed; it must be writable by the java user.)
Step 2. Reproduce the error
Let the application run until the OutOfMemoryError occurs.
The JVM will automatically write a file like java_pid12345.hprof.
Step 3. Fetch the dump
Copy java_pid12345.hprof to your PC (it will be at least as big as your maximum heap size, so can get quite big - gzip it if necessary).
Step 4. Open the dump file with IBM's Heap Analyzer or Eclipse's Memory Analyzer
The Heap Analyzer will present you with a tree of all objects that were alive at the time of the error.
Chances are it will point you directly at the problem when it opens.
Note: give HeapAnalyzer enough memory, since it needs to load your entire dump!
java -Xmx10g -jar ha456.jar
Step 5. Identify areas of largest heap use
Browse through the tree of objects and identify objects that are kept around unnecessarily.
Note it can also happen that all of the objects are necessary, which would mean you need a larger heap. Size and tune the heap appropriately.
Step 6. Fix your code
Make sure to only keep objects around that you actually need. Remove items from collections in a timely manner. Make sure to not keep references to objects that are no longer needed, only then can they be garbage-collected.
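As a hedged illustration of what Step 6 usually boils down to, a very common pattern is a long-lived (often static) collection that is added to but never cleaned up; the class and method names below are invented for the example.

import java.util.HashMap;
import java.util.Map;

class SessionRegistry {
    // Long-lived map: everything put here stays reachable until explicitly removed.
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    static void register(String sessionId, byte[] payload) {
        CACHE.put(sessionId, payload);
    }

    // The fix: "forget the object" again once it is no longer needed,
    // so the garbage collector is free to reclaim it.
    static void unregister(String sessionId) {
        CACHE.remove(sessionId);
    }
}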
I've had success using a combination of Eclipse Memory Analyzer (MAT) and Java VisualVM to analyze heap dumps. MAT has some reports you can run that give you a general idea of where to focus your efforts within your code. VisualVM has a better interface (in my opinion) for actually inspecting the contents of the various objects you are interested in examining. It has a filter where you can have it display all instances of a particular class and see both where they are referenced and what they reference themselves. It has been a while since I've used either tool for this, so they may have a closer feature set now. At the time, using both worked well for me.
What is the best way to debug java.lang.OutOfMemoryError exceptions?
The OutOfMemoryError describes the type of error in its detail message. You have to check the description in the error message to handle the exception.
There are various root causes for out-of-memory exceptions. Refer to the Oracle documentation page for more details.
java.lang.OutOfMemoryError: Java heap space:
Cause: The detail message "Java heap space" indicates that an object could not be allocated in the Java heap.
java.lang.OutOfMemoryError: GC Overhead limit exceeded:
Cause: The detail message "GC overhead limit exceeded" indicates that the garbage collector is running all the time and Java program is making very slow progress
java.lang.OutOfMemoryError: Requested array size exceeds VM limit:
Cause: The detail message "Requested array size exceeds VM limit" indicates that the application (or APIs used by that application) attempted to allocate an array that is larger than the heap size.
java.lang.OutOfMemoryError: Metaspace:
Cause: Java class metadata (the virtual machine's internal representation of a Java class) is allocated in native memory (referred to here as metaspace).
java.lang.OutOfMemoryError: request size bytes for reason. Out of swap space?:
Cause: The detail message "request size bytes for reason. Out of swap space?" appears to be an OutOfMemoryError exception. However, the Java HotSpot VM code reports this apparent exception when an allocation from the native heap failed and the native heap might be close to exhaustion
java.lang.OutOfMemoryError: Compressed class space
Cause: On 64-bit platforms a pointer to class metadata can be represented by a 32-bit offset (with UseCompressedOops). This is controlled by the command line flag UseCompressedClassPointers (on by default).
If UseCompressedClassPointers is used, the amount of space available for class metadata is fixed at CompressedClassSpaceSize. If the space needed for UseCompressedClassPointers exceeds CompressedClassSpaceSize, a java.lang.OutOfMemoryError with the detail "Compressed class space" is thrown.
Note: There is more than one kind of class metadata - klass metadata and other metadata. Only klass metadata is stored in the space bounded by CompressedClassSpaceSize. The other metadata is stored in Metaspace.
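To make two of those messages concrete, here is an assumed toy snippet (not from the original answer) that provokes them; run it with a small heap, e.g. -Xmx16m, for the second case:

import java.util.ArrayList;
import java.util.List;

public class OomeKinds {
    public static void main(String[] args) {
        // "Requested array size exceeds VM limit": the requested length is larger
        // than the VM can ever allocate, regardless of -Xmx (uncomment to try).
        // int[] tooBig = new int[Integer.MAX_VALUE];

        // "Java heap space": the total amount retained simply exceeds the heap.
        List<long[]> hog = new ArrayList<long[]>();
        while (true) {
            hog.add(new long[1000000]); // roughly 8 MB per iteration, never released
        }
    }
}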
Should we use the heap dump file? Should we generate a Java thread dump? What exactly is the difference?
Yes. You can use the heap dump file to debug the issue with profiling tools like VisualVM or MAT.
You can use a thread dump to get further insight into the status of the threads.
Refer to this SE question to learn the differences:
Difference between javacore, thread dump and heap dump in Websphere
What is the best way to generate thread dumps? Is kill -3 (our app runs on Solaris) the best way to kill the app and generate a thread dump? Is there a way to generate the thread dump but not kill the app?
kill -3 <process_id> generates a thread dump, and this command does not kill the Java process.
It is generally very difficult to debug OutOfMemoryError problems. I'd recommend using a profiling tool. JProfiler works pretty well. I've used it in the past and it can be very helpful, but I'm sure there are others that are at least as good.
To answer your specific questions:
A heap dump is a complete view of the entire heap, i.e. all objects that have been created with new. If you're running out of memory then this will be rather large. It shows you how many of each type of object you have.
A thread dump shows you the stack for each thread, showing you where in the code each thread is at the time of the dump. Remember that any thread could have caused the JVM to run out of memory but it could be a different thread that actually throws the error. For example, thread 1 allocates a byte array that fills up all available heap space, then thread 2 tries to allocate a 1-byte array and throws an error.
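A rough sketch of that scenario (the thread names are invented; which thread actually throws depends on whose allocation happens to fail first):

import java.util.ArrayList;
import java.util.List;

public class WhoThrows {
    public static void main(String[] args) {
        final List<byte[]> retained = new ArrayList<byte[]>();

        // Fills the heap with large arrays that stay reachable.
        Thread hog = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    retained.add(new byte[10 * 1024 * 1024]);
                }
            }
        }, "hog");

        // Makes tiny allocations, yet may be the thread whose allocation fails
        // and therefore the one that throws the OutOfMemoryError.
        Thread victim = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    byte[] tiny = new byte[1];
                }
            }
        }, "victim");

        hog.start();
        victim.start();
    }
}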
You can also use jmap/jhat to attach to a running Java process. This family of tools is really useful if you have to debug a live, running application.
You can also leave jmap running as a cron task, logging into a file that you can analyse later (this is something we have found useful for debugging a live memory leak):
jmap -histo:live <pid> | head -n <top N things to look for> > <output.log>
jmap can also be used to generate a heap dump using the -dump option, which can then be read with jhat.
See the following link for more details
http://www.lshift.net/blog/2006/03/08/java-memory-profiling-with-jmap-and-jhat
Here is another link to bookmark
http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
It looks like IBM provides a tool for analyzing those heap dumps: http://www.alphaworks.ibm.com/tech/heaproots ; more at http://www-01.ibm.com/support/docview.wss?uid=swg21190476 .
Once you get a tool to look at the heap dump, look at any thread that was in the Running state in the thread stack. It's probably one of those that got the error. Sometimes the heap dump will tell you which thread had the error right at the top.
That should point you in the right direction. Then employ standard debugging techniques (logging, debugger, etc.) to home in on the problem. Use the Runtime class to get the current memory usage and log it as the method or process in question executes.
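A small sketch of the Runtime-based logging suggested above (the class name and label are made up; call it before and after the suspect method):

public final class HeapUsageLogger {
    public static void log(String label) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println(label + ": using " + usedMb + " MB of " + maxMb + " MB max heap");
    }
}

// e.g. HeapUsageLogger.log("before processOrders"); ... HeapUsageLogger.log("after processOrders");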
I generally use Eclipse Memory Analyzer. It displays the suspected culprits (the objects occupying most of the heap dump) and the different call hierarchies that generate those objects. Once that mapping is there, we can go back to the code and try to understand whether there is a possible memory leak anywhere in the code path.
However, an OOM doesn't always mean that there is a memory leak. It's always possible that the memory needed by an application during steady state or under load is simply not available in the hardware/VM. For example, there could be a 32-bit Java process (max memory usable ~4 GB) whereas the VM has just 3 GB. In such a case, the application may run fine initially, but an OOM may be encountered as soon as the memory requirement approaches 3 GB.
As mentioned by others, capturing a thread dump is not costly, but capturing a heap dump is. I have observed that while a heap dump is being captured the application (generally) freezes, and only a kill followed by a restart helps it recover.