The application is down due to an OutOfMemoryError. On checking the WebLogic logs, I got the exception below. What should be done to avoid this?
java.lang.OutOfMemoryError: getNewTla
java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 4K.
java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 4K.
at weblogic.utils.io.Chunk.<init>(Chunk.java:293)
at weblogic.utils.io.Chunk.getChunk(Chunk.java:141)
at weblogic.servlet.internal.ChunkOutput.<init>(ChunkOutput.java:112)
at weblogic.servlet.internal.ChunkOutput.create(ChunkOutput.java:156)
at weblogic.servlet.internal.ServletOutputStreamImpl.<init>(ServletOutputStreamImpl.java:92)
at weblogic.servlet.internal.ServletResponseImpl.<init>(ServletResponseImpl.java:155)
at weblogic.servlet.internal.MuxableSocketHTTP.<init>(MuxableSocketHTTP.java:111)
at weblogic.servlet.internal.ProtocolHandlerHTTP.createSocket(ProtocolHandlerHTTP.java:65)
at weblogic.socket.MuxableSocketDiscriminator.dispatch(MuxableSocketDiscriminator.java:131)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:901)
at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:840)
at weblogic.socket.EPollSocketMuxer.dataReceived(EPollSocketMuxer.java:215)
at weblogic.socket.EPollSocketMuxer.processSockets(EPollSocketMuxer.java:177)
at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
Your application has a memory leak.
You have to find it and make the necessary changes.
I usually use JProfiler to find memory leaks, but there are a lot of tools that can help you.
Take a look at this video: Find a memory leak using JProfiler
If you are running WebLogic on JRockit, you can use the following commands to get some information on the heap:
Find the PID of the WebLogic node running the web app.
Go to the JRockit bin directory.
Command to get a summary of the heap:
jrcmd {pid} heap_diagnostics
Command to generate a heap dump (HPROF):
jrcmd {pid} hprofdump filename={filename}.hprof
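For example, assuming the WebLogic process ID turned out to be 4242 (a made-up value) and an arbitrary output file name, the two calls would look like this:
jrcmd 4242 heap_diagnostics
jrcmd 4242 hprofdump filename=weblogic_heap.hprof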
As mentioned in the comments above, you could start by inspecting the heap dump with MAT (Eclipse Memory Analyzer) to find memory leaks. The best way is to inspect the histogram with a filter on your application packages/classes. Look for an unusually high number of instances of your application classes.
As @jfcorugedo mentioned, it might be due to a memory leak, or you may simply be running with far less memory than your application needs (try increasing the heap and doing a run). First, look at your GC log. You can also use tools like MAT if you have taken a memory dump.
Related
In our team, we have a service with a spill-over problem. It is caused by long API latency, most of which is GC time. I then found that the heap memory usage is very high, so I took a heap dump of the service using jmap; it is about 4.4 GB. I parsed the heap dump with the Eclipse Memory Analyzer and found that 2.8 GB of it is unreachable objects.
Does anyone have suggestions on what I should do to debug this problem further?
Thank you.
If you have a heap dump from when it ran out of memory, I suggest using MAT to find any suspicious dominator trees, i.e. a narrow reference path to a large retained set.
It could be that the same classes are ending up in different class loaders, or it could be HTTP session retention (if it is a web application) or a badly behaved cache.
I suggest you start with the simple things first.
Take a quick look at which JARs are being loaded, and from where.
Make sure class unloading is enabled (with CMS); see the flags sketched below.
Use a memory analyser tool on the heap dump to see exactly what is being retained and by whom.
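For reference, a rough sketch of the relevant HotSpot flags for a CMS setup (availability and defaults depend on your JDK version, so treat this as an assumption to verify):
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled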
Hello, I have taken a thread dump with jstack for my web application, which keeps running out of memory, but I am a little confused about how to find the culprit thread. Can someone give tips on how to analyze the dump file?
Use VisualVM, which is included with the JDK.
When you have an OutOfMemoryError, the first step is to read the associated message. It explains the cause of the error: heap, perm, thread, ...
Depending on the cause, you have to check the configuration of that space: -Xms and -Xmx for the heap, -XX:PermSize and -XX:MaxPermSize for the permanent generation (Java 7 and earlier), -XX:MaxMetaspaceSize for the metaspace (Java 8+), ... The configuration may be too low for your needs.
After that, use tools to understand how the memory is consumed. VisualVM is great: it provides metrics on the memory and helps you take heap dumps or profile the memory (not in production). You may add the -XX:+HeapDumpOnOutOfMemoryError option to your startup script so that a heap dump is automatically generated when an OutOfMemoryError occurs.
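For illustration, a startup line along these lines (heap sizes, dump path, and JAR name are assumptions to tune for your application) writes an HPROF file automatically when an OutOfMemoryError occurs:
java -Xms512m -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/myapp/heapdumps -jar myapp.jar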
If you have memory leaks, I suggest using more advanced (and not free) profiling tools such as JProfiler or YourKit.
I have several applications running in a Tomcat7 instance.
Once in a while, I notice that there are OutOfMemoryErrors in the log.
How can I find out which application (ideally, which class) causes them?
Update 1 (25.12.2014 11:44 MSK):
I changed something in the application (added a shutdown call to a Quartz scheduler when the servlet context is destroyed) that may have been causing memory leaks.
Now my memory consumption charts look as shown below.
Does any of them indicate memory leaks in the application?
If yes, which one?
There is good documentation about that: http://www.oracle.com/technetwork/java/javase/clopts-139448.html
Create a heap dump with the VM parameters described in the link above.
Analyze this heap dump, for example with Memory Analyzer (https://eclipse.org/mat/).
An OOM can occur for many reasons:
1.) Memory leaks
2.) Generation of a large number of local variables, etc.
An OOM is a common indication of a memory leak. Essentially, the error is thrown when there is insufficient space to allocate a new object.
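For illustration, a minimal (hypothetical) program that retains everything it allocates will eventually fail with java.lang.OutOfMemoryError: Java heap space, especially when run with a small heap such as -Xmx64m:

import java.util.ArrayList;
import java.util.List;

public class HeapExhaustion {
    public static void main(String[] args) {
        // Every allocation stays reachable through this list, so nothing can be collected.
        List<byte[]> retained = new ArrayList<>();
        while (true) {
            retained.add(new byte[1024 * 1024]); // allocate and keep 1 MB per iteration
        }
    }
}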
A few of the exception messages:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
java.lang.OutOfMemoryError: <reason> (Native method)
You need to analyze the heap dump/thread dumps, etc.
Detecting a memory leak
You can use jmap. It gives a snapshot of the Java process:
how many objects are in memory, along with the size of the objects.
jmap -histo #processID
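For example, assuming the process ID is 1234 (a made-up value), the first lines of the histogram show the classes with the most instances and bytes:
jmap -histo 1234 | head -n 30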
In a few circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, this causes the application to stop responding and causes problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB within 6 hours. In an attempt to find the memory usage trend, we tried to collect a heap dump every hour after restarting the application. But, as said, since using jmap causes the application to hang, we need to restart it, and so we are not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap for collecting heap dumps?
Thoughts on this are highly appreciated, since without the trend of memory usage it is very difficult to fix the issue.
Note: Our application runs on CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! The dump files are also much smaller, but they are enough to give you an idea of where the leaks are.
At the time of writing this answer, there are bugs in Memory Analyzer and IBM HeapAnalyzer that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get the heap dump without running jmap on the target VM; however, this will still hang the application for the amount of time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), that is still about 2 minutes of downtime for a 12 GB heap.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
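A sketch of that workflow, with a made-up PID of 1234 and illustrative paths: write a core file with gcore (the process is paused only while the core is written), then run jmap against the core offline. Whether jmap accepts a core file this way depends on the JDK version, so treat the second command as an assumption to verify:
gcore -o /var/tmp/app-core 1234
jmap -dump:format=b,file=/var/tmp/app.hprof /usr/bin/java /var/tmp/app-core.1234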
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow, let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue about where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look in for the problem.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram filename=histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
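For the Oracle/HotSpot JVM, the sketch below (class name and file path are made up) uses com.sun.management.HotSpotDiagnosticMXBean to write an HPROF file from inside the application:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    // Writes an HPROF heap dump; when "live" is true, only reachable objects are included.
    public static void dumpHeap(String filePath, boolean live) throws java.io.IOException {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        diagnostic.dumpHeap(filePath, live);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("/var/tmp/manual-dump.hprof", true); // path is only an example
    }
}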
I have a Web application running on my 64-bit Windows Server 2003, Oracle 11G database and Apache Tomcat 6.0 Web Server.
The application is in a live environment with around 3000 users, and I have encountered a Java heap OutOfMemoryError. After increasing the heap space it was resolved.
Now again I am facing same issue, below is the error stack trace:
Exception in thread "http-8080-10" java.lang.OutOfMemoryError: Java heap space
Aug 23, 2013 8:48:00 PM com.SessionClunter getActiveSessions
Exception in thread "http-8080-11" java.lang.OutOfMemoryError: Java heap space
Exception in thread "http-8080-4"
Exception in thread "http-8080-7" java.lang.OutOfMemoryError: Java heap space
Your problem could be caused by a few things (at a conceptual level):
You could simply have too many simultaneous users or user sessions.
You could be attempting to process too many user requests simultaneously.
You could be attempting to process requests that are too large (in some sense).
You could have a memory leak ... which could be related to some of the above issues, or could be unrelated.
There is no simple solution. (You've tried the only easy solution ... increasing the heap size ... and it hasn't worked.)
The first step in solving this is to change your JVM options so that it takes a heap dump when an OOME occurs. Then you use a memory dump analyser to examine the dump and figure out which objects are using too much memory. That should give you some evidence that allows you to narrow down the possible causes ...
If you keep getting OutOfMemoryError no matter how much you increase the max heap, then your application probably has a memory leak, which you must solve by getting into the code and fixing it. Short of that, you have no choice but to keep increasing the max heap for as long as you can.
You can look for memory leaks and optimize using completely free tools like this:
Create a heap dump of your application when it is using a lot of memory, but before it crashes, using jmap, which is part of the Java installation used by your JVM container (Tomcat in your case):
# if your process id is 1234
jmap -dump:format=b,file=/var/tmp/dump.hprof 1234
Open the heap dump using the Eclipse Memory Analyzer (MAT)
MAT gives suggestions about potential memory leaks. Try to follow those.
Look at the histogram tab. It shows all the objects that were in memory at the time of the dump, grouped by their class. You can order by memory use and by the number of objects. When you have a memory leak, there are usually shockingly many instances of some objects that clearly don't make sense at all. I have often tracked down memory leaks based on that information alone.
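As an illustration (all names here are made up), a typical pattern that shows up this way is a static cache that is never evicted; in the histogram you would see an ever-growing number of instances of the cached value class and of HashMap$Node:

import java.util.HashMap;
import java.util.Map;

public class ReportCache {
    // Entries are added per request but never removed, so every Report
    // stays reachable from this static map for the lifetime of the JVM.
    private static final Map<String, Report> CACHE = new HashMap<>();

    public static Report load(String id) {
        return CACHE.computeIfAbsent(id, Report::new);
    }

    static class Report {
        private final byte[] payload = new byte[256 * 1024]; // ~256 KB per entry
        Report(String id) { }
    }
}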
Another useful free JVM monitoring tool is VisualVM. A non-free but very powerful tool is JProfiler.