Hello, I have taken a thread dump of my web application, which keeps running out of memory, using jstack. I am a little confused about how to find the culprit thread. Can someone give tips on how to analyze the dump file?
Use VisualVM which is included with the JDK.
When you get an OutOfMemoryError, the first step is to read the associated message. It explains the cause of the error: heap, perm, thread, ...
Depending on the cause, you have to check the configuration of that space: -Xms and -Xmx for the heap, -XX:PermSize and -XX:MaxPermSize for the permgen (Java 7 and earlier), -XX:MaxMetaspaceSize for the metaspace (Java 8+), ... The configured value may be too low for your needs.
After that, use tools to understand how the memory is consumed. VisualVM is great: it provides metrics on memory usage and helps you take heap dumps or profile the memory (not in production). You may add the -XX:+HeapDumpOnOutOfMemoryError option to your startup script so that a heap dump is automatically generated when an OutOfMemoryError occurs.
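For example, a startup line combining the sizing and heap-dump options might look like this (the sizes and the dump path are illustrative placeholders, not recommendations):

java -Xms512m -Xmx2g -XX:MaxMetaspaceSize=512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar app.jar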
If you have memory leaks, I suggest using more advanced (and not free) profiling tools such as JProfiler or YourKit.
I have a Java program that has been running for days; it processes incoming messages and forwards them out.
A problem I noticed today is that the heap size I print via Runtime.totalMemory() shows only ~200 MB, but the RES column in the top command shows it is occupying 1.2 GB of RAM.
The program is not using direct byte buffers.
How can I find out why the JVM is taking this much extra RAM?
Some other info:
I am using openjdk-1.8.0
I did not set any JVM options to limit the heap size, the startup command is simply: java -jar my.jar
I tried a heap dump using jcmd; the dump file size is only about 15 MB.
I tried pmap, but it printed too much information and I don't know which parts are useful.
The Java Native Memory Tracking tool is very helpful in situations like this. You enable it by starting the JVM with the flag -XX:NativeMemoryTracking=summary.
Then when your process is running you can get the stats by executing the following command:
jcmd [pid] VM.native_memory
This will produce a detailed output listing, e.g., the heap size and metaspace size, as well as native memory allocated outside the heap.
You can also use this tool to create a baseline to monitor allocations over time.
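To monitor allocations over time (a sketch; <pid> is a placeholder for your process id), take a baseline first and then diff against it later:

jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff

The diff output shows how much each category (heap, class, thread, code, ...) has grown since the baseline.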
As you will see using this tool, the JVM by default reserves about 1 GB for the metaspace, even though only a fraction of it may be used. This may account for part of the RSS usage you are seeing.
One thing to check: if your heap is not taking much memory, use a profiler to see how much non-heap memory is used. If that amount is high and does not come down even after a GC cycle, you should probably be looking for a (non-heap) memory leak.
If the non-heap memory is not taking much and everything looks good in the profiler, then it is probably the JVM holding on to the memory rather than releasing it back to the OS.
So check with a profiling tool whether GC has run at all; force a GC and observe whether the memory comes down, expands, or stays the same.
JVM memory and heap memory behave differently. After a GC cycle, the JVM measures free versus used memory and decides whether to expand or shrink the heap based on the following parameters:
-XX:MinHeapFreeRatio=<n>
-XX:MaxHeapFreeRatio=<n>
By default these are set to 40 and 70, and you may be interested in tuning them. This is especially critical in containerized environments.
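As a sketch (the values are illustrative, not recommendations), letting the JVM return memory to the OS more aggressively in a container might look like:

java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar my.jar

With a lower MaxHeapFreeRatio, the JVM shrinks the heap sooner once more than that percentage of it is free after a GC.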
You can use VisualVM to monitor what is happening inside your JVM. You can also use JConsole for a first overview; it comes with the JDK itself. If your JDK bin directory is on the PATH, start it from a terminal with jconsole, then select your application and start monitoring.
Getting java.lang.OutOfMemoryError: Java heap space on JBoss 7.
The entry in the JBoss configuration is:
set "JAVA_OPTS=-Xms1G -Xmx2G -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=2096M"
The error was:
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332) [rt.jar:1.8.0_231]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) [rt.jar:1.8.0_231]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) [rt.jar:1.8.0_231]
at java.lang.StringBuffer.append(StringBuffer.java:270) [rt.jar:1.8.0_231]
at java.io.StringWriter.write(StringWriter.java:112) [rt.jar:1.8.0_231]
at java.io.PrintWriter.write(PrintWriter.java:456) [rt.jar:1.8.0_231]
at java.io.PrintWriter.write(PrintWriter.java:473) [rt.jar:1.8.0_231]
at java.io.PrintWriter.print(PrintWriter.java:603) [rt.jar:1.8.0_231]
at java.io.PrintWriter.println(PrintWriter.java:756) [rt.jar:1.8.0_231]
at java.lang.Throwable$WrappedPrintWriter.println(Throwable.java:765) [rt.jar:1.8.0_231]
at java.lang.Throwable.printEnclosedStackTrace(Throwable.java:698) [rt.jar:1.8.0_231]
at java.lang.Throwable.printEnclosedStackTrace(Throwable.java:710) [rt.jar:1.8.0_231]
You are encountering OutOfMemoryError: Java heap space, so in this case you don't have to increase the metaspace. I suggest you increase the heap allocation instead (-Xms3G -Xmx3G); make sure Xms and Xmx have the same value. If you still encounter the same issue, add the -XX:+HeapDumpOnOutOfMemoryError option, which generates a heap dump when the OOM error occurs. You can analyze the heap dump with a tool like Eclipse MAT to check which objects consume the most memory and whether there is a memory leak.
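A sketch of the adjusted configuration entry (the sizes are illustrative, and the smaller metaspace limit reflects that the metaspace was not the problem here):

set "JAVA_OPTS=-Xms3G -Xmx3G -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=512M -XX:+HeapDumpOnOutOfMemoryError"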
Depending on the application deployed on JBoss, even a 2 GB heap may not be enough.
Potential problems:
The Xmx configuration is not applied (the configuration was made in the wrong file)
The application just requires more heap
There is a memory leak in the application
If you run JBoss on Windows, set the following values in the JAVA_OPTS variable in the standalone.conf.bat file: -Xmx2G -XX:MaxMetaspaceSize=1G.
The standalone.conf file is ignored on Windows; it applies to *nix systems only.
Verify that these values are applied by connecting to the JVM using JConsole (part of the JDK) or JVisualVM.
Using these tools you can monitor heap usage and see if more heap is required.
If the heap size is big enough (e.g. 4+ GB) and the heap still constantly grows while garbage collection (GC) doesn't free space, there is probably a memory leak.
For analysis add to the JAVA_OPTS the following flags: -Xloggc:gc.log -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError.
With these flags, the JVM will log GC activity into the gc.log file, where you can explicitly see how much space is freed by each GC.
If, according to the log, subsequent GC runs don't free any space, there is probably a memory leak, and you should analyze a heap dump, either created manually using JVisualVM or created by the JVM itself on OutOfMemoryError. Heap dumps can be analyzed using JVisualVM or Eclipse Memory Analyzer (MAT).
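As a complementary check (not from the original answer), jstat can show the heap occupancy trend without taking a dump; here the GC utilization columns are sampled every 5 seconds (<pid> is a placeholder):

jstat -gcutil <pid> 5000

If the O (old generation) column keeps climbing toward 100 even across full GCs, that also points to a leak.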
If you use IntelliJ IDEA, add the options you need to the VM options field of the run configuration.
An OutOfMemoryError usually means that you're doing something wrong: either a configuration issue (the specified heap size is insufficient for the application), holding onto objects too long (which prevents them from being garbage collected), or trying to process too much data at a time.
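As an illustration of "holding onto objects too long" (a contrived sketch, not taken from the original answer): a static collection that is only ever appended to will eventually exhaust the heap.

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Static references live as long as the class is loaded, so nothing added here is ever collected.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            CACHE.add(new byte[1024 * 1024]); // adds 1 MB per iteration, never removes anything
        }
    }
}

Running this eventually throws java.lang.OutOfMemoryError: Java heap space; a heap dump taken at that point would show LeakExample's CACHE holding nearly all of the heap.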
Possible Solutions:
1) Try raising the memory settings, e.g. "JAVA_OPTS=-Xms1G -Xmx1G -XX:MaxPermSize=256M", and restart your server. (Note that -XX:MaxPermSize only applies up to Java 7; on Java 8+ use -XX:MaxMetaspaceSize instead.)
2) Check your code for memory leaks. Use a heap dump reader to verify this. (There are multiple plugins available for IDEs like Eclipse, IntelliJ, etc.)
3) Check your code (90% of the time the issue is in the code): check whether you are loading excess data from a database or some other source into heap memory and whether it is really required. If you are calling multiple web services and multiple read-only DB operations, cross-check the DB queries (whether joins are used with the right WHERE clauses) and the amount of data returned by each query and web service.
4) If the issue appeared after a recent code change, review that change.
5) Also check that cache and session elements are cleared once used.
6) To be sure the issue is not due to JBoss, run the same code on another server for testing purposes (Tomcat, WebSphere, etc.).
7) Check the Java documentation for more background on the OutOfMemoryError: Documentation Link
Make sure you have provided enough space in your standalone.conf file inside the bin directory:
JAVA_OPTS="-Xms512m -Xmx1024m -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=1024m -Djava.net.preferIPv4Stack=true"
Try increasing -XX:MaxMetaspaceSize to 1024m and -XX:MetaspaceSize to 256m; hopefully that works.
Also try to monitor your process: run jvisualvm (included in the JDK) and a UI will open; from there you will be able to get a lot of information.
Hope this helps.
Usually, this error is thrown when there is insufficient space to allocate an object in the Java heap: the garbage collector cannot make space available to accommodate a new object, and the heap cannot be expanded further.
Try the following steps to fix java.lang.OutOfMemoryError:
Step 1: Check the JBoss configuration settings. The settings depend on your system configuration, Java version, and JBoss version. Check the configuration here.
Step 2: Check the heap dump settings here.
Step 3: Check for a Java code / library problem.
Analyze your memory here, and see the JBoss memory profiler.
Other causes of this OutOfMemoryError are more complex and stem from programming errors.
I am trying to solve a memory issue with my Tomcat servers and I have some questions about memory usage.
When I check my process's memory usage with top, I see it is using 1 GB of physical memory. After creating a core dump using gdb, the core file size is 2.5 GB, and when analyzing the HPROF file created by jmap, it states that 240 MB is used.
So if top shows 1 GB, why does the HPROF file show only 240 MB? Where did the other 760 MB go?
Have you tried running jmap with the -heap:format option set? The JVM usually runs a GC before taking a dump, so the dump typically reflects only live objects.
Also, JVM memory is not just heap memory. It also contains the code cache, thread stacks, native method memory, and direct memory; even threads are not free. You can read more about it here. Just check whether all of these add up to the total.
I would suggest using VisualVM or YourKit and comparing the memory. Also, which GC are you using? Some GCs don't shrink the heap after it has grown, but if a GC was triggered during the heap dump it might have freed some memory (try G1GC).
I am running Tomcat 6.0.32 on RHEL 5.4 with JDK 1.6.0_23. I am running more than 15 applications, all of them small. My RAM is 8 GB and swap is 12 GB. I set the heap size from 512 MB to 4 GB.
The issue is that after a few hours or days of running, Tomcat stops providing service even though it is up and running, and the catalina.out log file shows a memory leak problem.
Now, my concern is that I need to provide a solution to the issue, or at least identify the application that is causing the memory leaks.
Could anyone explain how I can discover which application is causing the memory leak?
One option is to use heap dumps (see How to get a thread and heap dump of a Java process on Windows that's not running in a console) and analyze heap dump later on.
Another option is to analyse the process directly using tools like jmap, VisualVM, and similar.
You may use a combination of the jmap and jhat tools (both unsupported as of Java 8) to gather a heap dump (using jmap) and identify the top objects in the heap (using jhat). Try to correlate these objects with the application and identify the rogue one.
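A sketch of that workflow (<pid>, the file name, and the port are placeholders):

jmap -dump:live,format=b,file=heap.hprof <pid>
jhat -port 7000 heap.hprof

Then browse http://localhost:7000 and look at the instance counts and sizes per class to see which application's classes dominate.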
In some circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, taking the dump causes the application to stop responding, which causes problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB within 6 hours. In an attempt to find the memory usage trend, we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we had to restart it, and we were not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap for collecting heap dumps?
Thoughts on this are highly appreciated, since without the trend of memory usage it is very difficult to fix the issue.
Note: Our application runs on CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap. The dump files are much smaller (jcmd dumps only live objects by default, after a GC), but they are enough to give you an idea of where the leaks are.
At the time of writing this answer, there were bugs in Memory Analyzer and IBM HeapAnalyzer that prevented them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is to look at a single heap dump and use the stats to figure out what kind(s) of objects are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
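A quick way to get those stats without analyzing a full dump (a sketch; <pid> is a placeholder) is a class histogram of live objects:

jmap -histo:live <pid> | head -n 25

The top entries show which classes account for the most instances and bytes, which is often enough to identify what is filling the heap.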
You can use GDB to get a heap dump without running jmap on the target VM; however, this will still hang the application for the time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), that is still about 2 minutes of downtime for a 12 GB heap.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM, but it's not available yet. VMware's hot migration comes close, but it depends on your allocation rate not exceeding network bandwidth, and it doesn't save snapshots. Petition them to add it for you; it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap; it is the best way to track down memory leaks. However, collecting a heap dump is slow, let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to examine for the problem.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram > histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
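For the HotSpot (Oracle/OpenJDK) JVM, the API in question is com.sun.management.HotSpotDiagnosticMXBean. A minimal sketch (the class name HeapDumper and the output path are illustrative):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes an HPROF heap dump to outputFile; live=true dumps only reachable objects.
    public static void dumpHeap(String outputFile, boolean live) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(outputFile, live);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("/tmp/manual.hprof", true);
    }
}

Note that dumpHeap refuses to overwrite an existing file, so use a fresh path for each dump.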