Solving an intermittent Garbage Collection problem - Java

I have a Spring enterprise app running on JDK 1.6 under Windows 2008. The app gets slow or unresponsive at random times. I suspect a memory leak and that the GC is kicking into overdrive.
How can I troubleshoot this without restarting the JVM with the java.exe -verbose:gc parameter? I really cannot shut this app down. I'm planning to put AppDynamics on it once I can restart it, but for now, what can I do? What are my options?

Start the application and run jconsole on the PID. While it's running, watch the heap in the console. When it nearly maxes out, take a heap dump. Download Eclipse MAT and parse the heap dump. If you notice the retained heap size is vastly less than the size of the actual dump file, parse the heap dump again with -keep_unreachable_objects set.
If the latter is true and you are doing a full GC often, you probably have some kind of leak going on. Keep in mind that when I say leak I don't mean a leak where the GC cannot reclaim memory; rather, you are somehow building large objects and making them unreachable often enough to cause the GC to consume a lot of CPU time.
If you were seeing true memory leaks you would see "GC overhead limit exceeded" errors.
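If attaching jconsole isn't convenient, the same heap numbers it charts can be read in-process via the standard JMX memory bean. A minimal sketch (the 90% threshold is an arbitrary choice for illustration, not anything from the thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatch {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        long used = heap.getUsed(); // bytes occupied by live objects plus garbage
        long max = heap.getMax();   // -1 if the maximum is undefined for this JVM

        System.out.printf("heap used: %d MB, max: %d MB%n",
                used / (1024 * 1024), max / (1024 * 1024));

        // A crude trigger point for taking a heap dump: heap is nearly full.
        if (max > 0 && used > 0.9 * max) {
            System.out.println("Heap is nearly maxed out - take a dump now");
        }
    }
}
```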

Related

Java 8 JVM hangs, but does not crash/ heap dump when out of memory

When running out of memory, Java 8 running Tomcat 8 never stops to produce a heap dump. Instead it just hangs as it maxes out memory. The server becomes very slow and unresponsive because of extensive GC as it slowly approaches max memory. The memory graph in JConsole flatlines after hitting the max. 64-bit Linux / java version "1.8.0_102" / Tomcat 8.
I have set -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath. Does anyone know how to force a heap dump instead of the JVM getting into unresponsive/very slow response mode?
You need to use -XX:+UseGCOverheadLimit. This tells the GC to throw an OOME (or dump the heap, if you have configured that) when the percentage of time spent garbage collecting gets too high. This should be enabled by default on a recent JVM ... but you might have disabled it.
You can adjust the "overhead" thresholds at which the collector gives up using -XX:GCTimeLimit=... and -XX:GCHeapFreeLimit=...; see https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
The effect of "overhead" limits is that your application gets the GC failures earlier. Hopefully, this avoids the "death spiral" effect as the GC uses a larger and larger proportion of time to collect smaller and smaller amounts of actual garbage.
The other possibility is that your JVM is taking a very long time to dump the heap. That might occur if the real problem is that your JVM is causing virtual memory thrashing because Java's memory usage is significantly greater than the amount of physical memory.
jmap is the utility that will create a heap dump for any running JVM. It allows you to create a heap dump before a crash.
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr014.html
It will be a matter of timing, though, to know when you should create it. You can take successive heap dumps and use tools to compare them. I highly recommend the Eclipse Memory Analyzer Tool and its dominator tree view for identifying potential memory issues (https://www.eclipse.org/mat/).

Java : Get heap dump without jmap or without hanging the application

In a few circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, jmap causes the application to stop responding, which causes problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB in 6 hours. In an attempt to find the memory usage trend, we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we need to restart it, and so we are not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap for collecting heap dumps?
Thoughts on this are highly appreciated, since without the trend of memory usage it is very difficult to fix the issue.
Note: Our application runs on CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! Dump files are much smaller, but they're enough to give you an idea of where the leaks are.
At the time of writing this answer, there are bugs in Memory Analyzer and IBM HeapAnalyzer that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get a heap dump without running jmap on the target VM; however, this will still hang the application for the time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), a 12 GB dump is still about 2 minutes of downtime.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look for a problem.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram filename=histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
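On the HotSpot/Oracle JVM, the API in question is com.sun.management.HotSpotDiagnosticMXBean. A minimal sketch (the output path is just an example; dumpHeap fails if the file already exists):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class ApiHeapDump {
    public static void main(String[] args) throws Exception {
        // Obtain the HotSpot diagnostic bean from the platform MBean server.
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Second argument: true = dump only live objects (forces a full GC first).
        diag.dumpHeap("/tmp/manual-dump.hprof", true);
        System.out.println("dump written");
    }
}
```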

Access Memory Usage of JVM from within my Application?

I have a Grails/Spring application which runs in a servlet container on a web server like Tomcat. Sometimes my app crashes because the JVM reaches its maximum allowed memory (Xmx).
The error that follows is a "java.lang.OutOfMemoryError" because the Java heap space is full.
To prevent this error I want to check from within my app how much memory is in use and how much memory the current JVM has remaining.
How can I access these parameters from within my application?
Try to understand when the OOM is thrown instead of trying to manipulate it through the application. And also, even if you are able to capture those values from within your application, how would you prevent the error? By calling GC explicitly? Know that,
The Java Virtual Machine specification says that
OutOfMemoryError: The Java virtual machine implementation has run out of either virtual or physical memory, and the automatic storage manager was unable to reclaim enough memory to satisfy an object creation request.
Therefore, GC is guaranteed to run before an OOM is thrown. Your application throws an OOME after it has just run a full garbage collection and discovered that it still doesn't have enough free heap to proceed.
This would be a memory leak, or in general your application could have a high memory requirement. If the OOM is thrown within a short span of starting the application, it is usually that the application needs more memory; if your server runs fine for some time and then throws an OOM, it is most likely a memory leak.
To discover the memory leak, use the tools mentioned by others above. I use New Relic to monitor my application and check the frequency of GC runs.
PS Scavenge (aka minor GC, the parallel object collector) runs on the young generation only, and PS MarkSweep (aka major GC, the parallel mark-and-sweep collector) is for the old generation. When both run, it's considered a full GC. Minor GC runs are pretty frequent; a full GC is comparatively less frequent. Note the consumption of the different heap spaces to analyze your application.
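The per-collector run counts and times described above are also exposed via JMX, so you can watch GC frequency without an external monitoring tool. A small sketch (the bean names vary by collector: "PS Scavenge"/"PS MarkSweep" for the parallel collector, "G1 Young Generation"/"G1 Old Generation" for G1):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector; counts and times are cumulative since JVM start.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Sampling these counters periodically (e.g. once a minute) and diffing them gives you the GC frequency trend the answer recommends checking.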
You can also try the following option:
If you get OOMs too often, start Java with the correct options, get a heap dump, and analyze it with jhat or with the Memory Analyzer from Eclipse (http://www.eclipse.org/mat/)
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path to dump file>
You can try the Grails Melody plugin, which displays this info at the URL /monitoring relative to your context.
To prevent this error I want to check from within my app how much
memory is in use and how much memory the current JVM has remaining.
I think that it is not the best idea to proceed this way. Much better is to investigate what actually breaks your app and eliminate the error or add some limits there. There could be many different scenarios, and your app can become unpredictable. So to sum up: capturing the memory level for monitoring purposes is OK (though there are many dedicated tools for that), but in my opinion depending on these values in application logic is not recommended and bad practice.
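For monitoring purposes, the numbers the question asks about are available from java.lang.Runtime. A minimal sketch:

```java
public class MemoryCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // the -Xmx limit, in bytes
        long total = rt.totalMemory(); // heap currently reserved from the OS
        long free = rt.freeMemory();   // unused portion of totalMemory

        long used = total - free;
        long remaining = max - used;   // roughly how much can still be allocated

        System.out.printf("used: %d MB, remaining: %d MB%n",
                used / (1024 * 1024), remaining / (1024 * 1024));
    }
}
```

Note that freeMemory() only counts the free part of the currently reserved heap; the heap can still grow up to maxMemory(), which is why "remaining" is computed against the max.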
To do this you would use a profiler to profile your application and JVM, rather than having code to monitor such metrics inside your application.
Profiling is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls.
Here are some good java profilers:
http://visualvm.java.net/ (Free)
http://www.ej-technologies.com/products/jprofiler/overview.html (Paid)

How am I getting a "java.lang.OutOfMemoryError: Java heap space" if my heap isn't full?

I'm debugging a fairly large project I've been working on (but did not originally create) and I've noticed that sometimes it crashes with an OutOfMemoryError. The code is loading a lot of data from files so this isn't entirely surprising in general.
However, what confuses me is that I'm using VisualVM 1.3.4 to profile the program, and it behaves inconsistently. Most times I've run it, the heap gradually expands up to about 2GB (the computer has 16GB of RAM; it's for academic research) with the used heap spiking higher and higher underneath it. Around 2GB, it will crash. The program isn't processing more information as time goes on, though, so it shouldn't grow the heap to 2GB in just a few minutes.
Sometimes, though, I get a sudden crash after about 30 seconds, with a heap size of 250MB and only about 100MB in use. How am I getting a java.lang.OutOfMemoryError: Java heap space if my heap isn't full?
Edit: I'm using Eclipse to run the program, and I have the VisualVM plugin so it gets launched automatically. Also, I'm using Java 7.
Start the application with the VM argument -XX:+HeapDumpOnOutOfMemoryError.
Analyse the Heap Dump and find out what is causing the issue.
Eclipse MAT is an excellent tool for finding out such issues.
You need to set the JVM's min and max heap memory:
set JAVA_OPTS="-Xms128m -Xmx256m"
Something like that, but with bigger values like 2G, 4G, whatever.
Later edit: As you all know, you can't force the JVM to run the garbage collector (though you can request it), but there are ways of convincing it to get rid of some items by nulling their references. Another thing to watch is database objects that might be lazily initialised. That error can appear when you try to create an object exceeding the max heap memory.
Another idea: a developer may have programmatically thrown the OutOfMemoryError in some method for some reason. When you reach that part of the code, that's what you get (search the project).
There can be at least 2 reasons for the application to crash with an OutOfMemoryError:
Your Java heap is just too small for the amount of data it needs to process. Then you can either increase it as Matei suggested, or analyze the heap dump as Ajay suggested.
Your application leaks memory, which means that it leaves some unneeded data in memory after processing it. Increasing the heap will not help in the long run. Your options are either heap dump analysis (again) or a specialised memory leak detection tool, such as Plumbr.
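As a contrived illustration of the second case: a cache that is only ever added to keeps every processed item reachable from a static root, so no amount of GC can reclaim it (the class and field names here are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyProcessor {
    // Static reference: everything added here stays reachable for the JVM's lifetime.
    private static final List<byte[]> processed = new ArrayList<>();

    static void process(byte[] data) {
        // ... do the real work ...
        processed.add(data); // "remembered" but never removed -> classic leak pattern
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            process(new byte[1024]); // 1 KB per call, retained forever
        }
        System.out.println("retained items: " + processed.size());
    }
}
```

In a heap dump, this shows up as the static list (and its byte arrays) dominating the retained heap, which is exactly what MAT's dominator tree view surfaces.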
Turned out the crash was caused by using the OpenJDK JRE rather than Oracle's JRE. I don't know exactly what the bug is in OpenJDK that makes it crash like this, but changing to Oracle's JRE ultimately solved the problem.
(I was using OpenJDK because I'm on a Linux computer that someone was using for open-source work before me. When I mentioned the crash to him he had the idea that that might be the cause. He was right.)
Do you have a 32-bit operating system without large-memory support (PAE on Windows, hugemem kernel on Linux, ...)? If so, you may be hitting the 2GB per-process memory limit on 32-bit systems.
As a workaround, try setting the JVM parameter -Xss192k to allocate 192 KB of stack space per thread, and the parameter -Xmx1024m to use no more than 1GB of heap.

Java memory usage stays well within max heap size, but my system memory is slowly being eaten

I'm relatively new to Java programming so please bear with me trying to understand what's going on here.
The application I've developed uses a max heap size of 256MB. With the GC being done, I never run into any problems with this. The used heap builds up when a big image is loaded and gets freed nicely when it is unloaded. Out of memory errors are something that I've yet to see.
However... running the application for about an hour, I notice that the process uses more and more system memory that never gets freed. So the application starts with around 160MB used and builds up as the heap size grows, but when the heap size shrinks, the system memory used just keeps growing, up until the process uses 2.5GB and my system starts to become slow.
Now I'm trying to understand the surviving generations bit. It seems the heap size and surviving generations aren't really connected to each other? My application builds up a lot of surviving generations, but I never run out of memory according to the used memory by the application itself. But the JVM keeps eating memory, never giving it back to the system.
I've been searching around the web, sometimes finding information that is somewhat useful. But what I don't get is that the application stays well within the heap size boundaries and still my system memory is being eaten up.
What is going on here?
I'm using NetBeans IDE on OSX Lion with the latest 1.6 JDK available.
The best way to start would be jvisualvm from the JDK on the same machine. Attach to your running program and enable profiling.
Another option is to try running the application in debug mode and stop it once in a while to inspect your data structures. This sounds like a broken/weird practice but usually if you have a memory leak it becomes very obvious where it is.
Good luck!
