Analyze Tomcat Heap in detail on a production System - java

Having analyzed a light-load web application running in Tomcat using JMX Console, it turns out the "PS Old Gen" is growing slowly but constantly. It starts at 200 MB and grows by around 80 MB/hour.
CPU is not an issue (it runs at 0-1% on average), but something is leaking memory, so the application will become unstable a few days after deployment.
How do I find out what objects are allocated on the heap? Are there any good tutorials or tools you know of?

You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
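For reference, a typical invocation looks roughly like this (a sketch assuming a JDK 8-era toolchain, where jhat still ships with the JDK; the PID and file path are placeholders):
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
jhat -port 7000 /tmp/heap.hprof
jhat then serves the dump on http://localhost:7000, where you can browse instance counts and follow references in the browser.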

I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes output from Java's heap profiler, HPROF, and analyzes it to show you the most likely memory leaks.

You can use the NetBeans profiler. It has two modes: launching Tomcat profiled directly from the IDE (for localhost), or remote profiling using a provided JAR and some run configuration on the server.
I used it in a project with a memory leak and it was useful.

See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?

What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the additional "consumed" space disappears whenever a GC runs in the old generation.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC you use, and afterwards the used memory should drop significantly. Only if it does not go down after a GC should you be interested in what is still holding onto those objects.
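One simple way to check this without waiting for a full collection is to request a GC yourself and watch the old generation in JMX Console, for example with the "Perform GC" button in JConsole or, on JDK 7+, with jcmd (the PID is a placeholder):
jcmd <pid> GC.run
If old-generation usage drops back near its post-startup level afterwards, you are most likely seeing normal garbage accumulation rather than a leak.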

JRockit Mission Control can analyze memory leaks while connected to the JVM. There is no need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it for free.

Related

What could be causing a 'java.lang.outOfMemoryError: java heap space' error in a Spring Boot GraphQL application [duplicate]

How do you find a memory leak in Java (using, for example, JHat)? I have tried to load the heap dump up in JHat to take a basic look. However, I do not understand how I am supposed to be able to find the root reference (ref) or whatever it is called. Basically, I can tell that there are several hundred megabytes of hash table entries ([java.util.HashMap$Entry or something like that), but maps are used all over the place... Is there some way to search for large maps, or perhaps find general roots of large object trees?
[Edit]
Ok, I've read the answers so far but let's just say I am a cheap bastard (meaning I am more interested in learning how to use JHat than to pay for JProfiler). Also, JHat is always available since it is part of the JDK. Unless of course there is no way with JHat but brute force, but I can't believe that can be the case.
Also, I do not think I will be able to actually modify (adding logging of all map sizes) and run it for long enough for me to notice the leak.
I use the following approach to finding memory leaks in Java. I've used JProfiler with great success, but I believe that any specialized tool with graphing capabilities (diffs are easier to analyze in graphical form) will work.
1. Start the application and wait until it gets to a "stable" state, when all the initialization is complete and the application is idle.
2. Run the operation suspected of producing a memory leak several times to allow any cache or DB-related initialization to take place.
3. Run GC and take a memory snapshot.
4. Run the operation again. Depending on the complexity of the operation and the sizes of the data processed, it may need to be run several to many times.
5. Run GC and take a memory snapshot.
6. Run a diff of the two snapshots and analyze it.
Basically the analysis should start from the greatest positive diff by, say, object type, and find what causes those extra objects to stick in memory.
For web applications that process requests in several threads the analysis gets more complicated, but the general approach still applies.
I did quite a number of projects specifically aimed at reducing the memory footprint of applications, and this general approach, with some application-specific tweaks and tricks, always worked well. If you do not have a profiler at hand, the same idea can be approximated with plain JDK tools, as sketched below.
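A rough command-line approximation of the snapshot-diff idea, using only the jmap that ships with the JDK (the PID and file names are placeholders):
jmap -histo:live <pid> > histo-before.txt
(run the suspect operation a number of times)
jmap -histo:live <pid> > histo-after.txt
Compare the two files (with diff or a small script) and look at the classes whose instance counts and byte totals grew the most; the -histo:live option forces a GC first, so you are comparing only reachable objects.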
Questioner here. I have to say that getting a tool that does not take five minutes to answer any click makes it a lot easier to find potential memory leaks.
Since people are suggesting several tools (I only tried VisualVM, since I got that with the JDK, and a JProbe trial), I thought I should suggest a free/open-source tool built on the Eclipse platform, the Memory Analyzer (sometimes referred to as the SAP Memory Analyzer), available at http://www.eclipse.org/mat/ .
What is really cool about this tool is that it indexed the heap dump when I first opened it, which allowed it to show data like retained heap without waiting five minutes for each object (pretty much all operations were tons faster than the other tools I tried).
When you open the dump, the first screen shows you a pie chart with the biggest objects (counting retained heap), and you can quickly navigate down to the objects that are too big for comfort. It also has a "find likely leak suspects" report, which I reckon can come in handy, but since the navigation was enough for me I did not really get into it.
A tool is a big help.
However, there are times when you can't use a tool: the heap dump is so huge it crashes the tool, you are trying to troubleshoot a machine in some production environment to which you only have shell access, etc.
In that case, it helps to know your way around the hprof dump file.
Look for SITES BEGIN. This shows you what objects are using the most memory. But the objects aren't lumped together solely by type: each entry also includes a "trace" ID. You can then search for that "TRACE nnnn" to see the top few frames of the stack where the object was allocated. Often, once I see where the object is allocated, I find a bug and I'm done. Also, note that you can control how many frames are recorded in the stack with the options to -Xrunhprof.
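For example, allocation-site recording is enabled with the old HPROF agent roughly like this (a sketch; exact agent options vary by JDK version, and HPROF was removed in JDK 9):
java -Xrunhprof:heap=sites,depth=10 -jar yourapp.jar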
If you check out the allocation site, and don't see anything wrong, you have to start backward chaining from some of those live objects to root objects, to find the unexpected reference chain. This is where a tool really helps, but you can do the same thing by hand (well, with grep). There is not just one root object (i.e., object not subject to garbage collection). Threads, classes, and stack frames act as root objects, and anything they reference strongly is not collectible.
To do the chaining, look in the HEAP DUMP section for entries with the bad trace id. This will take you to an OBJ or ARR entry, which shows a unique object identifier in hexadecimal. Search for all occurrences of that id to find who's got a strong reference to the object. Follow each of those paths backward as they branch until you figure out where the leak is. See why a tool is so handy?
Static members are a repeat offender for memory leaks. In fact, even without a tool, it'd be worth spending a few minutes looking through your code for static Map members. Can a map grow large? Does anything ever clean up its entries?
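The pattern to grep for looks something like this (a purely illustrative class; the names are hypothetical, not taken from the question):
import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {
    // Classic leak: entries are added for every request/session,
    // but nothing ever removes them, so they stay strongly reachable forever.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void remember(String sessionId, byte[] payload) {
        CACHE.put(sessionId, payload); // grows without bound
    }

    // The fix is to remove entries when they are no longer needed,
    // or to use a bounded or expiring cache instead of a plain HashMap.
    public static void forget(String sessionId) {
        CACHE.remove(sessionId);
    }
}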
Most of the time, in enterprise applications the Java heap is larger than the ideal maximum of 12 to 16 GB. I have found it hard to make the NetBeans profiler work directly on these big Java apps.
But usually this is not needed. You can use the jmap utility that comes with the JDK to take a "live" heap dump, i.e. jmap will dump the heap after running a GC. Do some operation on the application, wait until the operation is completed, then take another "live" heap dump. Use a tool like Eclipse MAT to load the heap dumps, sort the histogram, and see which objects have increased or which are the largest; this will give a clue.
su <process user>
/bin/jmap -dump:live,format=b,file=/tmp/2930javaheap.hprof 2930   (PID of the process)
There is only one problem with this approach: huge heap dumps, even with the live option, may be too big to transfer to a development laptop and may need a machine with enough memory/RAM to open.
That is where the class histogram comes into the picture. You can dump a live class histogram with the jmap tool. It gives only a class histogram of memory usage; it basically does not have the information needed to chain references. For example, it may put char arrays at the top and the String class somewhere below, and you have to draw the connection yourself.
jdk/jdk1.6.0_38/bin/jmap -histo:live 60030 > /tmp/60030istolive1330.txt
Instead of taking two heap dumps, take two class histograms as described above; then compare the class histograms and see which classes are increasing. See if you can relate the Java classes to your application classes. This will give a pretty good hint. Here is a Python script that can help you compare two jmap histogram dumps: histogramparser.py
Finally, tools like JConsole and VisualVM are essential to see memory growth over time and to see whether there is a memory leak at all. Sometimes your problem may not be a memory leak but simply high memory usage. For this, enable GC logging, use a more modern compacting GC like G1GC, and use JDK tools like jstat to watch GC behaviour live:
jstat -gccause <pid> <optional time interval>
Other references to google for: jhat, jmap, Full GC, Humongous allocation, G1GC.
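For instance, on a Java 8-era JVM the GC logging and G1 suggestions might look like this (flag names changed in Java 9+, where -Xlog:gc* replaces the PrintGC options; paths and the heap size are placeholders):
java -Xmx4g -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/app/gc.log -jar yourapp.jar
jstat -gccause <pid> 5000
The jstat line prints generation occupancy and the current and last GC cause every 5 seconds, which makes steady old-generation growth easy to spot.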
There are tools that should help you find your leak, like JProbe, YourKit, AD4J or JRockit Mission Control. The last is the one that I personally know best. Any good tool should let you drill down to a level where you can easily identify what leaks, and where the leaking objects are allocated.
Using Hashtables, HashMaps or similar is one of the few ways that you can actually leak memory in Java at all. If I had to find the leak by hand I would periodically print the sizes of my HashMaps, and from there find the one where I add items and forget to delete them.
Well, there's always the low tech solution of adding logging of the size of your maps when you modify them, then search the logs for which maps are growing beyond a reasonable size.
NetBeans has a built-in profiler.
You really need to use a memory profiler that tracks allocations. Take a look at JProfiler - their "heap walker" feature is great, and they have integration with all of the major Java IDEs. It's not free, but it isn't that expensive either ($499 for a single license) - you will burn $500 worth of time pretty quickly struggling to find a leak with less sophisticated tools.
You can find out by measuring memory usage after requesting garbage collection multiple times:
Runtime runtime = Runtime.getRuntime();
long lastSample = 0L;
while (true) {
    // ... application work ...
    if (System.currentTimeMillis() - lastSample >= 4000) {
        lastSample = System.currentTimeMillis();
        System.gc(); // request a collection so successive samples are comparable
        float usage = (runtime.totalMemory() - runtime.freeMemory()) / 1024f / 1024f;
        System.out.println("Used memory: " + usage + "Mb");
    }
}
If the output numbers stay roughly equal, there is no memory leak in your application, but if you see the memory usage numbers steadily increasing, there is a memory leak in your project. For example:
Used memory: 14.603279Mb
Used memory: 14.737213Mb
Used memory: 14.772224Mb
Used memory: 14.802681Mb
Used memory: 14.840599Mb
Used memory: 14.900841Mb
Used memory: 14.942261Mb
Used memory: 14.976143Mb
Note that it sometimes takes a while for memory to be released by things like streams and sockets, so you should not judge by the first outputs; observe the numbers over a reasonable period of time.
Check out this screencast about finding memory leaks with JProfiler.
It is a visual explanation of Dima Malenko's answer.
Note: JProfiler is not freeware, but the trial version can handle this situation.
Since most of us already use Eclipse for writing code, why not use the Memory Analyzer Tool (MAT) in Eclipse? It works great.
Eclipse MAT is a set of plug-ins for the Eclipse IDE that provides tools to analyze heap dumps from a Java application and to identify memory problems in the application.
It helps the developer find memory leaks with the following features:
Acquiring a memory snapshot (Heap Dump)
Histogram
Retained Heap
Dominator Tree
Exploring Paths to the GC Roots
Inspector
Common Memory Anti-Patterns
Object Query Language
I have recently dealt with a memory leak in our application, so I am sharing my experience here.
The garbage collector removes unreferenced objects periodically, but it never collects objects that are still being referenced. This is where memory leaks can occur.
Here are some options for finding the still-referenced objects.
Using jvisualvm, which is located in the JDK/bin folder.
Option: watch the heap space.
If you see that the heap space keeps increasing, there is very likely a memory leak.
To find out the cause, you can use the memory sampler under the Sampler tab.
Get a Java heap histogram by using jmap (also available in the JDK/bin folder) at different points in the application's lifetime:
jmap -histo <pid> > histo1.txt
Here the object counts can be analyzed. If some objects keep growing and are never garbage collected, they are potential memory leaks.
You can read about some of the most common causes of memory leaks in this article: Understanding Memory Leaks in Java

java.lang.OutOfMemoryError: Java heap space - Java 1.8 application running on Linux server [duplicate]


JConsole heap dump much smaller than memory usage

We have a few containers running Java processes with Docker. One thing we've been noticing is the huge amount of memory taken up just by running a simple Spring Boot app, without even including our own code (just to try to get some kind of memory profile independent of any issues we might introduce).
What I saw was that the memory consumed by Docker/the JVM was hovering around 2.5 GB. We did have a decent number of extra dependencies included (Camel, Hibernate, some Spring Boot deps), but that wasn't what really threw me off. What threw me off was that despite Docker saying the app consumed 2.5 GB of memory, running JConsole against it showed it consuming up to 1 GB (down to ~200 MB after a GC and slowly climbing). The memory footprint reported by Docker remained where it was after the GC as well (2.5 GB).
Furthermore, when I dumped the heap to see what kinds of objects are taking up that space, it looks like the heap was only 33 MB after I loaded the .hprof file into MAT. None of this makes much sense to me. Currently, I'm looking at the non-heap space in JConsole reported at 115 MB while the heap space is at 331 MB.
I've already read a ton (on SO and other sites) about the JVM memory regions, and some of it specifically says that heap dumps might be smaller, but nothing was this far off as far as I could tell. Beyond that, many of the suggested things to watch for were that a GC runs whenever a heap dump is taken, and that MAT has a setting to show or hide unreachable objects. All of this was taken into account before posting here, and now I just feel like something else is at play that I can't capture myself and haven't found online.
I fully expect that the numbers might be a little off, but it seems extreme that they're off by a factor of 10 in the best case and by nearly a factor of 100 when looking at the Docker-reported memory usage.
Does anyone know what I might be missing here?
EDIT: This is also an app running with Java 8, not yet running with Java 11. It's on the JIRA board to do but not yet planned for.
EDIT2: Adding screenshots. Spike in the JConsole screen shot is from running GC.
JConsole gives you the amount of committed memory: 3311616 KiB ~= 3 GiB.
This is how much memory your Java process consumes, as seen by the OS.
It is unrelated to how much heap is currently in use to hold Java objects, which JConsole also reports: 130237 kB ~= 130 MiB.
It is also unrelated to how many objects are actually alive: by default MAT removes unreachable objects when you load the heap dump. You can change this under Preferences -> Memory Analyzer -> Keep Unreachable Objects (see the MAT documentation). So if you have a lot of short-lived objects, the difference can be quite massive.
I see that it also reports a max heap of about 9 GiB. That means you have set the -Xmx parameter to a large value.
HotSpot GCs are not very good at giving memory back to the OS. They tend to use all the space available to them (the max heap size, set by -Xmx) and then never decommit the heap, effectively keeping it reserved for the Java process instead of releasing it to the OS.
If you want to minimize the memory footprint of your process from the OS perspective, I recommend that you set a lower -Xmx, maybe -Xmx1g, so as not to let the heap grow too much (of course, -Xmx also needs to be high enough to accommodate your application's workload!).
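For example, something along these lines (the exact value is workload-dependent and purely illustrative):
java -Xmx1g -jar yourapp.jar
With a 1 GiB cap, the committed heap (and with it most of the Docker-reported footprint) can no longer drift toward the 9 GiB maximum.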
If you really want an adaptive heap, you can also switch to G1 (-XX:+UseG1GC) and a more recent Java version, as the HotSpot team has delivered some improvements in this area recently.
Dave
OS monitoring tools will show you the amount of memory allocated by a process. So the figure you saw means that your Java process has about 2.664 GB of memory allocated (Java heap + metaspace).
JConsole shows you the memory that your code is "consuming" (ignoring the metaspace).
I see two possible explanations:
You have set -Xms to a huge value.
You have a lot of static code (or other content) loaded into your metaspace.
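A quick way to check which of the two it is (assuming JDK tools on the host; the PID is a placeholder):
jcmd <pid> VM.flags
Look for InitialHeapSize (-Xms) and MaxHeapSize (-Xmx) in the output; add -all to see every flag, including the metaspace limits.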

Memory leaks in my Java server application [duplicate]


Java : Get heap dump without jmap or without hanging the application

In some circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, jmap causes the application to stop responding and causes problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB within 6 hours. In an attempt to find the memory usage trend we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we need to restart it, and so we are not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap to collect heap dumps?
Thoughts on this are highly appreciated, since without the trend of memory usage it is very difficult to fix the issue.
Note: Our application runs on CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! The dump files are much smaller, but they are enough to give you an idea of where the leaks are.
At the time of writing this answer, there are bugs in Memory Analyzer and IBM HeapAnalyzer that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get the heap dump without running jmap on the target VM; however, this will still hang the application for the amount of time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), that is still about 2 minutes of downtime.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
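The usual recipe from that post is roughly the following (a sketch: gcore writes a core file, and jmap can then extract an hprof dump from the core offline, so the live JVM is only paused while the core is written; paths and the PID are placeholders, and the java executable must match the one running the process):
gcore -o /tmp/jvm-core <pid>
jmap -dump:format=b,file=/tmp/heap.hprof /usr/bin/java /tmp/jvm-core.<pid>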
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look for a problem.
Here's the command for collecting a histogram.
jcmd <pid> GC.class_histogram filename=histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
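For the HotSpot/Oracle JVM, the API-triggered dump boils down to something like this (a minimal sketch using the com.sun.management diagnostic bean; the output path is a placeholder and the target file must not already exist):
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dumpHeap(String filePath, boolean liveObjectsOnly) throws IOException {
        // Look up the HotSpot-specific diagnostic MXBean on the platform MBean server
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Writes an hprof-format dump; with liveObjectsOnly=true only reachable objects are included
        bean.dumpHeap(filePath, liveObjectsOnly);
    }

    public static void main(String[] args) throws IOException {
        dumpHeap("/tmp/app-heap.hprof", true);
    }
}
Like jmap and jcmd, this still pauses the JVM while the dump is written, so it does not remove the stop-the-world cost, but it lets the application trigger dumps on its own schedule without shelling out to external tools.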
