How do you find a memory leak in Java (using, for example, JHat)? I have tried to load the heap dump up in JHat to take a basic look. However, I do not understand how I am supposed to find the root reference (ref) or whatever it is called. Basically, I can tell that there are several hundred megabytes of hash table entries (java.util.HashMap$Entry or something like that), but maps are used all over the place... Is there some way to search for large maps, or perhaps find general roots of large object trees?
[Edit]
Ok, I've read the answers so far, but let's just say I am a cheap bastard (meaning I am more interested in learning how to use JHat than in paying for JProfiler). Also, JHat is always available since it is part of the JDK. Unless of course there is no way with JHat other than brute force, but I can't believe that can be the case.
Also, I do not think I will be able to actually modify the application (adding logging of all map sizes) and run it for long enough to notice the leak.
I use the following approach to finding memory leaks in Java. I've used JProfiler with great success, but I believe that any specialized tool with graphing capabilities (diffs are easier to analyze in graphical form) will work.
Start the application and wait until it gets to a "stable" state, when all the initialization is complete and the application is idle.
Run the operation suspected of producing a memory leak several times, to allow any cache or DB-related initialization to take place.
Run the GC and take a memory snapshot.
Run the operation again. Depending on the complexity of the operation and the size of the data being processed, it may need to be run several to many times.
Run the GC and take another memory snapshot.
Run a diff of the two snapshots and analyze it.
Basically, the analysis should start from the greatest positive diff (by, say, object type) and work out what causes those extra objects to stick around in memory.
For web applications that process requests in several threads, the analysis gets more complicated, but nevertheless the general approach still applies.
I have done quite a number of projects specifically aimed at reducing the memory footprint of applications, and this general approach, with some application-specific tweaks and tricks, has always worked well.
Questioner here: I have got to say that getting a tool that does not take 5 minutes to answer any click makes it a lot easier to find potential memory leaks.
Since people are suggesting several tools (I only tried VisualVM, since I got that in the JDK, and the JProbe trial), I thought I should suggest a free/open source tool built on the Eclipse platform, the Memory Analyzer (sometimes referred to as the SAP memory analyzer), available at http://www.eclipse.org/mat/ .
What is really cool about this tool is that it indexed the heap dump when I first opened it, which allowed it to show data like retained heap without waiting 5 minutes for each object (pretty much all operations were tons faster than in the other tools I tried).
When you open the dump, the first screen shows you a pie chart with the biggest objects (counting retained heap), and one can quickly navigate down to the objects that are too big for comfort. It also has a "Find likely leak suspects" feature which I reckon can come in handy, but since the navigation was enough for me, I did not really get into it.
A tool is a big help.
However, there are times when you can't use a tool: the heap dump is so huge it crashes the tool, you are trying to troubleshoot a machine in some production environment to which you only have shell access, etc.
In that case, it helps to know your way around the hprof dump file.
Look for SITES BEGIN. This shows you what objects are using the most memory. But the objects aren't lumped together solely by type: each entry also includes a "trace" ID. You can then search for that "TRACE nnnn" to see the top few frames of the stack where the object was allocated. Often, once I see where the object is allocated, I find a bug and I'm done. Also, note that you can control how many frames are recorded in the stack with the options to -Xrunhprof.
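For reference, the agent is enabled with a flag along these lines (the jar name is a placeholder; heap=sites and depth are standard hprof options, with depth controlling how many frames each TRACE records):

java -Xrunhprof:heap=sites,depth=8 -jar myapp.jar

The agent writes java.hprof.txt in the working directory when the VM exits.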
If you check out the allocation site, and don't see anything wrong, you have to start backward chaining from some of those live objects to root objects, to find the unexpected reference chain. This is where a tool really helps, but you can do the same thing by hand (well, with grep). There is not just one root object (i.e., object not subject to garbage collection). Threads, classes, and stack frames act as root objects, and anything they reference strongly is not collectible.
To do the chaining, look in the HEAP DUMP section for entries with the bad trace id. This will take you to an OBJ or ARR entry, which shows a unique object identifier in hexadecimal. Search for all occurrences of that id to find who's got a strong reference to the object. Follow each of those paths backward as they branch until you figure out where the leak is. See why a tool is so handy?
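A hedged example of doing that chaining by hand with grep (the trace number and object ID are made up for illustration; real ones come from your own SITES table):

grep -n "TRACE 3042" java.hprof.txt      # stack frames for the suspect allocation site
grep -n "0x4a6b8c10" java.hprof.txt      # every record that mentions that object id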
Static members are a repeat offender for memory leaks. In fact, even without a tool, it'd be worth spending a few minutes looking through your code for static Map members. Can a map grow large? Does anything ever clean up its entries?
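To make that concrete, here is a minimal sketch of the pattern worth grepping for (the class and field names are hypothetical, not from the question):

import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {
    static class UserSession { /* payload */ }

    // Static fields are reachable from a GC root (the class), so every
    // entry in this map stays strongly reachable for the life of the JVM.
    private static final Map<String, UserSession> SESSIONS = new HashMap<>();

    public static void register(String id, UserSession s) {
        SESSIONS.put(id, s);  // entries go in on every request...
    }
    // ...but no code path ever calls SESSIONS.remove(id), so the map only grows.
}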
Most of the time in enterprise applications, the Java heap is larger than the ideal of at most 12 to 16 GB. I have found it hard to make the NetBeans profiler work directly on these big Java apps.
But usually this is not needed. You can use the jmap utility that comes with the JDK to take a "live" heap dump; that is, jmap will dump the heap after running a GC. Do some operation on the application, wait until the operation is completed, then take another "live" heap dump. Use tools like Eclipse MAT to load the heap dumps, sort the histogram, and see which objects have increased or which are the biggest. This will give a clue.
su processuser   # switch to the user the Java process runs as
/bin/jmap -dump:live,format=b,file=/tmp/2930javaheap.hprof 2930   # 2930 is the PID of the process
There is only one problem with this approach: huge heap dumps, even with the live option, may be too big to transfer to a development laptop, and opening them may need a machine with enough memory/RAM.
That is where the class histogram comes into the picture. You can dump a live class histogram with the jmap tool. This gives only the class histogram of memory usage; basically, it won't have the information to chain references. For example, it may put the char array at the top and the String class somewhere below it. You have to draw the connection yourself.
jdk/jdk1.6.0_38/bin/jmap -histo:live 60030 > /tmp/60030istolive1330.txt
Instead of taking two heap dumps, take two class histograms, as described above; then compare the class histograms and see which classes are increasing. See if you can relate the Java classes to your application classes. This will give a pretty good hint. Here is a Python script that can help you compare two jmap histogram dumps: histogramparser.py
Finally, tools like JConsole and VisualVM are essential to watch memory growth over time and see if there is a memory leak. Sometimes your problem may not be a memory leak, but simply high memory usage. For this, enable GC logging; use a more advanced and newer compacting GC like G1GC; and use JDK tools like jstat to watch the GC behaviour live:
jstat -gccause <pid> <optional interval in ms>
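For the GC logging mentioned above, flags along these lines applied to HotSpot JVMs of this era (JDK 6/7; treat the exact set as an assumption for your JVM version):

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log -jar myapp.jar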
Other references to google for: jhat, jmap, Full GC, Humongous allocation, G1GC.
There are tools that should help you find your leak, like JProbe, YourKit, AD4J or JRockit Mission Control. The last is the one that I personally know best. Any good tool should let you drill down to a level where you can easily identify what leaks, and where the leaking objects are allocated.
Using Hashtables, HashMaps, or similar is one of the few ways that you can actually leak memory in Java at all. If I had to find the leak by hand, I would periodically print the size of my HashMaps, and from there find the one where I add items and forget to delete them.
Well, there's always the low-tech solution of logging the size of your maps when you modify them, then searching the logs for maps growing beyond a reasonable size.
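A minimal sketch of that idea (the wrapper class and the logging threshold are made up for illustration):

import java.util.HashMap;

public class MonitoredMap<K, V> extends HashMap<K, V> {
    private final String name;

    public MonitoredMap(String name) { this.name = name; }

    @Override
    public V put(K key, V value) {
        V old = super.put(key, value);
        if (size() % 10_000 == 0) {  // log every 10,000 entries to keep the noise down
            System.err.println("Map '" + name + "' reached " + size() + " entries");
        }
        return old;
    }
    // Note: HashMap.putAll() does not delegate to put(), so a production
    // version would need to override that too.
}

Swap a suspect map for new MonitoredMap<>("userCache") and watch which names keep growing in the log.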
NetBeans has a built-in profiler.
You really need to use a memory profiler that tracks allocations. Take a look at JProfiler - their "heap walker" feature is great, and they have integration with all of the major Java IDEs. It's not free, but it isn't that expensive either ($499 for a single license) - you will burn $500 worth of time pretty quickly struggling to find a leak with less sophisticated tools.
You can find out by measuring the memory usage after calling the garbage collector multiple times:

Runtime runtime = Runtime.getRuntime();
while (true) {
    ...
    // Sample roughly every 4 seconds; sleeping is more reliable than
    // checking System.currentTimeMillis() % 4000 == 0, which can fire
    // several times within the same millisecond or never hit at all.
    Thread.sleep(4000);
    System.gc();
    float usage = (float) (runtime.totalMemory() - runtime.freeMemory()) / 1024 / 1024;
    System.out.println("Used memory: " + usage + "Mb");
}
If the output numbers stay roughly equal, there is no memory leak in your application; but if you see the memory usage numbers steadily increasing, there is a memory leak in your project. For example:
Used memory: 14.603279Mb
Used memory: 14.737213Mb
Used memory: 14.772224Mb
Used memory: 14.802681Mb
Used memory: 14.840599Mb
Used memory: 14.900841Mb
Used memory: 14.942261Mb
Used memory: 14.976143Mb
Note that it sometimes takes a while for memory held by things like streams and sockets to be released. You should not judge by the first outputs; you should test over a reasonable amount of time.
Check out this screencast about finding memory leaks with JProfiler.
It's a visual explanation of Dima Malenko's answer.
Note: though JProfiler is not freeware, the trial version can deal with the situation described here.
As most of us use Eclipse already for writing code, why not use the Memory Analyzer Tool (MAT) in Eclipse? It works great.
Eclipse MAT is a set of plug-ins for the Eclipse IDE which provides tools to analyze heap dumps from a Java application and to identify the memory problems in the application.
This helps the developer find memory leaks with the following features:
Acquiring a memory snapshot (Heap Dump)
Histogram
Retained Heap
Dominator Tree
Exploring Paths to the GC Roots
Inspector
Common Memory Anti-Patterns
Object Query Language
I recently dealt with a memory leak in our application. Sharing my experience here:
The garbage collector removes unreferenced objects periodically, but it never collects the objects that are still being referenced. This is where memory leaks can occur.
Here are some options for finding the still-referenced objects.
Use jvisualvm, which is located in the JDK/bin folder.
Option: watch the heap space.
If you see that the heap space keeps increasing, there is definitely a memory leak.
To find out the cause, you can use the memory sampler under the Sampler tab.
Get a Java heap histogram by using jmap (which is also available in the JDK/bin folder) at different points in the application's lifetime:
jmap -histo <pid> > histo1.txt
Here the object counts can be analyzed. If some objects are never garbage collected, that is a potential memory leak.
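For example (a hedged sketch; the file names are arbitrary), take a second histogram later and diff the two:

jmap -histo <pid> > histo2.txt
diff histo1.txt histo2.txt    # classes whose instance counts only ever grow are the suspects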
You can read about some of the most common causes of memory leaks in this article: Understanding Memory Leaks in Java.
I'm writing a web spider. It works well except there seems to be a memory leak. The program will run fine for about 15 minutes and then it will crash.
If I monitor it using the "Profile" function in NetBeans, I can see that the memory is increasing over time until eventually I get a java.lang.OutOfMemoryError and the program crashes completely.
The image below shows snapshots of the memory of objects used after one minute and after 15 minutes (right before it crashes). Is there any way to tell where these objects (my main culprits are byte[] and char[]) are being created, or what is still referencing them (and therefore preventing them from being destroyed by the garbage collector)?
Or do I have no idea what I'm talking about?
Thanks, I appreciate the help.
You're probably right on track with your assumption, but maybe not using the right tool?
I don't know NetBeans, but I know both YourKit Profiler and JProfiler to be very powerful tools for these kinds of analyses. You can walk the heap and analyse "hot spots". Both tools have a trial license, so you can try them out to see which one suits you best.
When you use the profiler to analyze the memory you can turn on recording of the stack traces for object allocation.
When you run in that mode, you can right click on the class and display the stack trace(s) where the objects are created. That should give you enough information to track down the memory leak.
You might want to read this article as well:
http://netbeans.org/community/magazine/html/04/profiler.html
We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the available heap size. This happens every 3 weeks, after which a server restart is needed to fix it.
Upon analysis of the heap dumps we find the problem to be objects used in JSPs.
Can JSP objects be the real cause of app server memory issues? How do we free up JSP objects (objects which are being instantiated using useBean or other tags)?
We have a clustered WebSphere app server with 2 nodes and an IHS.
EDIT: The findings above are based on the heap dump and nativestderr log analysis below, using the IBM Support Assistant.
nativestderr log analysis:
(chart: http://saregos.com/wp-content/uploads/2010/03/chart.jpg)
Heap dump analysis:
(image: heap dump analysis)
Heap dump analysis showing the immediate dominators (two levels up from the Hashtable entry in the image above):
(image: immediate dominators of the Hashtable entries)
The last image shows that the immediate dominators are in fact objects being used in JSPs.
EDIT2: More info available at http://saregos.com/?p=43
I'd first attach a profiling tool to tell you what these "objects" are that are taking up all the memory.
Eclipse has TPTP, or there is JProfiler, or JProbe.
Any of these should show the object heap creeping up and allow you to inspect it to see what is on the heap.
Then search the code base to find who is creating these.
Maybe you have a cache or tree/map object with elements in it, and you have only implemented the "equals()" method on those objects when you also need to implement "hashCode()".
This would then result in the map/cache/tree getting bigger and bigger till it falls over.
This is only a guess though.
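For what it's worth, here is a minimal sketch of that failure mode (the Key class is hypothetical):

import java.util.HashMap;
import java.util.Map;

class Key {
    private final int id;
    Key(int id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        return o instanceof Key && ((Key) o).id == id;
    }
    // hashCode() is NOT overridden, so two "equal" keys usually hash to
    // different buckets and are never even compared with equals().
}

public class LeakDemo {
    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            map.put(new Key(42), "value");  // meant to overwrite a single entry...
        }
        System.out.println(map.size());     // ...but prints roughly 100000, not 1
    }
}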
JProfiler would be my first call
JavaWorld has an example screenshot of what is in memory (source: javaworld.com), and a screenshot of the object heap building up and being cleaned up, hence the saw-tooth edge (source: javaworld.com).
UPDATE:
Ok, I'd look at...
http://www-01.ibm.com/support/docview.wss?uid=swg1PK38940
Heap usage increases over time, which leads to an OutOfMemory condition. Analysis of a heapdump shows that the following objects are taking up an increasing amount of space:
40,543,128 [304]    47 class com/ibm/wsspi/rasdiag/DiagnosticConfigHome
40,539,056 [56]      2 java/util/Hashtable 0xa8089170
40,539,000 [2,064] 511 array of java/util/Hashtable$Entry
 6,300,888 [40]      3 java/util/Hashtable$HashtableCacheHashEntry
Triggering the garbage collection manually doesn't solve your problem - it won't free resources that are still in use.
You should use a profiling tool (like JProfiler) to find your leaks. You probably have code that stores references in lists or maps that are never released during runtime, probably in static references.
If you run under the Sun Java 6 JVM, strongly consider using the jvisualvm program in the JDK to get an initial overview of what actually goes on inside the program. The snapshot comparison is really good at helping you track down which objects sneak in.
If the Sun 6 JVM is not an option, then investigate which profiling tools you have available. Trials can get you really far.
It can be something as simple as gigantic character arrays underlying substrings you are collecting in a list, e.g. for housekeeping.
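A sketch of that substring gotcha as it worked on the JVMs of this era (before Java 7u6, String.substring shared the parent string's backing char[]):

import java.util.ArrayList;
import java.util.List;

public class SubstringRetention {
    static final List<String> index = new ArrayList<>();

    static void remember(String hugeDocument) {
        // On pre-7u6 JVMs this 10-char substring kept the entire
        // multi-megabyte char[] of hugeDocument alive.
        index.add(hugeDocument.substring(0, 10));
        // The classic fix was new String(hugeDocument.substring(0, 10)),
        // which copied just the 10 characters.
    }
}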
I suggest reading Effective Java, chapter 2. Following it, together with a profiler, will help you identify the places where your application produces memory leaks.
Freeing up memory isn't the way to solve extensive memory consumption. Extensive memory consumption may be a result of one of two things:
not properly written code - the solution is to write it properly, so that it does not consume more than is needed; Effective Java will help here.
the application simply needs this much memory. Then you should increase the VM memory using -Xmx, -Xms, -XX:MaxHeapSize, and so on; see the example below.
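For instance (the sizes are arbitrary; pick ones that fit your hardware and measured footprint):

java -Xms512m -Xmx4g MyApp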
There is no specific way to free up objects allocated in JSPs, at least as far as I know. Rather than investigating such options, I'd rather focus on finding the actual problem in your application code and fixing it.
Some hints that might help:
Check the scope of your beans. Aren't you, e.g., storing something user- or request-specific in "application" scope (by mistake)?
Check the web session timeout settings in your web application and app server.
You mentioned the heap consumption grows gradually. If it's indeed so, try to see by how much the heap size grows under various user scenarios: grab a heap dump, run a test, let the session data time out, grab another dump, and compare the two. That might give you some idea where the objects on the heap come from.
Check your beans for any obvious memory leaks, for sure :)
EDIT: Checking for unreleased static resources that Daniel mentions is another worthwhile thing :)
As I understand it, those top-level memory-eaters are the cache storage and the objects stored in it. You should probably make sure that your cache frees objects when it takes up too much memory. You may want to use weak references if you need the cache only for objects that are alive anyway.
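A minimal sketch of the weak-reference idea (assuming nothing else holds the keys strongly, which is exactly what makes the entries collectible):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCache<K, V> {
    // An entry vanishes once its key is no longer strongly reachable
    // anywhere else, so the cache itself can never pin objects forever.
    private final Map<K, V> cache = new WeakHashMap<>();

    public void put(K key, V value) { cache.put(key, value); }
    public V get(K key)             { return cache.get(key); }
}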
Having analyzed a lightly loaded web application running in Tomcat using JMX Console, it turns out the "PS Old Gen" is growing slowly but constantly. It starts with 200 MB and grows around 80 MB/hour.
CPU is not an issue, it runs at 0-1% on average, but somewhere it leaks memory, so it will become unstable some days after deployment.
How do I find out what objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
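For example (a hedged sketch; the dump file name is arbitrary):

jmap -dump:format=b,file=heap.hprof <pid>   # write a binary heap dump
jhat heap.hprof                             # then browse it at http://localhost:7000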
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes the output from Java's heap profiler, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching Tomcat profiled directly from the IDE (for localhost), or remote profiling with a provided JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap as long as the additional "consumed space" disappears when a GC in the old generation happens.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC you use, and afterwards the used memory should go down a lot. Only if it does not go down after a GC should you be interested in what is still holding onto those objects.
JRockit Mission Control can analyze memory leaks while connected to JVM. No need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it here for free.