How can I better investigate a memory leak in NetBeans - Java

I'm writing a web spider. It works well except there seems to be a memory leak. The program will run fine for about 15 minutes and then it will crash.
If I monitor it using the "Profile" function in NetBeans, I can see that the memory is increasing over time until eventually I get a java.lang.OutOfMemoryError and the program crashes completely.
The image below shows snapshots of the objects in memory after one minute and after 15 minutes (right before the crash). Is there any way to tell where these objects (my main culprits are byte[] and char[]) are being created, or what is still referencing them (and therefore preventing them from being collected by the garbage collector)?
Or do I have no idea what I'm talking about?
Thanks, I appreciate the help.

You're probably right on track with your assumption, but maybe not using the right tool?
I don't know NetBeans, but I know both YourKit Profiler and JProfiler to be very powerful tools for these kinds of analyses. You can walk the heap and analyze "hot spots". Both tools have a trial license, so you can try them out and see which one suits you best.

When you use the profiler to analyze memory, you can turn on recording of stack traces for object allocations.
When you run in that mode, you can right-click on a class and display the stack trace(s) where its objects are created. That should give you enough information to track down the memory leak.
You might want to read this article as well:
http://netbeans.org/community/magazine/html/04/profiler.html
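If the NetBeans profiler isn't giving you allocation sites, the hprof agent that ships with the JDK (up to Java 8) can record them from the command line. A minimal sketch, assuming your spider's entry point is a hypothetical Spider class:

java -agentlib:hprof=heap=sites,depth=10 Spider

When the JVM exits, the generated java.hprof.txt contains a SITES section; the byte[] and char[] entries there carry the stack traces of the code that allocated them.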

Related

How to analyze a heap dump with common leak suspects

The application is hitting a slowness issue and has generated some heap dump files. The heap dump file is 1.2 GB, and I need to run ha456.jar (IBM HeapAnalyzer) with 8.4 GB of RAM just to open it.
Previously, when analyzing a heap dump, I would look at the biggest leak size and check the leak suspect value, and I could see which class or which method of my application was holding the big memory. Then I would try to fix the code so that it ran with better performance.
This time, I can't figure out which module/method of my application is causing the out-of-memory issue. The following are some screenshots from my HeapAnalyzer:
To me, these are just common classes, for example java/lang/Object, java/lang/Long, or java/util/HashMap. I can't tell which method of my application is causing the out-of-memory error.
I'd appreciate your advice on how to analyze this.
Finding a memory leak is difficult even for someone sitting in front of the code, let alone from a distance. So I can only give you some suggestions:
You have a heap dump: filter it down to your own classes and analyze which ones have the most instances.
Run your application and monitor it with VisualVM; exercise the application a little and then force a GC run. Nine times out of ten, the objects whose counts do not decrease significantly, or do not reset completely, are your memory leak (see the histogram sketch below).
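If you prefer the command line, you can approximate the same force-a-GC-and-compare workflow with jcmd (the PID placeholder below is whatever jps reports for your application):

jcmd <pid> GC.class_histogram > before.txt
jcmd <pid> GC.run
jcmd <pid> GC.class_histogram > after.txt
diff before.txt after.txt

Classes whose instance counts stay high in after.txt survived the collection and are good leak candidates.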
This may be happening because a lot of records of type Long are read from somewhere like a database or a queue. There could be a Cartesian join or something of that sort. Once I had a ton of strings causing an OOM, and the culprit was a logger accumulating log messages.
A couple of thoughts:
When you get the OOM error, trace it back to the suspect method.
Get a thread dump and see which threads are active and what they are executing (see the jstack example below).
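For the thread dump, jstack from the JDK is the usual tool; the -l option adds information about held locks:

jstack -l <pid> > threads.txt

Take a few dumps some seconds apart and compare them: threads that stay busy in the same application frames deserve a closer look.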

Java: Get heap dump without jmap or without hanging the application

In a few circumstances, our application uses around 12 GB of memory.
We tried to get a heap dump using the jmap utility. Since the application is using several GB of memory, jmap causes it to stop responding, which causes problems in production.
In our case the heap usage suddenly increases from 2-3 GB to 12 GB in 6 hours. In an attempt to find the memory usage trend, we tried to collect a heap dump every hour after restarting the application. But, as said, since jmap causes the application to hang, we have to restart it, and so we are not able to get the trend of memory usage.
Is there a way to get a heap dump without hanging the application, or is there a utility other than jmap to collect heap dumps?
Thoughts on this are highly appreciated, since without the trend of memory usage it is very difficult to fix the issue.
Note: Our application runs on CentOS.
Thanks,
Arun
Try the following. It comes with JDK >= 7:
/usr/lib/jvm/jdk-YOUR-VERSION/bin/jcmd PID GC.heap_dump FILE-PATH-TO-SAVE
Example:
/usr/lib/jvm/jdk1.8.0_91/bin/jcmd 25092 GC.heap_dump /opt/hd/3-19.11-jcmd.hprof
This dumping process is much faster than dumping with jmap! The dump files are much smaller, but they are enough to give you an idea of where the leaks are.
At the time of writing this answer, there are bugs in Memory Analyzer and IBM HeapAnalyzer that prevent them from reading dump files produced by jmap (JDK 8, big files). You can use YourKit to read those files.
First of all, it is (AFAIK) essential to freeze the JVM while a heap dump / snapshot is being taken. If the JVM were able to continue running while the snapshot was created, it would be next to impossible to get a coherent snapshot.
So are there other ways to get a heap dump?
You can get a heap dump using VisualVM as described here.
You can get a heap dump using jconsole or Eclipse Memory Analyser as described here.
But all of these are bound to cause the JVM to (at least) pause.
If your application is actually hanging (permanently!) that sounds like a problem with your application itself. My suggestion would be to see if you can track down that problem before looking for the storage leak.
My other suggestion is that you look at a single heap dump, and use the stats to figure out what kind(s) of object are using all of the space ... and why they are reachable. There is a good chance that you don't need the "trend" information at all.
You can use GDB to get a heap dump without running jmap against the target VM; however, this will still hang the application for the amount of time required to write the heap dump to disk. Assuming a disk speed of 100 MB/s (a basic mirrored array or single disk), that is still about 2 minutes of downtime for a 12 GB heap.
http://blogs.atlassian.com/2013/03/so-you-want-your-jvms-heap/
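A rough sketch of the approach from that post, assuming a Linux box with gdb installed and a JDK 8 era jmap (paths and PID are illustrative):

gcore -o /tmp/jvm.core <pid>
jmap -dump:format=b,file=/tmp/heap.hprof /usr/bin/java /tmp/jvm.core.<pid>

Only the gcore step pauses the JVM; converting the core file into an .hprof with jmap happens offline against the core, not the live process.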
The only true way to avoid stopping the JVM is transactional memory and a kernel that takes advantage of it to provide a process snapshot facility. This is one of the dreams of the proponents of STM but it's not available yet. VMWare's hot-migration comes close but depends on your allocation rate not exceeding network bandwidth and it doesn't save snapshots. Petition them to add it for you, it'd be a neat feature.
A heap dump analyzed with the right tool will tell you exactly what is consuming the heap. It is the best tool for tracking down memory leaks. However, collecting a heap dump is slow, let alone analyzing it.
With knowledge of the workings of your application, sometimes a histogram is enough to give you a clue of where to look for the problem. For example, if MyClass$Inner is at the top of the histogram and MyClass$Inner is only used in MyClass, then you know exactly which file to look for a problem.
Here's a command for collecting a histogram (redirecting jcmd's output to a file):
jcmd <pid> GC.class_histogram > histogram.txt
To add to Stephen's answers, you can also trigger a heap dump via an API for the most common JVM implementations:
example for the Oracle JVM
API for the IBM JVM
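For HotSpot, a minimal sketch of the API route looks like the following (the HeapDumper class name and output path are illustrative; dumpHeap on com.sun.management.HotSpotDiagnosticMXBean is the actual entry point):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dumpHeap(String filePath, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live=true triggers a full GC first and dumps only reachable
        // objects, which keeps the file smaller.
        bean.dumpHeap(filePath, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("/tmp/manual.hprof", true);
    }
}

Note that this still pauses the JVM while the dump is written, just like the external tools do.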

What's a good Java debugger?

I'm trying to find memory leaks and performance issues with my java application. Is there a program out there that can help me debug my application and display performance results?
Thanks.
Have a look at jvisualvm in the JDK - a subset of the NetBeans profiler - which can attach to a running Java 6 process and allow you to profile it and do memory analysis.
https://visualvm.dev.java.net/gettingstarted.html
I used a lot of tools to find out why my program eats 100+ MB of RAM, and polished the code to remove all possible memory leaks. Later I found that once the JVM takes some memory from the OS, it doesn't always return it, even if that memory is no longer used, which often looks like a memory leak. This depends on -Xmx and -XX:MaxHeapFreeRatio. I set -Xmx to 40 MB, which is roughly how much memory my app should use, and memory usage stays within 10-15 MB of that figure instead of increasing uncontrollably.
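A sketch of that kind of configuration (the heap size and ratios are illustrative, not recommendations):

java -Xmx40m -XX:MaxHeapFreeRatio=30 -XX:MinHeapFreeRatio=10 -jar myapp.jar

-XX:MaxHeapFreeRatio caps how much committed-but-free heap the JVM keeps after a GC before shrinking the heap and handing memory back to the OS.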
Also, jconsole is a great tool. It comes with the JDK.
Eclipse has a good memory dump analyzer; but finding a memory leak can be very challenging and requires you to dive deeply into the way the objects are allocated by your application.
It took me 2 full days to figure out that one of my custom JTable cell editor classes was allocating a JDialog upon instantiation, without actually opening it, and the native part of the dialog kept the cell editor instance locked, thus the table, thus the screen and thus all entity objects that were associated with it.
You can try the Performance Inspector tool. Here is the URL:
http://perfinsp.sourceforge.net/
Java application performance depends directly on how the JVM is running your application, and this tool gives very good profiling information about the JVM. It's not a graphical tool, though; you need to go through the generated text file. But it's a one-time effort to get comfortable with the tool. I've used it many times for performance-related issues and it has helped me a lot.

How to free up memory?

We have been facing Out of Memory errors in our app server for some time. We see the used heap size increasing gradually until it finally reaches the maximum available heap size. This happens every 3 weeks, after which a server restart is needed.
Upon analysis of the heap dumps, we find the problem to be objects used in JSPs.
Can JSP objects really be the cause of app server memory issues? How do we free up JSP objects (objects instantiated using useBean or other tags)?
We have a clustered WebSphere app server with 2 nodes and an IHS.
EDIT: The findings above are based on the heap dump and native stderr log analysis given below, using the IBM Support Assistant.
Native stderr log analysis:
(chart: http://saregos.com/wp-content/uploads/2010/03/chart.jpg)
Heap dump analysis:
(screenshot: heap dump analysis)
Heap dump analysis showing the immediate dominators (2 levels up from the Hashtable entry in the screenshot above):
(screenshot: immediate dominators)
The last screenshot shows that the immediate dominators are in fact objects being used in JSPs.
EDIT2: More info available at http://saregos.com/?p=43
I'd first attach a profiling tool to tell you what these "objects" are that are taking up all the memory.
Eclipse has TPTP,
or there is JProfiler
or JProbe.
Any of these should show the object heap creeping up and allow you to inspect it to see what is on the heap.
Then search the code base to find who is creating these.
Maybe you have a cache or tree/map object holding elements whose classes only implement the equals() method, when they also need to implement hashCode().
This would then result in the map/cache/tree getting bigger and bigger until it falls over.
This is only a guess, though; a small sketch of this failure mode follows below.
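A minimal sketch of that failure mode (the CacheKey class is invented for illustration):

import java.util.HashSet;
import java.util.Set;

class CacheKey {
    private final String id;
    CacheKey(String id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        return o instanceof CacheKey && ((CacheKey) o).id.equals(id);
    }
    // hashCode() is NOT overridden, so two "equal" keys usually land in
    // different hash buckets and duplicates are never detected.
}

public class LeakDemo {
    public static void main(String[] args) {
        Set<CacheKey> cache = new HashSet<>();
        for (int i = 0; i < 1_000_000; i++) {
            cache.add(new CacheKey("same-key")); // the set keeps growing
        }
        System.out.println(cache.size()); // ~1,000,000 instead of 1
    }
}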
JProfiler would be my first call
JavaWorld has an example screenshot of what is in memory, and a screenshot of the object heap building up and being cleaned up (hence the saw-tooth pattern) (source: javaworld.com).
UPDATE
Ok, I'd look at...
http://www-01.ibm.com/support/docview.wss?uid=swg1PK38940
Heap usage increases over time which leads to an OutOfMemory
condition. Analysis of a heapdump shows that the following
objects are taking up an increasing amount of space:
40,543,128 [304] 47 class
com/ibm/wsspi/rasdiag/DiagnosticConfigHome
40,539,056 [56] 2 java/util/Hashtable 0xa8089170
40,539,000 [2,064] 511 array of java/util/Hashtable$Entry
6,300,888 [40] 3 java/util/Hashtable$HashtableCacheHashEntry
Triggering the garbage collection manually doesn't solve your problem - it won't free resources that are still referenced.
You should use a profiling tool (like JProfiler) to find your leaks. You probably have code that stores references in lists or maps that are never released at runtime - probably static references.
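A minimal sketch of the static-reference pattern (class and field names are made up for illustration):

import java.util.ArrayList;
import java.util.List;

public class EventLog {
    // A static collection lives as long as its class loader does, so
    // everything added here stays strongly reachable unless removed.
    private static final List<String> HISTORY = new ArrayList<>();

    public static void record(String event) {
        HISTORY.add(event); // grows without bound; never cleared
    }
}

In a heap dump this shows up as one huge ArrayList dominated by the EventLog class.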
If you run on the Sun Java 6 JVM, strongly consider using the jvisualvm program in the JDK to get an initial overview of what actually goes on inside the program. The snapshot comparison is really good at helping you narrow down which objects sneak in.
If the Sun 6 JVM is not an option, investigate which profiling tools you have available; trial versions can get you really far.
It can be something as simple as gigantic character arrays underlying a substring that you are collecting in a list, e.g. for housekeeping - see the sketch below.
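A sketch of that substring trap, assuming an older JVM (before Java 7u6, String.substring() shared the parent string's backing char[] instead of copying it):

import java.util.ArrayList;
import java.util.List;

public class SubstringRetention {
    private static final List<String> keys = new ArrayList<>();

    static void collectKey(String hugeLine) {
        // On pre-7u6 JVMs this kept hugeLine's entire char[] alive:
        //   keys.add(hugeLine.substring(0, 8));
        // Copying defensively retains only the 8 characters:
        keys.add(new String(hugeLine.substring(0, 8)));
    }
}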
I suggest reading Effective Java, chapter 2. Following it, together with a profiler, will help you identify the places where your application produces memory leaks.
Freeing up memory isn't the way to solve extensive memory consumption. Extensive memory consumption may be the result of two things:
improperly written code - the solution is to write it properly, so that it does not consume more than is needed; Effective Java will help here.
the application simply needs that much memory - then you should increase the VM's heap using -Xms, -Xmx, -XX:MaxHeapSize, ...
There is no specific mechanism for freeing up objects allocated in JSPs, at least as far as I know. Rather than investigating such options, I'd focus on finding the actual problem in your application code and fixing it.
Some hints that might help:
Check the scope of your beans. Aren't you, e.g., storing something user- or request-specific in "application" scope by mistake? (See the sketch after this list.)
Check the web session timeout settings in your web application and in your app server.
You mentioned the heap consumption grows gradually. If that is indeed so, try to see by how much the heap grows with various user scenarios: grab a heap dump, run a test, let the session data time out, grab another dump, and compare the two. That might give you some idea where the objects on the heap come from.
Check your beans for any obvious memory leaks, for sure :)
EDIT: Checking for unreleased static resources, as Daniel mentions, is another worthwhile thing :)
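A minimal sketch of the scope mistake from the first hint (the servlet and attribute names are invented for illustration):

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        Object userData = req.getSession().getAttribute("user");
        // Leaks: application scope lives until the web app is undeployed,
        // so one entry per session id accumulates forever.
        getServletContext().setAttribute("data-" + req.getSession().getId(), userData);
        // Correct for per-request data: it dies with the request.
        req.setAttribute("data", userData);
    }
}

The same distinction applies to jsp:useBean's scope attribute: scope="application" keeps the bean alive for the whole lifetime of the web application.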
As I understand it, those top-level memory eaters are cache storage and the objects stored in it. You should probably make sure that the cache frees objects when it takes up too much memory. You may want to use weak references if you only need to cache live objects.
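One way to get such a cache, sketched here with a WeakHashMap (the key and value types are placeholders):

import java.util.Map;
import java.util.WeakHashMap;

public class SessionCache {
    // An entry disappears automatically once its key is no longer strongly
    // referenced anywhere else, so the cache cannot pin objects by itself.
    private final Map<Object, Object> cache = new WeakHashMap<>();

    public void put(Object key, Object value) { cache.put(key, value); }
    public Object get(Object key) { return cache.get(key); }
}

Note that WeakHashMap holds keys weakly but values strongly, so a value must not strongly reference its own key or the entry will never be collected.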

Analyze Tomcat Heap in detail on a production System

Having analyzed a light-load web application running in Tomcat using the JMX console, it turns out the "PS Old Gen" is growing slowly but constantly. It starts at 200 MB and grows by around 80 MB/hour.
CPU is not an issue - it runs at 0-1% on average - but somewhere it leaks memory, so the application becomes unstable some days after deployment.
How do I find out what objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
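A rough sketch of that workflow (the PID and port are illustrative):

jmap -dump:format=b,file=tomcat-heap.hprof <pid>
jhat -port 7000 tomcat-heap.hprof

Then browse to http://localhost:7000 to walk the dump. Keep in mind that jmap pauses the JVM while the dump is written, so on a production system schedule it for a quiet moment.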
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes the output of Java's heap profiler, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching a profiled Tomcat directly from the IDE (for localhost), or remote profiling with a provided JAR and some run configuration on the server.
I used it in a project with a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the additional "consumed space" disappears when a GC runs in the old generation.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC, and afterwards the used memory should go down a lot. Only if it does not go down after a GC should you be interested in what is still holding onto those objects.
JRockit Mission Control can analyze memory leaks while connected to the JVM, with no need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and you can then explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it here for free.
