I need help finding my memory leak using MAT - java

I'm using MAT to compare two heap dumps. I've been taking a heap dump each day, and the heap is growing by about 200 MB each day. I think the leak is associated with java.util.zip, both because of what the table shows and because we recently added a new process that zips and unzips a lot of files. (see image)
At this point I opened the dominator tree and filtered for .Inflater. That produced a large list of java.util.zip.Inflater instances. Now I want to see what's holding these open, so I picked one and ran Path to GC Roots, excluding weak and soft references (see image).
It looks like this has to do with the jar inflation and nothing to do with my process. At this point I'm stuck and need some suggestions.
EDIT 1
Sean asked about the ThreadLocals. If you look at the dominator tree with no filter, you can see that java.lang.ApplicationShutdownHooks accounts for 58% of the heap. If I expand some of those entries, they seem to be in the ThreadLocalMap. How would I find out what put them there?
EDIT 2
Sean's comment put me on the correct track. I'm using GlassFish v2.0 and it has a memory leak: it continually creates new LogManagers and adds them to the ApplicationShutdownHooks collection.
I worked around the issue by cracking open the ApplicationShutdownHooks and manually removing the objects from the collection.
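Roughly, the workaround looks like this (a sketch based on the private hooks field in OpenJDK's java.lang.ApplicationShutdownHooks; the name-based filter is only illustrative, and on Java 9+ this kind of reflective access would additionally need --add-opens java.base/java.lang=ALL-UNNAMED):

import java.lang.reflect.Field;
import java.util.Iterator;
import java.util.Map;

public class ShutdownHookCleanup {

    // Reflectively drops shutdown hook threads whose name matches a marker.
    // The "hooks" field name matches OpenJDK's implementation; this is a
    // fragile, JDK-specific workaround, not a general API.
    @SuppressWarnings("unchecked")
    public static void removeHooks(String nameMarker) throws Exception {
        Class<?> clazz = Class.forName("java.lang.ApplicationShutdownHooks");
        Field hooksField = clazz.getDeclaredField("hooks");
        hooksField.setAccessible(true);

        // In OpenJDK this is an IdentityHashMap<Thread, Thread>.
        Map<Thread, Thread> hooks = (Map<Thread, Thread>) hooksField.get(null);
        synchronized (clazz) {
            Iterator<Thread> it = hooks.keySet().iterator();
            while (it.hasNext()) {
                if (it.next().getName().contains(nameMarker)) {
                    it.remove();
                }
            }
        }
    }
}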

What can I do if I require more memory than there is on the heap in Java?

I have a graph algorithm that generates intermediate results associated with different nodes. Currently, I have solved this by using a ConcurrentHashMap<Node, List<Result>> (I am running multithreaded). So at first I add new results with map.get(node).add(result), and then I consume all results for a node at once with map.get(node).
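Roughly, the current pattern looks like this (a sketch with generic key/value types standing in for my Node and Result classes):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// One list of results per node, kept entirely in memory.
class ResultStore<N, R> {
    private final ConcurrentHashMap<N, List<R>> map = new ConcurrentHashMap<>();

    void add(N node, R result) {
        // synchronizedList so concurrent adds from several worker threads stay safe
        map.computeIfAbsent(node, n -> Collections.synchronizedList(new ArrayList<>()))
           .add(result);
    }

    List<R> consumeAll(N node) {
        // hand back (and drop) all results collected for this node
        return map.remove(node);
    }
}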
However, I need to run on a pretty large graph where the number of intermediate results won't fit into memory (the good old OutOfMemoryError). So I need some way to write the results out to disk, because that's where there is still space.
Having looked at a lot of different "off-heap" maps and caches, as well as MapDB, I figured none of them is a fit for me. None of them seems to support multimaps (which I guess is what my map is) or mutable values (which the list would be). Additionally, MapDB has been very slow for me when trying to create a new collection for every node (even with a custom serializer based on FST).
I can hardly imagine, though, that I am the first and only one to have such a problem. All I need is a mapping from a key to a list, which I only need to extend or read as a whole. What would an elegant and simple solution look like? Or are there any existing libraries that I can use for this?
Thanks in advance for saving my week :).
EDIT
I have seen many good answers, however, I have two important constraints: I don't want to depend on an external database (e.g. Redis) and I can't influence the heap size.
1) You can increase the size of the heap. The heap can be configured to be larger than the physical memory of your server, as long as this condition holds:
the size of the heap + the size of other applications < the size of physical memory + the size of swap space
For instance, if the physical memory is 4 GB and the swap space is 4 GB, the heap can be configured to 6 GB. But the program will suffer from page swapping.
2) You can use a database like Redis. Redis is a key-value database and has a List structure. I think this is the simplest way to solve your problem.
3) You can compress the Result instances. First you serialize the instance, then compress the serialized bytes. Define a class like:
class CompressResult {
    byte[] result;
    // ...
}
Then replace Result with CompressResult. You will need to decompress and deserialize the result when you want to use it. This works well if the Result class has many fields and is fairly complicated.
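A minimal sketch of that idea, assuming Result implements java.io.Serializable (class and method names here are just illustrative):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Holds a serialized, GZIP-compressed object instead of the live instance.
class CompressResult {
    private final byte[] result;

    CompressResult(Serializable value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos))) {
            oos.writeObject(value);
        }
        this.result = bos.toByteArray();
    }

    Object decompress() throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(
                new GZIPInputStream(new ByteArrayInputStream(result)))) {
            return ois.readObject();
        }
    }
}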
My recollection is that the JVM starts with a fairly small default maximum heap size. With -Xmx10000m you can tell the JVM to run with a 10,000 MB heap (or whatever number you select). If your underlying OS resources support a larger heap, that might work.

Using Java Swing SystemLookAndFeel on Windows machines leads to MemoryLeaks in CachedPainter with JTextPanes

I have been struggling with a nasty problem for many days now. In my Java Swing application, I use two JTextPanes extended with syntax highlighting for XML text, as described in this example, with some small changes:
XML Syntax Highlighting in JTextPanes
These two JTextPanes are placed in two JScrollPanes inside a JSplitPane that is placed directly in the content pane of a JFrame. The first text pane is editable (like a simple XML request editor), the second text pane displays XML responses from my server backend.
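A stripped-down sketch of that layout (plain JTextPanes stand in here for the XmlTextPane subclass from the linked example):

import javax.swing.*;

public class XmlEditorFrame {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JTextPane requestPane = new JTextPane();   // editable XML request editor
            JTextPane responsePane = new JTextPane();  // displays server XML responses
            responsePane.setEditable(false);

            JSplitPane split = new JSplitPane(JSplitPane.VERTICAL_SPLIT,
                    new JScrollPane(requestPane), new JScrollPane(responsePane));

            JFrame frame = new JFrame("XML editor");
            frame.setContentPane(split);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setSize(800, 600);
            frame.setVisible(true);
        });
    }
}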
Everything works as expected as long as I don't try to put "many lines" into those XmlTextPanes. Doing so results in a pretty fast increase in memory usage (going from under 100 MB to 1,000 MB after just a few lines inserted into one or both of the text panes).
The strange thing is that even resetting the text panes and/or removing them (or disposing the frame that holds the components) does not change the memory usage at all! Forcing a garbage collection doesn't change anything either. Something must still be holding references to the allocated objects...
In order to see what exactly is consuming all that memory, I analyzed the application with Eclipse MAT, resulting in this:
This clearly shows that the CachedPainter is holding a lot of stuff...
Asking Google, it seems that I am not the only one having memory issues with the CachedPainter, but I was unable to find a reason - and even more important - a solution for this.
After messing around with this for many many hours now I found out that this problem does not occur when I set my application to use
UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
instead of
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
With the cross-platform look and feel, I was able to put thousands of lines of XML content into my text panes without going over 200 MB of memory used.
With the same code but the platform look and feel (Windows 7), I reach over 2,000 MB of memory usage after ~200 lines.
I can reproduce this behavior compiling against JDK7 and JDK8 :(.
What is causing this and how can I fix it?
Edit:
Some additional information:
On further research it seems like some LAFs have problems with D3D buffers. The int[] in the MAT screenshot could be some kind of rendering buffer too, pointing in the same direction...
My application already sets the following in order to prevent some rendering performance issues (frame resizing, for example):
System.setProperty("sun.java2d.noddraw", Boolean.TRUE.toString());
I could also try to add this flag to the start parameters:
-Dsun.java2d.d3d=false
Or do you think that this won't help?
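For reference, this is roughly how I could set both flags programmatically instead of via start parameters (they have to be set before any AWT/Swing classes are loaded, otherwise they are ignored):

public class Launcher {
    public static void main(String[] args) {
        // Must run before any AWT/Swing class is loaded, otherwise the flags have no effect.
        System.setProperty("sun.java2d.noddraw", "true");
        System.setProperty("sun.java2d.d3d", "false");  // same effect as -Dsun.java2d.d3d=false
        // ... then build and show the Swing UI as usual
    }
}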
Don't worry. It's not a memory leak.
The ImageCache is based on SoftReferences. From the sources:
public class ImageCache {
    private int maxCount;
    private final LinkedList<SoftReference<ImageCache.Entry>> entries;
    ....
From the Javadoc:
All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError.
So when there is not enough memory, the cache is cleared to free memory.
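A tiny standalone demo of that guarantee (not the Swing code itself); run it with a small heap such as -Xmx64m and the softly referenced array is dropped once memory gets tight:

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftReferenceDemo {
    public static void main(String[] args) {
        // Softly referenced 10 MB array that the GC may reclaim under pressure.
        SoftReference<byte[]> cache = new SoftReference<>(new byte[10 * 1024 * 1024]);
        List<byte[]> pressure = new ArrayList<>();
        try {
            while (cache.get() != null) {
                pressure.add(new byte[1024 * 1024]); // create memory pressure
            }
            System.out.println("Soft reference was cleared before an OutOfMemoryError");
        } catch (OutOfMemoryError e) {
            pressure.clear(); // release memory so we can still print
            System.out.println("OOME thrown; reference cleared = " + (cache.get() == null));
        }
    }
}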

Solving heapdump issue for webapplication (JSP + Spring MVC + JPA-Hibernate)

I have been supporting a web application which uses JSP + Spring MVC + JPA-Hibernate. Recently we have had a heap dump issue on the WAS server. Now we need to change some code in the application to prevent the heap dump, otherwise the deployment team won't move it to the live environment.
I have loaded the heap dump files (.phd) in IBM HeapAnalyzer, which gives a list of leak suspects.
I am keeping the same data as in the image captured from HeapAnalyzer below.
There are two leak suspects given by the heap analyzer:
1) 97.499.936 bytes (52,48 %) of Java heap is used by 6 instances of java/util/WeakHashMap$Entry
Contains an instance of the leak suspect: com/ibm/ws/wswebcontainer/webapp/WebApp holding 22.950.680 bytes at 0x822ac78
2) Responsible for 22.950.680 bytes (12,35 %) of Java heap
Contained under array of java/util/WeakHashMap$Entry holding 97.499.936 bytes at 0x145bb10
I don't know how to proceed further on this issue. I need to modify the code on our end to avoid it, and for that I need to find which classes of my application are creating the above instances. Please suggest how to proceed.
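From what I've read, one common way WeakHashMap entries can retain a lot of heap is when the values hold strong references back to their keys, so the entries are never evicted; a minimal illustration of that pattern (not necessarily what is happening in my application):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapRetention {
    // The value holds a strong reference back to the key, so the key can
    // never become weakly reachable and the entry is never removed.
    static class SessionData {
        final Object owner;                      // strong reference to the map key
        final byte[] payload = new byte[1024 * 1024];
        SessionData(Object owner) { this.owner = owner; }
    }

    public static void main(String[] args) {
        Map<Object, SessionData> cache = new WeakHashMap<>();
        for (int i = 0; i < 100; i++) {
            Object key = new Object();
            cache.put(key, new SessionData(key)); // entry can never be reclaimed
        }
        System.gc();
        System.out.println("Entries still retained: " + cache.size());
    }
}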

neo4j "empty" database takes up a lot of disk space

I've inserted ~2M nodes (via the Java API), and deleted them after a day or two of usage (through Java too). Now my db has 16k nodes, yet weighs 6 GB.
Why wasn't this space freed?
What may be the cause?
The data/graph.db directory contains multiple items:
Store itself, split into multiple files
Indexes
Transaction log files
Log files (messages.log)
All your operations are stored in the transaction logs, which then expire according to the keep_logical_logs setting. I'm not sure what the default value is, but I presume you might have quite some space in use there.
I'd suggest checking what is actually taking up the space.
Also, we have sometimes seen that the space in use (reported with du, for example) differs depending on whether Neo4j is running or stopped.
In addition to Alberto's answer: the store is not compacted. It leaves empty records for reuse, and they will stay there forever. As far as I know, there is no available tool to compact the store (I've considered writing one myself, but I usually convince myself that there aren't that many use cases affected by this).
If you do have a lot of churn where you are inserting and deleting records often, it's a good idea to restart your database often so it will reuse the records that it has marked as deleted.
As Alberto mentions, one of the first things I set when I install a new Neo4j (the other being the heap size) is keep_logical_logs, to something like 1-7 days. If you let the logs grow forever (the default), they will get quite large.

How to find culprit class/object by looking at memory profiler result in visualVM

I am profiling my Java application using VisualVM
and I have gone through
profiling_with_visualvm_part_1
profiling_with_visualvm_part_2
When I look at the memory profiling result, I see millions of Object[], char[], String and other such fundamental objects being created, and they are taking up all the memory. I want to know which of my classes (or which parts of my code) are actually responsible for creating those Object[] and String instances; so far I couldn't find it. Once I know the culprit class, I can dive into the code and fix it.
I put in a filter for com.mypackage.*, but I see that all of those classes are many times smaller (sometimes 0 bytes) compared to the total size of the Object[], char[] and String objects.
I believe there should be a way to find the culprit code. Otherwise, the profiler wouldn't be of much use.
Let me know if my question is not clear, I will try to clarify further.
If you want to see which code allocates those instances, go to 'Memory settings' and enable 'Record allocations stack traces'. This option is explained in 'Profiling with VisualVM part 2'. Once you have turned it on, profile your application and take a snapshot of the profiling results. In the snapshot, right-click on the particular class and invoke 'Show allocation stack traces'.
