Solving heap dump issue for web application (JSP + Spring MVC + JPA-Hibernate) - java

I have been supporting a web application that uses JSP + Spring MVC + JPA-Hibernate. Recently we have had a heap dump issue on the WAS server. Now we need to change some code in the application to prevent the heap dump; otherwise the deployment team won't move it to the live environment.
I have loaded the heap dump files (.phd) in IBM HeapAnalyzer, where it gives a list of leak suspects.
I am keeping the same data as in the image captured in HeapAnalyzer below.
There are two leak suspects given by HeapAnalyzer:
1) 97,499,936 bytes (52.48%) of the Java heap is used by 6 instances of java/util/WeakHashMap$Entry. Contains an instance of the leak suspect: com/ibm/ws/wswebcontainer/webapp/WebApp holding 22,950,680 bytes at 0x822ac78.
2) Responsible for 22,950,680 bytes (12.35%) of the Java heap. Contained under an array of java/util/WeakHashMap$Entry holding 97,499,936 bytes at 0x145bb10.
I don't know how to proceed further on this issue, and I need to modify the code from our end to avoid it. For that I need to find which classes of my application are creating the above instances. Please suggest how to proceed.

Related

How to resolve java.lang.OutOfMemoryError: Java heap space without increasing the heap memory size

I know there are a lot of questions on java.lang.OutOfMemoryError: Java heap space,
like question 1,
but none of those links properly answer my question.
We are generating reports in spreadsheet format, where huge amounts of data come from the database side. We increased the heap memory size from 2 GB to 4 GB, to no effect.
Maybe it was because of some extra whitespace in database columns, so I trimmed all values in the getters and setters using the trim() method; this also did not help.
For example:
String s = "Hai ";
s = s.trim(); // trim() returns a new String, so the result must be assigned
If anyone knows how to resolve this issue from the Java coding side, without increasing the size of the heap space, please let me know, because the client has said they will not increase the heap space any more.
The exception occurs when calling this method:
private CrossTabResult mergeAllListsSameNdc(CrossTabResult crt, CrossTabResult res) {
    crt.setFormularyTier(crt.getFormularyTier() == null ? ""
        : ((crt.getFormularyTier().contains(crt.getListId()) ? crt.getFormularyTier() : crt.getListId() + crt.getFormularyTier()) + "~" + res.getListId() + res.getFormularyTier()));
    crt.setFormularyTierDesc(crt.getFormularyTierDesc() == null ? ""
        : ((crt.getFormularyTierDesc().contains(crt.getListId()) ? crt.getFormularyTierDesc() : crt.getListId() + crt.getFormularyTierDesc()) + "~" + res.getListId() + res.getFormularyTierDesc()));
}
I can't share more code because it is confidential. If you see an alternative approach in the code above, please let me know. We are merging two Strings based on the same id.
We are generating reports in spreadsheet format, where huge data is
coming from database side.
In this kind of use case, you have at least two things to look at that may reduce the memory consumed, but first you have to identify the culprits.
The main causes identified by monitoring tools in this kind of use case are generally:
1) If the data loaded from the DB is identified as a big consumer of memory, you should probably not load all the data in one shot.
Keeping all of these objects in memory while at the same time creating the spreadsheet from them may consume a lot of memory,
especially if the application is used in parallel for other use cases.
You should instead divide the retrieval into several retrievals:
retrieve some objects, populate the spreadsheet, free these objects as they are no longer required, and so on.
2) During the spreadsheet creation, if the spreadsheet object created with the library is identified as a big consumer of memory, you should favor a streaming or event API for writing the spreadsheet over an API that loads the whole spreadsheet into memory.
For example, POI provides a DOM-like API (XSSF) and a streaming API (SXSSF).
You don't specify the library used to create the spreadsheet, but it doesn't matter much, as the same logic applies to any of them (see the sketch below).
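To make both points concrete, here is a minimal sketch combining paged retrieval with POI's streaming SXSSF API. The ReportDao interface, the page size, and the column layout are assumptions for illustration, not something from the question:

import java.io.FileOutputStream;
import java.util.List;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;

public class ReportWriter {

    // Hypothetical DAO: returns one page of rows at a time instead of the whole result set.
    interface ReportDao {
        List<String[]> fetchPage(int pageIndex, int pageSize);
    }

    public void write(ReportDao dao, String file) throws Exception {
        // Keep only 100 rows in memory; older rows are flushed to a temporary file on disk.
        try (SXSSFWorkbook wb = new SXSSFWorkbook(100);
             FileOutputStream out = new FileOutputStream(file)) {
            Sheet sheet = wb.createSheet("report");
            int rowNum = 0, page = 0, pageSize = 1000;
            List<String[]> rows;
            while (!(rows = dao.fetchPage(page++, pageSize)).isEmpty()) {
                for (String[] values : rows) {
                    Row row = sheet.createRow(rowNum++);
                    for (int c = 0; c < values.length; c++) {
                        row.createCell(c).setCellValue(values[c]);
                    }
                }
                // The page just written becomes unreachable here and can be garbage
                // collected before the next page is fetched.
            }
            wb.write(out);
            wb.dispose(); // delete the temporary files SXSSF wrote to disk
        }
    }
}

With this approach the heap usage stays roughly constant regardless of how many rows the report has, instead of growing with the result set.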

How to handle a large XML file in Java (around 5 GB)

My application needs to use data in an XML file that is up to 5 GB in size. I load the data into Image classes from the XML. The Image class has many attributes, like path, name, MD5 hash, and other information like that.
The 5 GB file has around 50 million image records in it. When I parse the XML, the data is loaded inside the app and the same number of Image objects is created, and I perform different operations and calculations on them.
My problem is that when I parse such a huge file, my memory gets eaten up. I guess all the data is being loaded into RAM. Due to the complexity of the code, I'm unable to provide all of it. Is there an efficient way to handle such a huge number of objects? I have done research all night without success. Can someone point me in the right direction?
Thanks
You need some sort of pipeline to pass the data on to its actual destination without ever storing it all in memory at once.
I don't know how your code does the parsing, but you don't need to store all the data in memory.
Here is a very good answer with an implementation for reading large XML files.
If you're using SAX but are still eating up memory, then you are doing something wrong, and there is no way we can tell you what without seeing your code.
I suggest using JVisualVM to get a heap dump and see what objects are using up the memory, and then investigating the part of your application that creates those objects.
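For what it's worth, here is a minimal SAX sketch of the streaming approach described above: each image record is processed and then dropped, so the full set of 50 million Image objects is never held in memory. The element names (image, path, md5) and the process() callback are assumptions about the file layout, not taken from the question:

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ImageXmlStreamer {

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new File("images.xml"), new DefaultHandler() {
            private final StringBuilder text = new StringBuilder();
            private String path, md5;

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs) {
                text.setLength(0); // reset the buffer for the next element's text
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                text.append(ch, start, length);
            }

            @Override
            public void endElement(String uri, String localName, String qName) {
                if (qName.equals("path")) {
                    path = text.toString();
                } else if (qName.equals("md5")) {
                    md5 = text.toString();
                } else if (qName.equals("image")) {
                    process(path, md5); // do the calculation, then forget this record
                    path = md5 = null;
                }
            }
        });
    }

    static void process(String path, String md5) {
        // aggregate or write results here instead of keeping Image objects in a list
    }
}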

Fastest way to create a trie (JSON) from a 4GB file, using only 1GB of ram?

Perhaps I'm doing this the wrong way:
I have a 4 GB file (33 million lines of text), where each line has a string in it.
I'm trying to create a trie -> The algorithm works.
The problem is that Node.js has a process memory limit of 1.4GB, so the moment I process 5.5 million lines, it crashes.
To get around this, I tried the following:
Instead of 1 Trie, I create many Tries, each having a range of the alphabet.
For example:
aTrie ---> all words starting with a
bTrie ---> all words starting with b...
etc...
But the problem is, I still can't keep all the objects in memory while reading the file, so each time I read a line, I load / unload a trie from disk. When there is a change I delete the old file, and write the updated trie from memory to disk.
This is SUPER SLOW! Even on my MacBook Pro with an SSD.
I've considered writing this in Java, but then the problem of converting Java objects to JSON comes up (and the same problem applies to C++, etc.).
Any suggestions ?
You may extend the memory limit that the Node process uses by specifying the option below
(the size is in MB):
node --max_old_space_size=4096
for more options please see:
https://github.com/thlorenz/v8-flags/blob/master/flags-0.11.md
Instead of using 26 Tries you could use a hash function to create an arbitrary number of sub-Tries. This way, the amount of data you have to read from disk is limited to the size of your sub-Trie that you determine. In addition, you could cache the recently used sub-Tries in memory and then persist the changes to disk asynchronously in the background if IO is still a problem.
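Since Java was already mentioned as an option, here is a minimal sketch of that bucketing idea in Java; the bucket count, file names, and the second pass are illustrative assumptions, not a drop-in solution:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TrieBucketSplitter {
    static final int BUCKETS = 64; // tune so each sub-trie fits comfortably in memory

    // Map a word to one of BUCKETS sub-tries via its hash.
    static int bucketOf(String word) {
        return Math.floorMod(word.hashCode(), BUCKETS);
    }

    public static void main(String[] args) throws IOException {
        // First pass: split the big input file into BUCKETS smaller files.
        BufferedWriter[] out = new BufferedWriter[BUCKETS];
        for (int i = 0; i < BUCKETS; i++) {
            out[i] = Files.newBufferedWriter(Paths.get("bucket-" + i + ".txt"));
        }
        try (BufferedReader in = Files.newBufferedReader(Paths.get("words.txt"))) {
            String word;
            while ((word = in.readLine()) != null) {
                BufferedWriter w = out[bucketOf(word)];
                w.write(word);
                w.newLine();
            }
        } finally {
            for (BufferedWriter w : out) w.close();
        }
        // Second pass (not shown): for each bucket file, build its trie, serialize it
        // to JSON, and free it before moving on to the next bucket.
    }
}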

I need help finding my memory leak using MAT

I'm using MAT to compare two heap dumps. I've been taking a heap dump each day, and the heap is growing by about 200 MB each day. I think the leak is associated with java.util.zip, because of what the table shows and also because we recently added a new process that zips and unzips a lot of files (see image).
At this point I opened the dominator tree and filtered for .Inflater. That produced a large list of java.util.zip.Inflater instances. Now I want to see what's holding these open, so I picked one and ran Path to GC Roots, excluding weak and soft references (see image).
It looks like this has to do with the jar inflation and nothing to do with my process. At this point I'm stuck and need some suggestions.
EDIT 1
Sean asked about the ThreadLocals. If you look at the dominator tree with no filter, you see that java.lang.ApplicationShutdownHooks accounts for 58% of the heap. If I expand some of those entries, they seem to be in the ThreadLocalMap. How would I find what put them there?
EDIT 2
Sean's comment put me on the correct track. I'm using Glassfish v 2.0 and it has a memory leak. It continually creates new LogManagers and adds them to the ApplicationShutdownHooks collection.
I worked around the issue by cracking open the ApplicationShutdownHooks and manually removing the objects from the collection.
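Roughly, the workaround looked like the sketch below: use reflection to get at the private hooks map inside java.lang.ApplicationShutdownHooks and remove the entries the leaking LogManagers registered. The field name and the matching rule are assumptions based on the old JDKs Glassfish v2 ran on; newer JDKs restrict this kind of reflective access to java.lang internals:

import java.lang.reflect.Field;
import java.util.IdentityHashMap;
import java.util.Iterator;

public class ShutdownHookCleaner {
    @SuppressWarnings("unchecked")
    public static void removeLogManagerHooks() throws Exception {
        Class<?> hooksClass = Class.forName("java.lang.ApplicationShutdownHooks");
        Field field = hooksClass.getDeclaredField("hooks");
        field.setAccessible(true);
        IdentityHashMap<Thread, Thread> hooks = (IdentityHashMap<Thread, Thread>) field.get(null);
        synchronized (hooksClass) {
            // Drop the shutdown-hook threads registered by the leaking LogManager instances.
            Iterator<Thread> it = hooks.keySet().iterator();
            while (it.hasNext()) {
                if (it.next().getClass().getName().contains("LogManager")) {
                    it.remove();
                }
            }
        }
    }
}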

Method for finding memory leak in large Java heap dumps

I have to find a memory leak in a Java application. I have some experience with this, but I would like advice on a methodology/strategy. Any references and advice are welcome.
About our situation:
Heap dumps are larger than 1 GB
We have heap dumps from 5 occasions.
We don't have any test case to provoke this. It only happens in the (massive) system test environment after at least a week's usage.
The system is built on an internally developed legacy framework with so many design flaws that it is impossible to count them all.
Nobody understands the framework in depth. It has been transferred to one guy in India who barely keeps up with answering e-mails.
We have done snapshot heap dumps over time and concluded that there is no single component that increases over time; everything grows slowly.
The above points us in the direction that it is the framework's homegrown ORM system that increases its memory usage without limit. (This system maps objects to files?! So not really an ORM.)
Question: What is the methodology that helped you succeed in hunting down leaks in an enterprise-scale application?
It's almost impossible without some understanding of the underlying code. If you understand the underlying code, then you can better sort the wheat from the chaff among the zillion bits of information you get in your heap dumps.
Also, you can't know whether something is a leak or not without knowing why the class is there in the first place.
I just spent the past couple of weeks doing exactly this, and I used an iterative process.
First, I found the heap profilers basically useless. They can't analyze the enormous heaps efficiently.
Rather, I relied almost solely on jmap histograms.
I imagine you're familiar with these, but for those not:
jmap -histo:live <pid> > histogram.out
creates a histogram of the live heap. In a nutshell, it tells you the class names, and how many instances of each class are in the heap.
I was dumping out these histograms regularly, every 5 minutes, 24 hours a day. That may well be too granular for you, but the gist is the same.
I ran several different analyses on this data.
I wrote a script to take two histograms, and dump out the difference between them. So, if java.lang.String was 10 in the first dump, and 15 in the second, my script would spit out "5 java.lang.String", telling me it went up by 5. If it had gone down, the number would be negative.
I would then take several of these differences, strip out all classes that went down from run to run, and take a union of the result. At the end, I'd have a list of classes that continually grew over a specific time span. Obviously, these are prime candidates for leaking classes.
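As a rough illustration of that diff step (the file arguments and output format are my own choices here, not the author's actual script), a small Java version could look like this:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Compares two "jmap -histo" outputs and prints the per-class change in instance counts.
public class HistoDiff {

    static Map<String, Long> load(Path file) throws IOException {
        Map<String, Long> counts = new HashMap<>();
        for (String line : Files.readAllLines(file)) {
            // Data lines look like "   1:   4632416   392305928  [C"; header, separator
            // and total lines don't match this shape and are skipped.
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 4 && parts[0].endsWith(":")) {
                counts.put(parts[3], Long.parseLong(parts[1]));
            }
        }
        return counts;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> before = load(Paths.get(args[0]));
        Map<String, Long> after = load(Paths.get(args[1]));
        after.forEach((cls, instances) -> {
            long delta = instances - before.getOrDefault(cls, 0L);
            if (delta != 0) {
                System.out.println(delta + " " + cls); // positive numbers grew, negative shrank
            }
        });
    }
}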
However, some classes have some instances preserved while others are GC'd. These classes can go up and down overall, yet still leak. So they can fall out of the "always rising" category of classes.
To find these, I converted the data in to a time series and loaded it in a database, Postgres specifically. Postgres is handy because it offers statistical aggregate functions, so you can do simple linear regression analysis on the data, and find classes that trend up, even if they aren't always on top of the charts. I used the regr_slope function, looking for classes with a positive slope.
I found this process very successful, and really efficient. The histogram files aren't insanely large, and it was easy to download them from the hosts. They weren't super expensive to run on the production system (they do force a full GC, and may block the VM for a bit). I was running this on a system with a 2 GB Java heap.
Now, all this can do is identify potentially leaking classes.
This is where understanding how the classes are used, and whether they should or should not be there, comes into play.
For example, you may find that you have a lot of Map.Entry classes, or some other system class.
Unless you're simply caching String, the fact is these system classes, while perhaps the "offenders", are not the "problem". If you're caching some application class, THAT class is a better indicator of where your problem lies. If you don't cache com.app.yourbean, then you won't have the associated Map.Entry tied to it.
Once you have some classes, you can start crawling the code base looking for instances and references. Since you have your own ORM layer (for good or ill), you can at least readily look at its source code. If your ORM is caching things, it's likely caching ORM classes wrapping your application classes.
Finally, another thing you can do, is once you know the classes, you can start up a local instance of the server, with a much smaller heap and smaller dataset, and using one of the profilers against that.
In this case, you can write a unit test that affects only one (or a small number) of the things you think may be leaking. For example, you could start up the server, run a histogram, perform a single action, and run the histogram again. Your leaking class should have increased by 1 (or whatever your unit of work is).
A profiler may be able to help you track the owners of that "now leaked" class.
But, in the end, you're going to have to have some understanding of your code base to better understand what's a leak, and what's not, and why an object exists in the heap at all, much less why it may be being retained as a leak in your heap.
Take a look at Eclipse Memory Analyzer. It's a great tool (and self-contained; it does not require Eclipse itself to be installed) which 1) can open up very large heaps very fast and 2) has some pretty good automatic detection tools. The latter isn't perfect, but MAT provides a lot of really nice ways to navigate through and query the objects in the dump to find any possible leaks.
I've used it in the past to help hunt down suspicious leaks.
This answer expands upon Will Hartung's. I applied the same process to diagnose one of my memory leaks and thought that sharing the details would save other people time.
The idea is to have postgres 'plot' time vs. memory usage of each class, draw a line that summarizes the growth and identify the objects that are growing the fastest:
^
|
s | Legend:
i | * - data point
z | -- - trend
e |
( |
b | *
y | --
t | --
e | * -- *
s | --
) | *-- *
| -- *
| -- *
--------------------------------------->
time
Convert your heap dumps (you need several) into a format that is convenient for Postgres to consume, from the heap dump histogram format:
num #instances #bytes class name
----------------------------------------------
1: 4632416 392305928 [C
2: 6509258 208296256 java.util.HashMap$Node
3: 4615599 110774376 java.lang.String
5: 16856 68812488 [B
6: 278914 67329632 [Ljava.util.HashMap$Node;
7: 1297968 62302464
...
to a CSV file with the datetime of each heap dump:
2016.09.20 17:33:40,[C,4632416,392305928
2016.09.20 17:33:40,java.util.HashMap$Node,6509258,208296256
2016.09.20 17:33:40,java.lang.String,4615599,110774376
2016.09.20 17:33:40,[B,16856,68812488
...
Using this script:
# Example invocation: convert.heap.hist.to.csv.pl -f heap.2016.09.20.17.33.40.txt -dt "2016.09.20 17:33:40" >> heap.csv
use strict;
use warnings;
use Getopt::Long;

my $file;
my $dt;
GetOptions (
    "f=s"  => \$file,
    "dt=s" => \$dt
) or die "Error in command line arguments\n";
open my $fh, '<', $file or die $!;
while (not eof($fh)) {
    my $line = <$fh>;
    $line =~ s/\R//g; # remove newlines
    #   1:       4442084      369475664  [C
    # ';' is included in the class pattern so array-of-object classes like [Ljava.util.HashMap$Node; also match
    my ($instances, $size, $class) = ($line =~ /^\s*\d+:\s+(\d+)\s+(\d+)\s+([\$\[\w\.;]+)\s*$/);
    if ($instances) {
        print "$dt,$class,$instances,$size\n";
    }
}
close($fh);
Create a table to put the data in
CREATE TABLE heap_histogram (
histwhen timestamp without time zone NOT NULL,
class character varying NOT NULL,
instances integer NOT NULL,
bytes integer NOT NULL
);
Copy the data into your new table
\COPY heap_histogram FROM 'heap.csv' WITH DELIMITER ',' CSV ;
Run the slope query against the size (number of bytes):
SELECT class, REGR_SLOPE(bytes,extract(epoch from histwhen)) as slope
FROM public.heap_histogram
GROUP BY class
HAVING REGR_SLOPE(bytes,extract(epoch from histwhen)) > 0
ORDER BY slope DESC
;
Interpret the results:
class | slope
---------------------------+----------------------
java.util.ArrayList | 71.7993806279174
java.util.HashMap | 49.0324576155785
java.lang.String | 31.7770770326123
joe.schmoe.BusinessObject | 23.2036817108056
java.lang.ThreadLocal | 20.9013528767851
The slope is bytes added per second (since the unit of epoch is in seconds). If you use instances instead of size, then that's the number of instances added per second.
One of my lines of code creating this joe.schmoe.BusinessObject was responsible for the memory leak: it was creating the object and appending it to an array without checking whether it already existed. The other objects were also created along with the BusinessObject near the leaking code.
Can you accelerate time? I.e., can you write a dummy test client that forces it to do a week's worth of calls/requests in a few minutes or hours? This is your biggest friend, and if you don't have one, write one.
We used NetBeans a while ago to analyse heap dumps. It can be a bit slow, but it was effective. Eclipse just crashed, and the 32-bit Windows tools did as well.
If you have access to a 64-bit system, or a Linux system with 3 GB of RAM or more, you will find it easier to analyse the heap dumps.
Do you have access to change logs and incident reports? Large scale enterprises will normally have change management and incident management teams and this may be useful in tracking down when problems started happening.
When did it start going wrong? Talk to people and try and get some history. You may get someone saying, "Yeah, it was after they fixed XYZ in patch 6.43 that we got weird stuff happening".
I've had success with IBM Heap Analyzer. It offers several views of the heap, including largest drop-off in object size, most frequently occurring objects, and objects sorted by size.
There are great tools like Eclipse MAT and Heap Hero to analyze heap dumps. However, you need to provide these tools with heap dumps captured in the correct format and correct point in time.
This article gives you multiple options for capturing heap dumps. In my opinion, however, the first 3 are the effective options to use, and the others are good to be aware of.
1. jmap
2. HeapDumpOnOutOfMemoryError
3. jcmd
4. JVisualVM
5. JMX
6. Programmatic Approach
7. IBM Administrative Console
7 Options to capture Java Heap dumps
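For reference, the first three options come down to a JVM flag or a single command; the paths and <pid> below are placeholders:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp    (JVM startup flags: dump automatically on OutOfMemoryError)
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
jcmd <pid> GC.heap_dump /tmp/heap.hprof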
If it's happening after a week's usage, and your application is as byzantine as you describe, perhaps you're better off restarting it every week?
I know that's not fixing the problem, but it may be a time-effective solution. Are there time windows when you can have outages? Can you load balance and fail over one instance whilst keeping the second up? Perhaps you can trigger a restart when memory consumption breaches a certain limit (perhaps monitoring via JMX or similar).
I've used jhat; it's a bit crude, but it depends on the kind of framework you have.
