Running out of memory while analyzing a Java Heap Dump

I have a curious problem: I need to analyze a Java heap dump (from an IBM JRE) that is 1.5 GB in size. The problem is that while analyzing the dump (I've tried HeapAnalyzer and the IBM Memory Analyzer 0.5), the tools run out of memory and I can't really analyze the dump. I have 3 GB of RAM in my machine, but it seems that is not enough to analyze the 1.5 GB dump.
My question is: do you know of a specific tool for heap dump analysis (supporting IBM JRE dumps) that I could run with the amount of memory I have?
Thanks.

Try the SAP Memory Analyzer tool (now Eclipse MAT), which also has an Eclipse plugin. This tool creates index files on disk as it processes the dump file and requires much less memory than your other options. I'm pretty sure it supports the newer IBM JREs. That being said, with a 1.5 GB dump file you might have no other option but to run a 64-bit JVM to analyze it. I usually estimate that a heap dump file of size n takes 5*n memory to open using standard tools and 3*n memory to open using MAT, but your mileage will vary depending on what the dump actually contains.
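If the analyzer itself runs out of memory, you can raise its own heap before retrying. A minimal sketch, assuming a standalone MAT install and that your 3 GB machine can spare about 2 GB for the tool (both assumptions): add the following at the end of MemoryAnalyzer.ini, next to the MAT executable, since everything after -vmargs is passed to the JVM that runs the analyzer:
-vmargs
-Xmx2g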

It's going to be difficult to analyze a 1.5 GB heap dump with 3 GB of RAM, because out of that 3 GB your OS, other processes, services, etc. easily occupy 0.5 GB, so you are left with only about 2.5 GB. The HeapHero tool is efficient at analyzing heap dumps; it should need only about 0.5 GB more than the size of the heap dump. You can give it a try. But the best recommendation is to analyze the heap dump on a machine that has adequate memory, or to get an AWS EC2 instance just for the period of analyzing heap dumps and terminate the instance afterwards.
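If you go the EC2 route, a rough sketch with the AWS CLI (every identifier below is a placeholder; pick a memory-optimized instance type with comfortably more RAM than the dump size):
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type r5.xlarge --count 1 --key-name my-key
# copy the dump up, run your analyzer there, download the report, then:
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0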

Related

Understanding java memory usage

I am trying to solve a memory issue I am having with my tomcat servers and I have some questions about memory usage.
When I check my process memory usage with top, I see it's using 1 GB of physical memory. After creating a core dump using gdb, the core file size is 2.5 GB, and when analyzing the HPROF file created by jmap, it states that 240 MB is used.
So if top shows 1 GB, why does the HPROF file show only 240 MB? Where did the other 760 MB go?
Have you tried running jmap with the -dump:live,format=b option set? The JVM usually runs a GC before taking such a dump.
Also, JVM memory is not just heap memory: it also contains JIT-compiled code, thread stacks, native method and direct memory areas, and even threads themselves are not free. You can read more about it here. Just make sure to check whether all of these add up to what top reports.
I would suggest using VisualVM or YourKit and comparing the memory. Also, which GC are you using? Some GCs don't shrink the heap after growing it, but if a GC was triggered during the heap dump it might have freed up some memory (try G1GC).
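For reference, a minimal sketch of how the two kinds of dump are taken, which makes the discrepancy easier to reason about (the PID and file names are placeholders):
jmap -dump:format=b,file=full.hprof <pid>        # includes unreachable (garbage) objects still on the heap
jmap -dump:live,format=b,file=live.hprof <pid>   # triggers a full GC first, so only reachable objects end up in the file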

Java heap dump and the heap size after the heap analysis differ

I am experiencing a memory leak, and here are some details.
After the leak:
top shows 50 GB of resident memory
the heap dump file size is 25 GB
Eclipse MAT tells me the heap size is 10 GB
Before the leak:
top shows 30 GB of resident memory
the heap dump file size is 20 GB
Eclipse MAT tells me the heap size is 10 GB
I am pretty surprised by the difference between top, the heap dump size, and the actual heap size.
I am guessing that the difference between top and the heap comes from the garbage collector's heap and native heap areas.
But how can the heap dump file size and the actual heap size (from the Eclipse MAT analyzer) differ?
Any insight on this problem?
UPDATE / ANSWER
Some of the suggestions are to use jcmd (https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html), as the page describes "Native Memory Tracking". But if you read the page carefully, you will see:
Since NMT doesn't track memory allocations by non-JVM code, you may have to use tools supported by the operating system to detect memory leaks in native code.
So, in the case of a leak inside a native library, jcmd is not an option.
After crawling the Internet for days and trying out various profilers, the most effective approach for this problem was using the jemalloc profiler.
This page helped me a lot!
https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/
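For reference, a rough sketch of the jemalloc approach described in that post (the library path and the application launch command are assumptions; adjust to your distro and startup script):
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so            # load jemalloc instead of glibc malloc
export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17    # dump a heap profile roughly every 1 GB allocated
java -jar yourapp.jar                                                 # run the leaking process as usual; it writes jeprof.*.heap files
jeprof --show_bytes --gif $(which java) jeprof.*.heap > profile.gif   # summarize which native call paths allocated the memory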
I have experienced a similar situation. The difference (HPROF file size minus the heap size indicated by MAT) is effectively garbage (unreachable objects). The unreachable objects histogram in MAT should help here.
jmap -F -dump:live,format=b,file=<file_name.hprof> <process_id> will only dump live objects, NOT garbage.
top and other OS-level tools show how much system memory your JVM process consumes. The Java heap, defined by the -Xmx command line option, is only a part of that memory. Apart from the heap, the JVM needs some memory for itself. Then there are Java threads, each requiring a certain amount of memory, and the Metaspace/Permanent Generation, and several others. You can read this blog post and this SO answer for more information.
About the size of the dump file versus the actual heap size, the answer by @arnab-biswas is certainly true: MAT reports the size of the actually used heap, consumed by live objects, but the heap dump contains the whole heap, including garbage.
In order to monitor the native memory you need to start your application with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail. Note that there is a performance penalty, so think twice before doing it in production.
When memory tracking is active you can use jcmd <pid> VM.native_memory summary. There are other commands available as well, check https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html or search for native memory tracking.
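A minimal sketch of that workflow end to end (the PID and the application jar are placeholders):
java -XX:NativeMemoryTracking=summary -jar yourapp.jar   # start the JVM with tracking enabled
jcmd <pid> VM.native_memory baseline                     # record a baseline once the app is warm
jcmd <pid> VM.native_memory summary.diff                 # later: per-category growth since the baseline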
EDIT: I didn't follow the links before answering; you may be looking for something like https://github.com/jeffgriffith/native-jvm-leaks instead.
You are asking for an answer drawing from credible/official sources. Let me give it a try.
1) Why is the memory consumed by my JVM process (shown by top) larger than the heap size?
Because the total memory consumption of the JVM process consists of more things than just the Java heap. A few examples (a rough back-of-the-envelope illustration follows below):
Generated (JIT-compiled) code
Loaded libraries (including jar and class files)
Control structures for the java heap
Thread Stacks
User native memory (malloc'ed in JNI)
Credible/official sources: Run-Time Data Areas and this blog post
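To make that concrete, a back-of-the-envelope illustration (every number below is invented for the example, not taken from the question):
# -Xmx4g heap                               -> up to ~4096 MB
# Metaspace / PermGen                       -> up to a few hundred MB
# 200 threads x 1 MB default stack (-Xss)   -> ~200 MB
# JIT code cache                            -> tens to a few hundred MB
# GC control structures, mapped jars/libraries, JNI and direct buffers -> more on top
# => the resident size reported by top can easily exceed -Xmx by a gigabyte or more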
2) Why is the heap dump size much bigger than what MAT reports?
Because MAT does not show the complete heap. During the index creation, the Memory Analyzer removes unreachable objects because the various garbage collector algorithms tend to leave some garbage behind.
Credible/official sources: MemoryAnalyzer/FAQ
Heap dump:
A heap dump is a snapshot of the memory of a Java process at a certain point of time. There are different formats for persisting this data, and depending on the format it may contain different pieces of information, but in general the snapshot contains information about the java objects and classes in the heap at the moment the snapshot was triggered. Usually a full GC is triggered before the heap dump is written so it contains information about the remaining objects.
For information related to MAT, see http://help.eclipse.org/neon/index.jsp?topic=/org.eclipse.mat.ui.help/welcome.html

Memory used by application is larger than actual heap dump size

I am running an Apache Ignite application. When I check memory usage using Linux free, the used memory shows as 9.8 GB. But when I take a heap dump using Eclipse MAT, its size is only about 1.8 GB. Why is this happening? The default heap memory allocated in Ignite is 21 GB. I have also not done any GC tuning.
When Eclipse MAT takes the heap dump, it most likely forces a full garbage collection, so you only see objects that are actually reachable. The JVM itself has not done this yet because the heap is very large and there is still a lot of available memory. BTW, this full GC will happen eventually, making your Ignite node unresponsive for a significant period of time. I would recommend not allocating more than 10-12 GB per node.

Analysing large Java heap dumps - memory error

I have a very peculiar problem. I have a heap dump of 30 GB and I want to analyze it on my laptop (which has 8 GB of RAM). I tried doing that with MAT and IBM Heap Analyzer, but as per their recommendation the -Xmx size should be more than the dump size. I also tried to analyze the heap dump with the heapDumpParser.bat file of MAT but received a memory error.
Any suggestions on how I can analyze the dump on my laptop successfully?
Thanks in advance!
Memory Analyzer is probably the best tool for analysing out-of-memory issues, but it does require a lot of memory.
If you are unable to find a machine large enough to handle your dump, you could try using the jdmpview command line tool that ships with the IBM SDK to perform some basic investigation.
It works best with the core dumps generated on out-of-memory errors rather than with the PHD files, as it does not need to load their contents into memory.
You can find it in jre/bin and need to run:
jdmpview -core core_file_name
You should probably start by running the command:
info class
as that will generate a basic list of object types, instance counts and sizes.
There are full docs here:
http://www-01.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.win.80.doc/diag/tools/dump_viewer_dtfjview/dump_viewer.html

Analyze/track down potential native memory leak in JVM

We're running an application on Linux using Java 1.6 (OpenJDK as well as Oracle JDK). The JVM itself has a maximum of 3.5 GB heap and 512 MB permgen space. However, after running for a while, top reports the process is using about 8 GB of virtual memory and smem -s swap p reports about 3.5 GB being swapped.
After running a bigger import of thousands of image files on one server, almost no swap space is left and calls to native applications (in our case Im4java calls to ImageMagick) fail because the OS cannot allocate memory for those applications.
In another case the swap space filled up over the course of several weeks, resulting in the OS killing the JVM because it had run out of swap space.
I understand that the JVM will need more than 4 GB of memory for heap (max 3.5 GB), permgen (max 512 MB), code cache, loaded libraries, JNI frames etc.
The problem I'm having is finding out what is actually using how much of that memory. If the JVM ran out of heap memory, I'd get a dump I could analyze, but in our case it's the OS memory that is eaten up, so the JVM doesn't generate a dump.
I know there's jrcmd for JRockit, but unfortunately we can't just switch the JVM.
There also seem to be a couple of libraries that allow tracking native memory usage, but most of those seem to require recompiling the native code. Besides Im4java (which AFAIK just runs a native process; we don't use DLL/SO integration here) and the JVM, there's no other native code involved that we know of.
Besides that, we can't use a library/tool that might have a huge impact on performance or stability in order to track memory usage on a production system over a long period (several weeks).
So the question is:
How can we get information on what the JVM is actually needing all that memory for, ideally with some detailed information?
You may find references to "zlib/gzip" (PDF handling or HTTP encoding since Java 7), "java2d" or "jai" when replacing the memory allocator (with jemalloc or tcmalloc) in the JVM.
But to really diagnose a native memory leak, JIT code symbol mapping and recent Linux profiling tools are required: perf, perf-map-agent and bcc.
Please refer to details in related answer https://stackoverflow.com/a/52767721/737790
Many thanks to Brendan Gregg
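A rough sketch of what those tools end up looking like on Linux (the PID and durations are placeholders; perf-map-agent has to be built separately, and the JVM should run with -XX:+PreserveFramePointer so the stacks are walkable):
perf record -g -p <pid> -- sleep 60        # sample native and JIT stacks of the JVM process for 60 seconds
perf script > out.stacks                   # resolve them; JIT frames need the /tmp/perf-<pid>.map written by perf-map-agent
/usr/share/bcc/tools/memleak -p <pid> 30   # or: report outstanding native allocations every 30 seconds with bcc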
