Java memory usage breakdown - native library (DLL) usage

My Java program is taking up a huge amount of memory (~3 GB) even though I have set -Xmx300m. Additionally, different tools report different memory usage:
jcmd: 576MB
Task Manager: 967MB
Resource Monitor: 3478MB
When I close the program, Task Manager shows the memory usage dropping by about 3 GB. My question is: how can I see what is using this memory? It seems like it is not Java, judging by the jcmd output. I suspect it might be a DLL that my Java program is using. Are there any tools that can be used here?

Here's a toolset and instructions.
You can first confirm your suspicion that it is the DLL by enabling Native Memory Tracking with -XX:NativeMemoryTracking=summary (or detail), then re-check with jcmd <pid> VM.native_memory to verify that it is not the JVM itself doing the native memory allocation.
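For example (a minimal sketch; <pid> and the jar name are placeholders, and the flag must be set at JVM startup):
java -XX:NativeMemoryTracking=summary -jar yourapp.jar
jcmd <pid> VM.native_memory summary
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
The baseline/summary.diff pair lets you watch which NMT category (heap, thread stacks, code cache, internal, etc.) grows over time.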
If you don't see a large, obvious chunk in the Native Memory Tracking output, you'll be out of Java land and will have to try the tools listed under "Native Memory Leaks from Outside the JVM" (jemalloc, valgrind, Purify, etc.).
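If you do end up outside the JVM and can reproduce the problem on Linux, a jemalloc profiling run is one way to see which call sites allocate. A rough sketch (assumes a jemalloc built with profiling enabled; the library path is a placeholder):
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 MALLOC_CONF=prof:true,lg_prof_interval:30 java -jar yourapp.jar
jeprof --show_bytes --text $(which java) jeprof.*.heap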

How do I analyze a Java heap dump when local memory is less than the size of the dumped heap?

I have a HotSpot JVM heap dump that I would like to analyze. The VM ran with -Xmx31g, and the heap dump file is 48 GB in size.
I won't even try jhat, as it requires about five times the heap memory (that would be 240 GB in my case) and is awfully slow.
Eclipse MAT crashes with an ArrayIndexOutOfBoundsException after analyzing the heap dump for several hours.
What other tools are available for that task? A suite of command line tools would be best, consisting of one program that transforms the heap dump into efficient data structures for analysis, combined with several other tools that work on the pre-structured data.
Normally, what I use is ParseHeapDump.sh, included with Eclipse Memory Analyzer and described here, and I run that on one of our more beefed-up servers (download and copy over the Linux .zip distro, unzip there). The shell script needs fewer resources than parsing the heap from the GUI, plus you can run it on your beefy server with more resources (you can allocate more by adding something like -vmargs -Xmx40g -XX:-UseGCOverheadLimit to the end of the last line of the script).
For instance, the last line of that file might look like this after modification
./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xmx40g -XX:-UseGCOverheadLimit
Run it like ./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof
After that succeeds, it creates a number of "index" files next to the .hprof file.
After creating the indices, I try to generate reports from them, scp those reports to my local machine, and try to see if I can find the culprit just by that (just the reports, not the indices). Here's a tutorial on creating the reports.
Example report:
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects
Other report options:
org.eclipse.mat.api:overview and org.eclipse.mat.api:top_components
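These are run the same way as the suspects report, e.g.:
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:overview
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:top_components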
If those reports are not enough and I need some more digging (e.g., via OQL), I scp the indices as well as the hprof file to my local machine, and then open the heap dump (with the indices in the same directory as the heap dump) with my Eclipse MAT GUI. From there, it does not need too much memory to run.
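For a taste of OQL, a query along these lines in the MAT OQL view would list suspiciously large strings (the class and threshold are just an example):
SELECT * FROM java.lang.String s WHERE s.value.@length > 1000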
EDIT:
I'd just like to add two notes:
As far as I know, generating the indices is the only memory-intensive part of Eclipse MAT. Once you have the indices, most of your processing in Eclipse MAT will not need that much memory.
Doing this via a shell script means I can run it on a headless server (and I normally do run it on a headless server, because they're normally the most powerful ones). And if you have a server that can generate a heap dump of that size, chances are you have another server out there that can process that much of a heap dump as well.
First step: increase the amount of RAM you are allocating to MAT. By default it's not very much and it can't open large files.
If you use MAT on macOS, the MemoryAnalyzer.ini file lives in MemoryAnalyzer.app/Contents/MacOS. Making adjustments to that file did not "take" for me. Instead, you can create a modified startup command/shell script based on the contents of that file and run it from that directory. In my case I wanted a 20 GB heap:
./MemoryAnalyzer -vmargs -Xmx20g -XX:-UseGCOverheadLimit ... other params desired
Just run this command/script from the Contents/MacOS directory via a terminal to start the GUI with more RAM available.
I suggest trying YourKit. It usually needs a little less memory than the heap dump size (it indexes the dump and uses that information to retrieve what you want).
The accepted answer to this related question should provide a good start for you (it uses live jmap histograms instead of heap dumps, so it's very fast, but it requires access to the running process):
Method for finding memory leak in large Java heap dumps
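The histogram approach boils down to something like this (<pid> is a placeholder; the :live option forces a full GC so only live objects are counted):
jmap -histo:live <pid> | head -30
which prints per-class instance counts and shallow sizes, largest first.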
Most other heap analysers (I use IBM's http://www.alphaworks.ibm.com/tech/heapanalyzer) require at least as much RAM as the heap, and then some, if you're expecting a nice GUI tool.
Other than that, many developers use alternative approaches, like live stack analysis to get an idea of what's going on.
Although I must question why your heaps are so large: the effect on allocation and garbage collection must be massive. I'd bet a large percentage of what's in your heap should actually be stored in a database or a persistent cache, etc.
This person (http://blog.ragozin.info/2015/02/programatic-heapdump-analysis.html) wrote a custom "heap analyzer" that just exposes a "query style" interface over the heap dump file, instead of actually loading the file into memory:
https://github.com/aragozin/heaplib
Though I don't know whether its query language is better than the Eclipse OQL mentioned in the accepted answer here.
The latest snapshot build of Eclipse Memory Analyzer has a facility to randomly discard a certain percentage of objects to reduce memory consumption and allow the remaining objects to be analyzed. See Bug 563960 and the nightly snapshot build to test this facility before it is included in the next release of MAT. Update: it is now included in released version 1.11.0.
A not-so-well-known tool, http://dr-brenschede.de/bheapsampler/, works well for large heaps. It works by sampling, so it doesn't have to read the entire dump, though it is a bit finicky.
This is not a command-line solution; however, I like the tools:
Copy the heap dump to a server large enough to host it. It is quite possible that the original server can be used.
Log in to the server via ssh -X to run the graphical tool remotely, and use jvisualvm from the Java binary directory to load the .hprof file of the heap dump.
The tool does not load the complete heap dump into memory at once, but loads parts as they are required. Of course, if you look around enough in the file, the required memory will eventually reach the size of the heap dump.
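For example (host name and JDK path are placeholders):
ssh -X analyst@bigserver
/path/to/jdk/bin/jvisualvm
and then load the .hprof via File > Load.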
I came across an interesting tool called JXray. It offers a limited evaluation trial license. I found it very useful for tracking down memory leaks. You may want to give it a shot.
Try using JProfiler. It works well for analyzing large .hprof files; I have tried it with a file around 22 GB in size.
https://www.ej-technologies.com/products/jprofiler/overview.html
It's a $499 per-developer license, but it has a free 10-day evaluation.
When the problem can be "easily" reproduced, one unmentioned alternative is to take heap dumps before memory grows that big (e.g., jmap -dump:format=b,file=heap.bin <pid>).
In many cases you will already get an idea of what's going on without waiting for an OOM.
In addition, MAT provides a feature to compare different snapshots, which can come in handy (see https://stackoverflow.com/a/55926302/898154 for instructions and a description).
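For example (file names and <pid> are placeholders):
jmap -dump:format=b,file=before.hprof <pid>
jmap -dump:format=b,file=after.hprof <pid>
Take the second dump after the suspected leak has had time to grow, then open both dumps in MAT and compare them.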

Memory Leak Detection in DLLs

I use third-party DLLs in my Java application to access native methods written in C. My application often crashes with a "malloc failed" or "out of swap space" error message. There is no memory leak in my Java application (verified with profilers). Now I suspect a memory leak in the third-party DLLs. Is there any way to find a leak in the DLLs?
I used a C/C++ tool to detect memory leaks in my DLLs several months ago:
http://www.codeproject.com/Articles/8448/Memory-Leak-Detection
And you also have:
http://vld.codeplex.com/
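For what it's worth, Visual Leak Detector hooks in just by including its header in a debug build of the DLL; at process exit it reports leaked blocks, with call stacks, in the Visual Studio debugger output. A minimal sketch (assumes VLD is installed and on the include path; the function name is invented):
#include <vld.h>    /* enables leak detection for this module in debug builds */
#include <stdlib.h>

void leaky_function(void)
{
    char *buf = (char *)malloc(1024);   /* never freed: VLD reports this allocation at exit */
    (void)buf;
}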
My first choice for detecting memory issues is Valgrind. With Java and the JIT it might not always work, but it is still worth a shot. Try running:
valgrind --smc-check=all --trace-children=yes --show-reachable=yes --leak-check=full [your command]

Java performance tuning, JNI memory leak

I have a Java application on a Linux platform, using Java 6. It is normal SDK Java plus some JNI.
We use VisualVM to monitor for the memory leak. From VisualVM we notice that the application does not consume heap continuously, but the whole process's memory keeps increasing until Linux kills the process.
So we suspect the JNI part, since a memory leak in the JNI part cannot be seen by VisualVM. Could someone drop some hints on how to check for a JNI memory leak when doing Java performance testing?
Oracle has some documentation on how you can create your own leak tracker in such a case. The dbx command is mentioned as one alternative available on Linux.
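The idea from that documentation, reduced to a minimal C sketch (the tracked_* names are invented for illustration): route the JNI code's allocations through counting wrappers, so you can print how many native bytes are still live at any point.
#include <stdio.h>
#include <stdlib.h>

static size_t live_bytes = 0;   /* total native bytes currently allocated */

void *tracked_malloc(size_t n)
{
    /* stash the size in front of the block so tracked_free can account for it
       (note: this weakens alignment for types stricter than size_t; fine for a sketch) */
    size_t *p = (size_t *)malloc(n + sizeof(size_t));
    if (p == NULL) return NULL;
    *p = n;
    live_bytes += n;
    return p + 1;
}

void tracked_free(void *ptr)
{
    if (ptr == NULL) return;
    size_t *p = (size_t *)ptr - 1;
    live_bytes -= *p;
    free(p);
}

void report_native_usage(void)
{
    fprintf(stderr, "native bytes still allocated: %zu\n", live_bytes);
}
Call report_native_usage() periodically from a JNI entry point; if the number only ever grows, the leak is in the native code.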

How to calculate the max memory that can be used by a Java app

I have a Java app that has a max heap of 1024 MB and perm gen space of 256 MB.
Does that guarantee this app will never use more than 1280 MB (1024 + 256)?
Does stack memory also come from the heap size above, or is it extra memory consumption?
If the Java app uses native code that consumes memory, where does that memory come from: the heap, the perm gen, or more RAM?
I am interested in knowing how Java uses memory; please comment. Any links that can provide a clear picture are also welcome. Thank you.
An executing Java app uses more memory than the main heap and permgen space. For example:
There is the memory that holds the executable code of the Java program and any shared libraries that are dynamically linked by the executable.
There is the memory used to represent out-of-heap data structures, buffers, etc. that are created by the Java executable, by its native libraries, or by the application's native libraries.
There is the memory used to represent Java thread stacks.
And there's probably more.
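As a rough illustration of how these add up (the numbers are invented for the example): a JVM started with -Xmx1024m and 256 MB of perm gen, running 200 threads at -Xss512k, is already at about 1024 + 256 + 100 = 1380 MB before you count the JIT code cache, GC bookkeeping, NIO direct buffers, or anything the native libraries malloc.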
There is no recommended way to predict the total memory usage of a Java application. Even measuring it is tricky, especially when you consider that some of that memory may be shared with other JVMs or even other non-Java applications.
From your question I see how confused you are about memory management in Java.
Please go through this white paper for a better understanding: Memory Management in the Java HotSpot™ Virtual Machine.

Memory footprint issues with Java, JNI, and C application

I have a piece of an application that is written in C; it spawns a JVM and uses JNI to interact with a Java application. My memory footprint via Process Explorer gets up to 1 GB and the process runs out of memory. As far as I know, it should be able to get up to 2 GB. One thing I believe is that the memory the JVM is using isn't visible in Process Explorer. My -Xmx is set to 256 MB; I added some statements to watch the Java-side memory, and it peaks at 256 MB, GC is doing its job, and it is all good on that side. So my question is: where is the other 700+ MB being consumed? Anyone out there a Java/JNI/C memory expert?
There could be a leak in the JNI code.
Remember to call (*env)->DeleteLocalRef() on any object references you get once you are done with them. If you use any native C buffers to create new Java objects, make sure you free them once the object is created. Check the JNI Specification for further guidelines.
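An illustrative C sketch of both points (the function names are invented; only the JNI calls are real API):
#include <jni.h>

/* Copy a native buffer into a Java byte[]; the caller can free the buffer afterwards. */
jbyteArray make_java_copy(JNIEnv *env, const char *buf, size_t n)
{
    jbyteArray arr = (*env)->NewByteArray(env, (jsize)n);
    if (arr != NULL) {
        (*env)->SetByteArrayRegion(env, arr, 0, (jsize)n, (const jbyte *)buf);
    }
    return arr;   /* the Java array owns a copy; the native buffer can now be freed */
}

/* Drop local refs inside loops so the local reference table does not keep growing. */
void process_items(JNIEnv *env, jobjectArray items, jsize count)
{
    jsize i;
    for (i = 0; i < count; i++) {
        jobject item = (*env)->GetObjectArrayElement(env, items, i);
        /* ... use item ... */
        (*env)->DeleteLocalRef(env, item);
    }
}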
Depending on the VM you are using you might be able to turn on JNI checking. For example, on the IBM JDK you can specify "-Xcheck:jni".
Try a test app in C that doesn't spawn the JVM but instead tries to allocate more and more memory. See whether the test app can reach the 2 GB barrier.
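A sketch of such a probe (allocates in 10 MB chunks until malloc fails; the memset makes sure the pages are really committed):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t total_mb = 0;
    for (;;) {
        void *p = malloc(10 * 1024 * 1024);
        if (p == NULL) break;            /* intentionally leaked: we are probing the limit */
        memset(p, 1, 10 * 1024 * 1024);  /* touch the pages so they count against the process */
        total_mb += 10;
    }
    printf("malloc failed after %zu MB\n", total_mb);
    return 0;
}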
The C and JNI code can allocate memory as well (malloc/free/new/etc.), which is outside the VM's 256 MB. -Xmx only restricts what the VM will allocate itself. Depending on what you're allocating in the C code, and what other things are loaded in memory, you may or may not be able to get up to 2 GB.
If you say that it's the Windows process that runs out of memory, as opposed to the JVM, then my initial guess is that you probably invoke some (your own) native methods from the JVM, and those native methods leak memory. So I concur with @John Gardner here.
Well, thanks to all of your help, especially @alexander, I have discovered that all the extra memory that isn't visible via Process Explorer is being used by the Java heap. In fact, via other tests that I have run, the JVM's memory consumption is included in what I see from Process Explorer. So the heap is taking large amounts of memory; I will have to do some more research on that and maybe ask a separate question.
Write a C test harness and use valgrind/Alleyoop to check for leakage in your C code; similarly, use the Java jvisualvm tool on the Java side.
