I use third-party DLLs in my Java application to access native methods written in C. My application often crashes with a "malloc failed" or "out of swap space" error message. There is no memory leak in my Java application (verified with profilers), so now I suspect a memory leak in the third-party DLLs. Is there any way to find a leak in the DLLs?
Several months ago I used a C/C++ tool to detect memory leaks in my DLLs:
http://www.codeproject.com/Articles/8448/Memory-Leak-Detection
There is also Visual Leak Detector:
http://vld.codeplex.com/
My first choice for detecting memory issues is Valgrind. With Java and the JIT it might not always work, but it's still worth giving it a shot. Try running:
valgrind --smc-check=all --trace-children=yes --show-reachable=yes --leak-check=full [your command]
cheers,
My Java program is taking up a huge amount of memory (~3 GB) even though I have set -Xmx300m. Additionally, different tools report different memory usage:
jcmd: 576MB
Task Manager: 967MB
Resource Monitor: 3478MB
When I close the program, Task Manager shows the memory usage dropping by about 3 GB. My question is: how can I see what is using this memory? Judging by the jcmd output, it does not seem to be Java itself. I suspect it might be a DLL that my Java program is using. Are there any tools which can be used here?
Here's a toolset and instructions.
You can first confirm your suspicion that it is a DLL by enabling Native Memory Tracking with -XX:NativeMemoryTracking=summary (or detail), then re-checking with jcmd <pid> VM.native_memory summary to verify it's not the JVM itself doing the native memory allocation.
If you don't see a large obvious chunk under the Native Memory Tracking part, you'll be out of Java land and have to try the tools listed in "Native Memory Leaks from Outside the JVM" (jemalloc, valgrind, Purify, etc.).
I have a Java application running on Linux with Java 6. It is normal SDK Java plus some JNI.
We are using VisualVM to monitor for the memory leak. VisualVM shows that the application does not consume heap continuously, but the whole process's memory keeps increasing until Linux kills the process.
So we suspect the JNI part, since a memory leak in the JNI code cannot be seen by VisualVM. Could someone give some hints on how to check for JNI memory leaks when doing Java performance testing?
Oracle has some documentation on how you can create your own leak tracker in such a case. The dbx command is mentioned as one alternative available on Linux.
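If you can rebuild the native/JNI layer, a crude version of such a tracker can be as little as a pair of counting wrappers around malloc and free. The sketch below is only my own illustration (the trk_* names are invented, and it ignores thread safety), not the approach from Oracle's documentation:

/* leaktrack.c - a minimal, hypothetical allocation counter for native/JNI code.
 * Not thread-safe; meant only for a quick leak check during testing. */
#include <stdio.h>
#include <stdlib.h>

static size_t trk_live_blocks = 0;   /* blocks allocated but not yet freed */

void *trk_malloc(size_t n, const char *where)
{
    void *p = malloc(n);
    if (p != NULL) {
        trk_live_blocks++;
        fprintf(stderr, "[trk] +%lu bytes at %s (live blocks: %lu)\n",
                (unsigned long)n, where, (unsigned long)trk_live_blocks);
    }
    return p;
}

void trk_free(void *p, const char *where)
{
    if (p != NULL) {
        trk_live_blocks--;
        fprintf(stderr, "[trk] free at %s (live blocks: %lu)\n",
                where, (unsigned long)trk_live_blocks);
    }
    free(p);
}

Route the native code's allocations through these wrappers during your performance test; if the live block count keeps climbing while the Java heap stays flat, the leak is on the JNI side.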
Having analyzed a light-load web application running in Tomcat using the JMX Console, it turns out the "PS Old Gen" space is growing slowly but constantly. It starts at 200 MB and grows by around 80 MB/hour.
CPU is not an issue, it runs at 0-1% on average, but somewhere it leaks memory, so it will become unstable some days after deployment.
How do I find out what objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes output from Java's heap profiler, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching Tomcat profiled directly from the IDE (for localhost), or remote profiling using a provided JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap as long as the additional "consumed" space disappears whenever a GC runs in the old generation.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC, and afterwards the used memory should drop considerably. Only if it does not go down after a GC should you investigate what is still holding onto those objects.
JRockit Mission Control can analyze memory leaks while connected to the JVM, with no need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it for free.
We have a java program that requires a large amount of heap space - we start it with (among other command line arguments) the argument -Xmx1500m, which specifies a maximum heap space of 1500 MB. When starting this program on a Windows XP box that has been freshly rebooted, it will start and run without issues. But if the program has run several times, the computer has been up for a while, etc., when it tries to start I get this error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I suspect that Windows itself is suffering from memory fragmentation, but I don't know how to confirm this suspicion. At the time this happens, Task Manager and Sysinternals Process Explorer report 2000 MB of free memory. I have also looked at a related question about internal fragmentation.
So the first question is: how do I confirm my suspicion?
The second question is: if my suspicion is correct, does anyone know of any tools to solve this problem? I've looked around quite a bit, but I haven't found anything that helps, other than periodically rebooting the machine.
PS: changing operating systems is also not currently a viable option.
I agree with Torlack: a lot of this is because other DLLs are getting loaded at certain spots in the address space, breaking up how much memory you can get for the VM in one big chunk.
You can do some work on Windows XP if you have more than 3 GB of memory to get some of the Windows components moved around; look up PAE here:
http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx
Your best bet, if you really need more than 1.2 GB of memory for your Java app, is to look at 64-bit Windows, Linux, or OS X. If you're using any kind of native libraries with your app you'll have to recompile them for 64-bit, but that's going to be a lot easier than trying to rebase DLLs and such to maximize the memory you can get on 32-bit Windows.
Another option would be to split your program up into multiple VMs and have them communicate with each other via RMI, messaging, or something similar. That way each VM can hold some subset of the memory you need. Without knowing what your app does, I'm not sure whether this will help in any way, though.
Unless you are running out of page file space, this issue isn't that the computer is running out of memory. The whole point of virtual memory is to allow the processes to use more virtual memory than is physically available.
Not knowing how the JVM handles the heap, it is a bit hard to say exactly what the problem is, but one of the common issues is that there isn't enough contiguous free address space available in your process to allow the heap to be extended. Why this would be a problem after the machine has been running a while is a bit confusing.
I've been working on a similar problem at work. I have found that running the program under WinDbg and using the "!address" and "!address -summary" commands has been invaluable in tracking down why a process's virtual address space has become fragmented. You can also run the program after a reboot and use "!address" to take a picture of the address space, then do the same when the program no longer starts; comparing the two might clue you in on the problem. Maybe something as simple as an extra DLL getting loaded is causing it.
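If WinDbg isn't available on the affected machine, you can take a similar picture programmatically. The sketch below (my own illustration, not a standard tool) walks the virtual address space with VirtualQuery and reports the largest free contiguous region, which is what limits how big a heap -Xmx can reserve. It only sees its own process, but since most DLLs load at the same preferred base address in every process, it still gives a useful first impression; compile it as a 32-bit program so it sees the same kind of address space a 32-bit JVM would:

/* addrscan.c - report the largest free contiguous region in this process's
 * virtual address space. Illustrative sketch only. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = 0;
    SIZE_T largest_free = 0;

    /* walk the address space region by region until VirtualQuery fails */
    while (VirtualQuery(addr, &mbi, sizeof(mbi)) != 0) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest_free)
            largest_free = mbi.RegionSize;
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    printf("Largest free contiguous region: %lu MB\n",
           (unsigned long)(largest_free / (1024 * 1024)));
    return 0;
}

If the largest free region reported right after a reboot is well above 1500 MB but shrinks below it as the machine stays up, that would confirm the fragmentation suspicion.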
I suspect that the problem is Windows memory fragmentation. There is another question here on Stack Overflow, Java Maximum Memory on Windows XP, that mentions using Process Explorer to look at where DLLs are mapped into memory, and then addressing the problem by rebasing the DLLs so that they load into memory in a more compact way.
Using Minimem (http://minimem.kerkia.net/) for that application might fix your problem. However, I'm not sure this is the answer you are looking for. I hope it helps.
Maybe you should consider starting the program once, reserving the memory, and not ending the VM after each run. Look into different GC options and make sure you release your objects.
Use VMMap from Microsoft's Sysinternals tools to view the fragmentation of the virtual address space and identify what is breaking up the space.
I have a piece of an application that is written in C; it spawns a JVM and uses JNI to interact with a Java application. The process's memory footprint in Process Explorer gets up to 1 GB and then it runs out of memory, and as far as I know it should be able to get up to 2 GB. One thing I believe is that the memory the JVM is using isn't visible in Process Explorer. My -Xmx is set to 256 MB; I added some statements to watch the Java-side memory, and it peaks at 256 MB with GC doing its job, so all is good on that side. So my question is: where are the other 700+ MB being consumed? Anyone out there a Java/JNI/C memory expert?
There could be a leak in the JNI code.
Remember to use (*jni)->DeleteLocalRef() on any object references you obtain once you are done with them. If you use any native C buffers to create new Java objects, make sure you free them once the object is created. Check the JNI specification for further guidelines.
Depending on the VM you are using you might be able to turn on JNI checking. For example, on the IBM JDK you can specify "-Xcheck:jni".
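As a sketch of the cleanup pattern described above (the class name com.example.Native and the string it builds are invented purely for illustration), a JNI function that creates a Java object from a temporary C buffer should free the buffer and drop any local references it no longer needs:

#include <jni.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative JNI function backing a hypothetical native method
 * com.example.Native.describe(). */
JNIEXPORT jstring JNICALL
Java_com_example_Native_describe(JNIEnv *env, jobject self)
{
    /* this buffer lives in native memory, outside the Java heap and -Xmx */
    char *buf = malloc(256);
    if (buf == NULL)
        return NULL;
    strcpy(buf, "hello from native code");

    jstring result = (*env)->NewStringUTF(env, buf);

    /* free the C buffer as soon as the Java object has been created from it */
    free(buf);

    /* local references created in native code (especially inside loops) should
     * be released explicitly, or they pin objects until the method returns */
    jclass cls = (*env)->GetObjectClass(env, self);
    /* ... use cls ... */
    (*env)->DeleteLocalRef(env, cls);

    return result;
}

Every malloc here that is not matched by a free is invisible to -Xmx and to Java heap profilers, which is exactly why leaks on this side are easy to miss.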
Try a test app in C that doesn't spawn the JVM but instead tries to allocate more and more memory. See whether the test app can reach the 2 GB barrier.
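A minimal version of that test could look like the sketch below (the 256 MB chunk size is an arbitrary choice); build it with the same compiler and bitness as your real C code and see how far it gets before malloc fails:

/* alloc_test.c - keep allocating 256 MB chunks until malloc fails, to see how
 * much memory a plain C process (no JVM) can obtain on this machine. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK ((size_t)256 * 1024 * 1024)

int main(void)
{
    unsigned long total_mb = 0;

    for (;;) {
        char *p = malloc(CHUNK);
        if (p == NULL)
            break;
        /* touch the memory so the pages are actually committed */
        memset(p, 0, CHUNK);
        total_mb += 256;
        printf("allocated %lu MB so far\n", total_mb);
    }

    printf("malloc failed after %lu MB\n", total_mb);
    return 0;
}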
The C and JNI code can allocate memory as well (malloc/free/new/etc.), and that memory is outside of the VM's 256 MB. -Xmx only restricts what the VM will allocate for the Java heap itself. Depending on what you're allocating in the C code, and what else is loaded in memory, you may or may not be able to get up to 2 GB.
If you say that it's the Windows process that runs out of memory, as opposed to the JVM, then my initial guess is that you probably invoke some (of your own) native methods from the JVM and those native methods leak memory. So I concur with @John Gardner here.
Well, thanks to all of your help, especially @alexander, I have discovered that the extra memory I couldn't account for in Process Explorer is being used by the Java heap. In fact, other tests I have run show that the JVM's memory consumption is included in what I see in Process Explorer. So the heap is taking large amounts of memory; I will have to do some more research on that and maybe ask a separate question.
Write a C test harness and use Valgrind/Alleyoop to check for leaks in your C code, and similarly use the Java jvisualvm tool on the Java side.