Java Process consumes more than 2 GB of memory - java

The Java process of the JacORB notification service consumes about 2 GB of memory on Windows 2008. From YourKit I can see that the Java heap does not exceed 30 MB, so I concluded that there is no leak in the Java heap. I would like to know how to find out where the memory is being consumed. I have read a few articles on the internet about the Java native heap. How can I conclude whether there is a leak in the native heap? We are using JRE 1.6 from Oracle (Sun).

An exceptionally good tool for analyzing memory leaks in Java is the Eclipse Memory Analyzer (MAT). It can be downloaded as an Eclipse plugin or as a standalone application; I recommend the latter. Check it out. It also has very good help pages.
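Note that MAT (and YourKit) mainly show the Java heap, which you already know is small here. As a quick cross-check that the growth really is native, something like the following could be dropped into the process as a background thread. It is only a rough sketch using the standard java.lang.management API: it logs what the JVM itself accounts for, so you can compare it with the process size in Task Manager; whatever the OS reports beyond heap plus non-heap committed is native memory (thread stacks, direct NIO buffers, memory allocated by JNI code, and so on).

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class JvmMemoryReporter {

        // Start this as a daemon thread inside the JVM you want to watch.
        public static void start() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                    while (true) {
                        MemoryUsage heap = mem.getHeapMemoryUsage();       // young + old generation
                        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // permgen + code cache
                        System.out.println("heap committed = " + (heap.getCommitted() >> 20)
                                + " MB, non-heap committed = " + (nonHeap.getCommitted() >> 20) + " MB");
                        try {
                            Thread.sleep(60 * 1000L); // log once a minute
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            }, "jvm-memory-reporter");
            t.setDaemon(true);
            t.start();
        }
    }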

Related

How can I find which application is causing memory leaks

I am running Tomcat 6.0.32 on RHEL 5.4 with JDK 1.6.0_23. I am running more than 15 applications, all of them small. The server has 8 GB of RAM and 12 GB of swap, and I set the heap size to 512 MB minimum and 4 GB maximum.
The issue is that after a few hours or days of running, Tomcat stops providing service even though the process is still up and running. When I look at the catalina.out log file, it shows a memory leak problem.
My concern now is that I need to provide a solution to the issue, or at least identify the application that is causing the memory leaks.
Could anyone explain how I can discover which application is causing the memory leak?
One option is to take heap dumps (see How to get a thread and heap dump of a Java process on Windows that's not running in a console) and analyze the heap dump later on.
Another option is to analyze the process directly using tools like jmap, VisualVM and similar.
You may use the combination of the jmap/jhat tools (both are unsupported as of Java 8) to gather a heap dump (using jmap) and identify the top objects in the heap (using jhat). Try to correlate these objects with the applications and identify the rogue one.
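For reference, the usual command-line sequence is jmap -dump:format=b,file=heap.hprof <pid> followed by jhat heap.hprof (then browse http://localhost:7000). If attaching jmap from outside is awkward, a dump can also be triggered from inside the JVM on HotSpot via com.sun.management.HotSpotDiagnosticMXBean. This is only a rough sketch and relies on a HotSpot-specific, unsupported API:

    import java.lang.management.ManagementFactory;

    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {

        // Writes a binary .hprof file that jhat or MAT can open.
        public static void dump(String fileName) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(fileName, true); // true = dump only live (reachable) objects
        }

        public static void main(String[] args) throws Exception {
            dump("tomcat-heap.hprof");
        }
    }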

Analyze/track down potential native memory leak in JVM

We're running an application on Linux using Java 1.6 (OpenJDK as well as Oracle JDK). The JVM itself has a maximum of 3.5 GB heap and 512 MB permgen space. However, after running for a while, top reports that the process is using about 8 GB of virtual memory, and smem -s swap p reports about 3.5 GB being swapped.
After running a bigger import of thousands of image files on one server, almost no swap space is left and calls to native applications (in our case Im4java calls to ImageMagick) fail because the OS cannot allocate memory for them.
In another case the swap space filled up over the course of several weeks, resulting in the OS killing the JVM because it was out of swap space.
I understand that the JVM will need more than 4 GB of memory for heap (max 3.5 GB), permgen (max 512 MB), code cache, loaded libraries, JNI frames etc.
The problem I'm having is finding out what is actually using how much of the memory. If the JVM ran out of heap memory, I'd get a dump that I could analyze, but in our case it's the OS memory that is eaten up, so the JVM doesn't generate a dump.
I know there's jrcmd for JRockit, but unfortunately we can't just switch the JVM.
There also seem to be a couple of libraries that allow tracking native memory usage, but most of them appear to require the native code to be recompiled - and besides Im4java (which AFAIK just runs a native process; we don't use DLL/SO integration here) and the JVM, there's no other native code involved that we know of.
Besides that, we can't use a library/tool that might have a huge impact on performance or stability in order to track memory usage on a production system over a long period (several weeks).
So the question is:
How can we find out what the JVM is actually using all that memory for, ideally with some detailed information?
You may find references to "zlib/gzip" (PDF handling or HTTP encoding, since Java 7), "java2d" or "jai" when replacing the memory allocator (jemalloc or tcmalloc) in the JVM.
But to really diagnose a native memory leak, JIT code symbol mapping and recent Linux profiling tools are required: perf, perf-map-agent and bcc.
Please refer to the details in the related answer: https://stackoverflow.com/a/52767721/737790
Many thanks to Brendan Gregg
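A lower-tech first step that is safe to leave running on a production system for weeks is to log the process RSS next to what the JVM itself accounts for; if that gap keeps widening while the heap stays flat, the growth is native. This is only a sketch and is Linux-specific (it parses /proc/self/status):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.lang.management.ManagementFactory;

    // Run inside the JVM being investigated, e.g. as a daemon thread started at application startup.
    public class RssLogger implements Runnable {

        public void run() {
            try {
                while (true) {
                    long heapMb = ManagementFactory.getMemoryMXBean()
                            .getHeapMemoryUsage().getCommitted() >> 20;
                    System.out.println("VmRSS = " + (readVmRssKb() >> 10)
                            + " MB, heap committed = " + heapMb + " MB");
                    Thread.sleep(10 * 60 * 1000L); // log every 10 minutes
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        // Reads this process's resident set size from /proc/self/status ("VmRSS:   123456 kB").
        private static long readVmRssKb() throws Exception {
            BufferedReader reader = new BufferedReader(new FileReader("/proc/self/status"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (line.startsWith("VmRSS:")) {
                        return Long.parseLong(line.replaceAll("[^0-9]", ""));
                    }
                }
                return -1;
            } finally {
                reader.close();
            }
        }
    }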

java.lang.OutOfMemoryError: PermGen space error, possible memory leak with Tomcat or PHP-Java Bridge?

OS: Windows Server 2008 R2 SP1
Web Server: IIS 7.5
JSP/Servlet Engine: Tomcat 5.5.28 (32-bit)
PHP: 5.4.14
Java: JRE SE 1.6.0_20 (32-bit)
Apache ISAPI Connector hooks into Tomcat from IIS
PHP-Java Bridge 6.2.1
BMC AR System 7.5 Patch 6
Tomcat Initial and Max Memory: 1024 MB, 1024 MB
I am using a Java web application called AR System. After installing the PHP-Java Bridge, I started seeing java.lang.OutOfMemoryError: PermGen space errors in the Tomcat logs. (I see in Windows Task Manager that there are 6 PHP-CGI.exe processes, all similar in memory footprint, give or take 5 MB.) It would occur every other day or so, and then the interval shortened to every day, sometimes twice a day. Consequently, the application hangs and I have to restart it, so I added a Windows task to restart Tomcat during non-peak hours to give me some cushion. I suspected a memory leak and started doing some research. Normally, Tomcat sits at around 300-350 MB. With the PHP-Java Bridge, memory jumped up significantly; in fact, the error has occurred anywhere from 450-600 MB.
I learned that the default PermGen size is 64 MB and that PermGen should be set to between 1/4 and 1/3 of the Tomcat memory (sorry, I don't recall the link). Tomcat is running under Windows Services at this point, and I added the following to its properties:
-XX:+UseConcMarkSweepGC
-XX:+CMSPermGenSweepingEnabled
-XX:+CMSClassUnloadingEnabled
-XX:PermSize=128M
-XX:MaxPermSize=256M
These flags enable GC of PermGen memory and increase its size from the default 64 MB to 128-256 MB. Memory went up all the way to 800-850 MB, slowly, but it wasn't hanging during peak hours, although I still had Tomcat restart intentionally during non-peak hours via a Windows task. If I take off the restart, it MIGHT eventually hang, but I haven't tried it.
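To see whether PermGen actually plateaus or keeps creeping up between the scheduled restarts, its occupancy can also be logged from inside the webapp with the standard MemoryPoolMXBean API. This is only a rough sketch; the exact pool name depends on the collector (for example "CMS Perm Gen" when the CMS flags above are active), so it matches on the name loosely:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class PermGenWatcher {

        // Call periodically (e.g. from an existing timer) and watch the trend over a day.
        public static void logPermGen() {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Perm Gen")) {
                    MemoryUsage usage = pool.getUsage();
                    System.out.println(pool.getName() + ": used = " + (usage.getUsed() >> 20)
                            + " MB of max " + (usage.getMax() >> 20) + " MB");
                }
            }
        }
    }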
I still suspected a memory leak. I installed a trial version of AppDynamics to monitor the application, its memory, and run leak detection. Additionally, to use tools like VisualVM and Memory Analyzer (MAT), I disabled the Tomcat Windows service and ran Tomcat from the Windows command line via catalina.bat. I appended the Java options to the file; I made sure Tomcat memory was 1024 MB and PermGen was 128/256 MB, and ensured the PHP-Java Bridge and AppDynamics were running. As of right now, PermGen is holding at 163 MB used, and AppDynamics' Automatic Leak Detection did not detect any leaks in any Java collections.
I fired up MAT, created a heap dump and analyzed it for leaks. When I ran it yesterday, it found three possible suspects:
net.sf.ehcache.Cache
net.sf.ehcache.store.DiskStore
org.apache.catalina.loader.WebappClassLoader
When I ran it today, it found two possible suspects:
java.util.HashMap
org.apache.jasper.servlet.JasperLoader
So, with MAT and AppDynamics, it appears that no memory leaks were detected for classes directly related to the PHP-Java Bridge JAR files. I haven't tried Plumbr, but I can't find the free beta version; the free version detects leaks, but you have to pay to see them.
Again, I don't have a source link at this time, but I recall reading that Tomcat 5.x can have performance and memory leak issues. Of course, that doesn't mean everybody will have those issues, just a select number. I know Tomcat 6 and Tomcat 7 redesigned their memory management, or how they structure memory. I also spoke with someone from BMC, the maker of AR System, and they said the version of AR System I'm using could suffer from performance and memory issues. But, again, none of this was a problem before the PHP-Java Bridge; it was only after I installed it that this PermGen memory issue started.
Since the tools above did not report any leaks, does that mean there are no leaks and the PHP-Java Bridge just needed more than 64 MB of PermGen memory? Or is there an inherent problem with my version of Tomcat, and installing the PHP-Java Bridge just broke the proverbial camel's back?
Upgrading to a newer version of AR System and Tomcat is not an option. If there is a leak, I can uninstall the PHP-Java Bridge or continue trying to find a leak and fix it.
Any help would be appreciated.
Thank you.
Update 1
With MAT, I looked at the thread overview and stacks, and you can see below that the PHP-Java Bridge contributes about 2/3 of the total heap memory of Tomcat. That's a lot of memory! I think there is a leak, I do. I can't find any information on the PHP-Java Bridge having inherent memory leak issues. But, to me, it appears that the problem is not that Tomcat is leaking. Ideas?
AppDynamics couldn't find any leaks, even when I manually added the classes that were suspected in MAT. What I'm wondering is whether the PermGen error is a symptom of the case where the program has no leak and simply needs more PermGen memory allotted. It would be helpful to know if the PHP-Java Bridge is designed to use a lot of memory, this much memory; maybe it's optimized for 64-bit, since the current setup is a 32-bit Java web application. If I knew that this bridge needs a lot of memory, I would say OK, fine, and go from there. But it certainly appears as if there is a memory leak somewhere in the chain.
Update 2
I've been running Plumbr for 2 hours and almost 10 minutes now. I see that Tomcat memory is shooting up to 960 MB and will probably continue to climb. For those familiar with the program, the Java web application has been analyzed 3 times. So far, no leaks have been reported. If it stays this way, then the two conclusions I've arrived at are a) there are no leaks, or b) there is a leak and, somehow, both AppDynamics and Plumbr missed it. If there are truly no leaks with this set of applications working together, then it must be that the Bridge uses a lot of memory and needs more PermGen memory than Tomcat's default of 64 MB, at the very least for 32-bit Java web applications.

Java performance tuning, JNI memory leak

I have a Java application running on Linux, using Java 6. It is standard SDK Java plus some JNI.
We use VisualVM to monitor for the memory leak. From VisualVM we can see that the application does not consume heap continuously, but the memory of the whole process keeps increasing until Linux kills the process.
So we suspect the JNI part, since a memory leak in JNI code cannot be seen by VisualVM. Could someone drop some hints on how to check for JNI memory leaks when doing Java performance testing?
Oracle has some documentation on how you can create your own leak tracker in such a case. The dbx command is mentioned as one alternative available on Linux.
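Another cheap trick that works during a performance test is to instrument your own Java/JNI boundary and count what is still outstanding; if the counter keeps growing during the test, you immediately know which native call is involved. This is only a rough sketch, and nativeAlloc/nativeFree are placeholders for your real JNI entry points:

    import java.util.concurrent.atomic.AtomicLong;

    public class NativeImageCodec {

        // Placeholder JNI methods; substitute your real native entry points
        // and load your real native library in a static initializer.
        private native long nativeAlloc(int size);
        private native void nativeFree(long handle);

        private static final AtomicLong OUTSTANDING = new AtomicLong();

        public long allocate(int size) {
            long handle = nativeAlloc(size);
            OUTSTANDING.incrementAndGet();
            return handle;
        }

        public void release(long handle) {
            nativeFree(handle);
            OUTSTANDING.decrementAndGet();
        }

        // Log this periodically during the test; if it keeps growing, the leak is in
        // how the Java side allocates and releases native resources.
        public static long outstandingAllocations() {
            return OUTSTANDING.get();
        }
    }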

Analyze Tomcat Heap in detail on a production System

Having analyzed a light-load web application running in Tomcat using the JMX console, it turns out the "PS Old Gen" is growing slowly but constantly. It starts at 200 MB and grows by around 80 MB per hour.
CPU is not an issue; it runs at 0-1% on average. But somewhere it leaks memory, so the application will become unstable some days after deployment.
How do I find out which objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes the output from Java's heap profiler, HPROF, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching Tomcat profiled directly from the IDE (for localhost), or remote profiling using a provided JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the additional "consumed" space disappears when a GC of the old generation happens.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC, and afterwards the used memory should go down a lot. Only if it does not go down after a GC should you be interested in what is still holding onto those objects.
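One way to verify this without attaching a profiler is to watch the old generation's occupancy measured right after collections: if that post-GC floor keeps rising from one collection to the next, objects really are being retained. A sketch using the standard MemoryPoolMXBean API; the pool is called "PS Old Gen" with the parallel collector, and other collectors use different names:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    // Run inside the Tomcat JVM, e.g. from a background thread in the webapp.
    public class OldGenFloor {

        public static void logOldGenAfterGc() {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Old Gen")) {
                    // getCollectionUsage() is the usage measured right after the last GC,
                    // i.e. the floor occupied by live objects.
                    MemoryUsage afterGc = pool.getCollectionUsage();
                    if (afterGc != null) {
                        System.out.println(pool.getName() + " after last GC: "
                                + (afterGc.getUsed() >> 20) + " MB");
                    }
                }
            }
        }
    }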
JRockit Mission Control can analyze memory leaks while connected to the JVM. There is no need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it for free.
