Memory leak running Apache Thrift server - java

I'm running a Java server using Apache Thrift. While profiling it I found that memory (Old Gen) keeps growing, as shown by this graph:
The sharp drop at the end of the graph is because I clicked "Perform GC".
I understand there's a memory leak here. So I ran a leak detector (MAT) and it reported as follows:
One instance of "com.sun.jmx.remote.internal.ArrayNotificationBuffer"
loaded by "" occupies 7,844,208 (77.22%) bytes.
I never use this class myself, so I assume Apache Thrift uses it internally. I also found that this ArrayNotificationBuffer memory leak is actually an old, known, fixed JDK bug.
So I have some questions about this:
Why is there such a drop in allocated memory when I click "Perform GC"? Isn't the GC that runs automatically the same one? Why doesn't it collect this memory then?
I use OpenJDK (7u55-2.4.7-1ubuntu1~0.12.04.2) and as far as I can see all bugs relating to ArrayNotificationBuffer are quite old and fixed, so why is this happening, and how can I fix it?

The fact that the allocation was cleared when you forced a GC just means it was a legitimate chunk of memory that would eventually have been released anyway. If your heap is large and other allocation requests do not fail, an old-gen collection can be deferred for quite a while.
As for the buffer, I would speculate that a JMX notification listener was registered but is not handling emitted notifications in a timely manner, but it's hard to say.
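If that is the case, one thing to check is how remote JMX listeners are registered and cleaned up. Below is a minimal sketch of a remote client that registers a notification listener, handles notifications quickly, and removes the listener and closes the connector when done; the service URL and MBean name are placeholders, not taken from your setup:
import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxListenerSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL - adjust host/port to your server
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("java.lang:type=Memory"); // placeholder MBean
            NotificationListener listener = new NotificationListener() {
                public void handleNotification(Notification n, Object handback) {
                    // Do as little work as possible here so the server-side
                    // notification buffer is drained promptly
                    System.out.println(n.getType());
                }
            };
            mbsc.addNotificationListener(name, listener, null, null);
            Thread.sleep(60_000); // ... do real work ...
            // Remove the listener once it is no longer needed
            mbsc.removeNotificationListener(name, listener);
        } finally {
            connector.close(); // closing the connector releases server-side buffers
        }
    }
}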

Related

Understanding Groovy/Grails classloader leak

Yesterday I deployed my first Grails (2.3.6) app to a dev server and began monitoring it. I just got an automated monitor stating that CPU was pinned on this machine, and so I SSHed into it. I ran top and discovered that it was my Java app's PID that was pinning the server. I also noticed memory was at 40%. After a few seconds, the CPU stopped pinning, went down to a normal level, and memory went back down into the ~20% range. Classic major GC.
While it was collecting, I did a heap dump. After the GC, I then opened the dump in JVisualVM and saw that most of the memory was being allocated for an org.codehaus.groovy.runtime.metaclass.MetaMethodIndex.Entry class. There were almost 250,000 instances of these in total, eating up about 25 MB of memory.
I googled this class and took a look at its ultra-helpful Javadocs. So I still have no idea what this class does.
But googling it also brought up about a dozen or so related articles (some of them SO questions) involving this class and a PermGen/classloader leak with Grails/Groovy apps. And while it seems that my app did in fact clean up these 250K instances with a GC, it is still troubling that there were so many instances of it, and that the GC pinned the CPU for over 5 minutes.
My questions:
What is this class and what is Groovy doing with it?
Can someone explain this answer to me? Why would -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled help this particular problem?
Why is this class particularly troublesome for the PermGen?
Groovy is a dynamic language; every method call is dispatched dynamically. To optimise that, Groovy creates a MetaClass for every java.lang.Class in the MetaClassRegistry. These MetaClass instances are created on demand and stored using weak references.
The reason you see a lot of org.codehaus.groovy.runtime.metaclass.MetaMethodIndex.Entry instances is that Groovy stores a map of classes and methods in memory so that they can be quickly dispatched by the runtime. Depending on the size of the application, this can amount to, as you have discovered, thousands of classes, and each class can have dozens or sometimes hundreds of methods.
However, there is no "memory leak" in Groovy or Grails; what you are seeing is normal behaviour. Your application is running low on memory, probably because it hasn't been allocated enough, and this in turn causes MetaClass instances to be garbage collected. Now say, for example, you have a loop:
for (str in strings) {
    println str.toUpperCase()
}
In this case we are calling a method on the String class. If you are running low on memory what will happen is that for each iteration of the loop the MetaClass will be garbage collected and then recreated again for the next iteration. This can dramatically slow down an application and lead to the CPU being pinned as you have seen. This state is commonly referred to as "metaclass churn" and is a sign your application is running low on heap memory.
If Groovy were not garbage collecting these MetaClass instances then yes, that would mean there is a memory leak in Groovy; the fact that it is collecting them is a sign that all is well, except that you have not allocated enough heap memory in the first place. That is not to say there isn't a memory leak in another part of the application that is eating up all the available memory and leaving not enough for Groovy to operate correctly.
As for the other answer you refer to, adding class unloading and PermGen tweaks won't actually do anything to resolve your memory issues unless you are dynamically parsing classes at runtime. PermGen space is used by the JVM to store dynamically created classes. Groovy allows you to compile classes at runtime using GroovyClassLoader.parseClass or GroovyShell.evaluate. If you are continuously parsing classes then yes, adding class unloading flags can help. See also this post:
Locating code that is filling PermGen with dead Groovy code
However, a typical Grails application does not dynamically compile classes at runtime and hence tweaking PermGen and class unloading settings won't actually achieve anything.
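For illustration, here is a rough sketch (called from Java, with a hypothetical script string) of the kind of pattern that does generate classes at runtime and can fill PermGen if class unloading is disabled; a normal Grails request cycle does not do this:
import groovy.lang.GroovyShell;

public class DynamicScriptSketch {
    public static void main(String[] args) {
        GroovyShell shell = new GroovyShell();
        for (int i = 0; i < 100_000; i++) {
            // Each evaluate() call typically compiles a fresh script class,
            // and those classes live in PermGen on Java 7 and earlier.
            Object result = shell.evaluate("1 + " + i);
            if (i % 10_000 == 0) {
                System.out.println("iteration " + i + " -> " + result);
            }
        }
    }
}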
You should verify whether you have allocated enough heap memory using the -Xmx flag and, if not, allocate more.

Java is not able to collect garbage in time

I have a problem where the JVM is not able to perform GC in time and the application freezes. The "solution" is to connect to the application using JConsole and suggest that the JVM run a garbage collection. Needless to say, this is very poor behaviour for an application. Are there options for the JVM that suggest it perform GC sooner or more often? Or is there some other real solution to this problem?
The problem does not appear to be a lack of memory, but that the GC is not able to collect in time before new data is sent to the application, because the GC appears to start collecting too late. If a collection is triggered early enough via the System.gc() button in JConsole, the problem does not occur.
Young generation is collected by 'PS Scavenge' which is parallel collector.
Old generation is collected by 'PS MarkSweep' which is parallel mark and sweep collector.
You should check for memory leaks.
I'm pretty sure you won't get an OutOfMemoryError unless there's no memory to be released and no more available memory.
There is System.gc() that does exactly what you described: It suggests to the JVM that a garbage collection should take place. (There are also command-line arguments for the JVM that can serve as directives for the memory manager.)
However, if you're running out of memory during an allocation, it typically means that the JVM did attempt a garbage collection first and it failed to release the necessary memory. In that case, you probably have memory leaks (in the sense of keeping unnecessary references) and you should get a memory profiler to check that. This is important because if you have memory leaks, then more frequent garbage collections will not solve your problem - except that maybe they will postpone its manifestation, giving you a false sense of security.
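As a hypothetical illustration of "keeping unnecessary references", a cache like the one below grows without bound because entries are added but never removed, so the GC can never reclaim them no matter how often it runs:
import java.util.ArrayList;
import java.util.List;

public class RequestLog {
    // A static collection holds a reference to every payload forever,
    // so these objects are never eligible for garbage collection.
    private static final List<byte[]> HISTORY = new ArrayList<>();

    public static void handleRequest(byte[] payload) {
        HISTORY.add(payload);   // added...
        // ...but never removed: a classic unintentional-reference "leak"
    }
}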
From the Java specification:
OutOfMemoryError: The Java Virtual Machine implementation has run out of either virtual or physical memory, and the automatic storage manager was unable to reclaim enough memory to satisfy an object creation request.
You can deploy JavaMelody on your server and add your application to it; it will give you a detailed report of your memory leaks and memory usage. With this you will be able to optimize your system and code correctly.
My guess is that either your application requires more memory to run efficiently, in which case try tuning your JVM by setting parameters like -Xms512M -Xmx1024M,
or
there is a memory leak which is exhausting the memory.
You should check the memory consumption pattern of your application, e.g. what memory it occupies when it is processing heavily vs. when it remains idle.
If you observe a constant surge in memory peaks, it could point towards a possible memory leak.
One of the best threads on memory leak issues is How to find a Java Memory Leak
Another good one is http://www.ibm.com/developerworks/library/j-leaks/
Additionally,
you may receive an OOME if you're loading a lot of classes (let's say, all classes present in your rt.jar). Since loaded classes reside in PermGen rather than heap memory, you may also want to increase your PermGen size using the -XX:MaxPermSize switch.
And, of course, you're free to choose a garbage collector – ParallelGC, ConcMarkSweepGC (CMS) or G1GC (G1).
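For reference, these collectors are selected with the standard HotSpot flags (pick only one per JVM):
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseG1GC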
Please be aware that there are APIs in Java that may cause memory leaks by themselves (without any programmer error), e.g. java.lang.String#substring() (see here).
If your application freezes, but gets unfrozen by a forced GC, then your problem is very probably not the memory, but some other resource leak, which is alleviated by running finalizers on dead objects. Properly written code must never rely on finalizers to do the cleanup, so try to find any unclosed resources in your application.
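For example, instead of relying on a finalizer to close a stream when the object is eventually collected, close resources deterministically. This sketch uses try-with-resources (Java 7+) with a hypothetical file name:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadFirstLine {
    public static String firstLine(String path) throws IOException {
        // try-with-resources closes the reader when the block exits,
        // even on exceptions, instead of waiting for GC/finalization
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("data.txt")); // hypothetical file
    }
}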
You can start the JVM with more memory:
java -Xms512M -Xmx1024M
will start the JVM with 512 MB of heap, allowing it to grow to a gigabyte.
You can use System.gc() to suggest to the VM to run the garbage collector. There is no guarantee that it will run immediately.
I doubt if that will help, but it might work. Another thing you could look at is increasing the maximum memory size of the JVM. You can do this by giving the command line argument -Xmx512m. This would give 512 megabytes of heap size instead of the default 128.
You can use JConsole to view the memory usage of your application. This can help to see how the memory usage develops which is useful in detecting memory leaks.

Java - Allocated space not reduced

I'm developing a Java application which sometimes does some heavy work.
When that is the case, it uses more RAM than usual, so the allocated memory space of the app is increased.
My question is: why is the allocated space not reduced once the work is finished?
Using a profiler, I can see that, for example, 70 MB is allocated but only 5 MB is used!
It looks like the allocated space can only grow, and not shrink.
Thanks
Usually the JVM is very restrictive when it comes to freeing memory it has allocated. You can configure it to free memory more aggressively, though. Try passing these settings to the JVM when you start your program:
-XX:GCTimeRatio=5
-XX:AdaptiveSizeDecrementScaleFactor=1
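For example, passing them on the command line when launching (the jar name is hypothetical):
java -XX:GCTimeRatio=5 -XX:AdaptiveSizeDecrementScaleFactor=1 -jar yourapp.jar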
The JVM decides when to release memory back to the operating system. In my experience with Windows XP, this almost never happens. Occasionally I've seen memory released back when the Command Prompt (or Swing window) is minimized. I believe that the JVM on Linux is better at returning memory.
Generally there can be 2 reasons.
Perhaps your program has a memory management problem. If, for example, you store objects in a collection and never remove them, they will never be garbage collected. If this is the case, you have a bug that should be found and fixed.
Or perhaps your code is OK but the GC still does not remove objects that are no longer used. The reason is that the GC lives its own life and makes its own decisions; if, for example, it thinks it has enough memory, it does not remove unused objects until memory usage reaches some threshold.
To recognize which case you have here, try calling System.gc(), either programmatically or using the profiler (profilers usually have a button that runs the GC). If the unused objects are removed after forcing a GC, your code is OK. Otherwise try to locate the bug; the profiler you are already using should help you.
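A minimal sketch of checking this programmatically (keeping in mind that System.gc() is only a hint to the JVM, not a guarantee):
public class GcCheck {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        long usedBefore = rt.totalMemory() - rt.freeMemory();

        System.gc();              // request a collection (the JVM may ignore it)
        Thread.sleep(1000);       // give the collector a moment to finish

        long usedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("Used before GC: " + usedBefore / (1024 * 1024) + " MB");
        System.out.println("Used after GC:  " + usedAfter / (1024 * 1024) + " MB");
        // If usage drops substantially, the objects were collectable;
        // if not, something is still holding references to them.
    }
}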

Java memory usage stays well within max heap size, but my system memory is slowly being eaten

I'm relatively new to Java programming so please bear with me trying to understand what's going on here.
The application I've developed uses a max heap size of 256MB. With GC running as it should, I never run into any problems with this. The used heap builds up when a big image is loaded and gets freed nicely when it is unloaded. Out-of-memory errors are something I've yet to see.
However... after running the application for about an hour, I notice that the process uses more and more system memory that never gets freed. The application starts with around 160 MB used, which builds up as the heap grows, but when the heap shrinks the system memory used just keeps increasing, up until the process uses 2.5 GB and my system starts to become slow.
Now I'm trying to understand the surviving generations bit. It seems the heap size and surviving generations aren't really connected to each other? My application builds up a lot of surviving generations, but according to its own heap usage it never runs out of memory. Yet the JVM keeps eating system memory, never giving it back.
I've been searching around the web, sometimes finding information that is somewhat useful, but what I don't get is that the application stays well within the heap size boundaries and still my system memory is being eaten up.
What is going on here?
I'm using NetBeans IDE on OSX Lion with the latest 1.6 JDK available.
The best way to start would be jvisualvm from the JDK on the same machine. Attach to your running program and enable profiling.
Another option is to try running the application in debug mode and stop it once in a while to inspect your data structures. This sounds like a broken/weird practice but usually if you have a memory leak it becomes very obvious where it is.
Good luck!

Analyze Tomcat Heap in detail on a production System

Having analyzed a light-load web application running in Tomcat using the JMX console, it turns out the "PS Old Gen" is growing slowly but constantly. It starts with 200 MB and grows by around 80 MB/hour.
CPU is not an issue, it runs at 0-1% on average, but somewhere it leaks memory, so it will become unstable some days after deployment.
How do I find out what objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
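For example (where <pid> is a placeholder for your Tomcat process id):
jmap -dump:live,format=b,file=heap.hprof <pid>
jhat heap.hprof
jhat then serves the dump over HTTP (port 7000 by default) so you can browse it.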
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes output from Java's heap profile, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching Tomcat profiled directly from the IDE (for localhost), or remote profiling using a provided JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the additional "consumed space" disappears when a GC of the old generation happens.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC, and afterwards the used memory should go down a lot. Only if it does not go down after a GC should you be interested in what is still holding onto those objects.
JRockit Mission Control can analyze memory leaks while connected to the JVM, with no need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and then you can explore references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it here for free
