I have recently discovered the following facts:
The first time I navigate through the pages of my application, the perm gen grows significantly. (That's normal.)
My perm gen grows when I navigate through pages of my application that I have already visited.
But this only happens if I stop using my application for a few minutes. If I don't, the perm gen stays the same while I keep navigating.
The increase is not large, but I don't think it is normal behaviour.
I also noticed that the perm gen never goes down, or only goes down a little.
What can be the cause of that?
I am using Java6, Tomcat6, Hibernate, Spring 3.3.4 and JSF 1.2.
PermGen errors and leaks are tricky to investigate.
To deal with them, the first thing you can do is check your statics. Your application may manage some statics that hold references to objects that should have been garbage collected but are still marked as reachable because they are referenced by these statics.
For example, you are using Spring, and Spring beans are singletons by default, so they can be a source of PermGen problems.
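As a minimal, hypothetical sketch of the kind of static holder to look for (the class and field names are made up for illustration):

// Hypothetical example of a leaky static: entries put into this map stay
// reachable for the lifetime of the class, so they are never garbage
// collected, even after the request or session that created them is gone.
public class ReportCache {
    private static final java.util.Map<String, Object> CACHE =
            new java.util.HashMap<String, Object>();

    public static void remember(String key, Object value) {
        CACHE.put(key, value); // grows forever unless something removes entries
    }
}

A singleton Spring bean with a field like this behaves the same way: the bean lives as long as the application context, so everything it references does too. And if such a static lives in a shared library while the values come from your webapp, the webapp's classloader, and all its classes in PermGen, are pinned as well.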
To know more about the objects in memory you can use some profiling tools, such as VisualVM. It can help you investigate how many instances of a given class Tomcat is holding, and it can give you a clue about where the leak might be.
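If you prefer the command line, the JDK's jmap tool prints a similar class histogram (replace <pid> with Tomcat's process id):

jmap -histo <pid>

The classes with suspiciously high instance counts near the top of the listing are the ones worth chasing further in the profiler.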
To learn more about PermGen errors and issues, you can look at "The java.lang.OutOfMemoryError: PermGen Space error demystified".
I am using JVM Explorer to profile my Spring application. I have the following questions about it.
Why does 'Used Heap Memory' keep increasing even after the application has started up and has not received any requests yet? (Image 1)
Why, even after garbage collection and before receiving any requests, does 'Used Heap Memory' keep increasing? (Image 2)
Why, after garbage collection, does sending some requests to the application increase the number of loaded classes? Isn't the application supposed to reuse previously loaded classes? Why is almost everything (heap, number of loaded classes) increasing? (Image 3)
After the application starts up (Image 1)
After clicking the 'Run Garbage Collector' button (Image 2)
After sending some requests to the application following garbage collection (Image 3)
Why does 'Used Heap Memory' keep increasing even after the application has started up and has not received any requests yet? (Image 1)
Something in your JVM is creating objects. You would need a memory profiler to see what is doing this. It could be part of Swing, your application, or another library.
BTW, most profiling tools use JMX, which produces a lot of garbage. E.g. when I run Flight Recorder or VisualVM on some of my applications, it shows that the JMX monitoring is creating most of the garbage.
Why, even after garbage collection and before receiving any requests, does 'Used Heap Memory' keep increasing? (Image 2)
Whatever was creating objects is still creating objects.
Why, after garbage collection, does sending some requests to the application increase the number of loaded classes?
Classes are lazily loaded. Some classes are not needed until you do something.
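A tiny self-contained demonstration (illustrative class names): the nested class below is not loaded at start-up, only when the code first touches it.

public class LazyLoadingDemo {
    static class RarelyUsed {
        static { System.out.println("RarelyUsed loaded"); } // runs on first use only
    }

    public static void main(String[] args) {
        System.out.println("started; RarelyUsed not loaded yet");
        new RarelyUsed(); // the class is loaded and initialized here, not at start-up
    }
}

Running any application with -verbose:class shows the same effect: classes keep appearing in the log as new code paths are exercised.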
Isn't the application supposed to reuse previously loaded classes?
Yes, but this doesn't mean it won't need more classes.
Why is almost everything (heap, number of loaded classes) increasing? (Image 3)
Your application is doing more work.
If you want to know what work the application is doing, I suggest using a memory profiler like VisualVM or Flight Recorder. I use YourKit for this sort of question.
Note: it takes hard work to tune a Java program so that it doesn't produce garbage, and I would say most libraries only try to reduce garbage if it causes a known performance problem.
I like @PeterLawrey's good answer; however, one thing is missing there:
Memory is primarily meant to be used, not spared. It may easily be the case that your application is simply well written: it can work with little memory and re-create whatever it needs, but it can also exploit the fact that your system has plenty of memory, using all it can get to work much more efficiently.
I can easily imagine that the thing which keeps taking up the memory is, for instance, a cache. If the cache contains a lot of data, the application works faster.
If you do not have issues like OutOfMemoryError, you do not necessarily have to worry. You should still be vigilant and inspect it further, but the situation you describe does not automatically mean that something is wrong.
It is analogous to the constant lamentation of Windows users that "I have bought more memory but my Windows uses it all up" - it is good when the memory is being used! That's what we buy it for!
Yesterday I deployed my first Grails (2.3.6) app to a dev server and began monitoring it. I just got an automated monitor stating that CPU was pinned on this machine, and so I SSHed into it. I ran top and discovered that it was my Java app's PID that was pinning the server. I also noticed memory was at 40%. After a few seconds, the CPU stopped pinning, went down to a normal level, and memory went back down into the ~20% range. Classic major GC.
While it was collecting, I did a heap dump. After the GC, I then opened the dump in JVisualVM and saw that most of the memory was being allocated for an org.codehaus.groovy.runtime.metaclass.MetaMethodIndex.Entry class. There were almost 250,000 instances of these in total, eating up about 25 MB of memory.
I googled this class and took a look at its ultra-helpful Javadocs. So I still have no idea what this class does.
But googling it also brought up about a dozen related articles (some of them SO questions) involving this class and a PermGen/classloader leak with Grails/Groovy apps. And while it seems that my app did in fact clean up these 250K instances with a GC, it is still troubling that there were so many instances of it, and that the GC pinned the CPU for over 5 minutes.
My questions:
What is this class and what is Groovy doing with it?
Can someone explain this answer to me? Why would -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled help this particular problem?
Why is this class particularly troublesome for the PermGen?
Groovy is a dynamic language, and every method call is dispatched dynamically. To optimise that, Groovy creates a MetaClass for every java.lang.Class in the MetaClassRegistry. These MetaClass instances are created on demand and stored using weak references.
The reason you see a lot of org.codehaus.groovy.runtime.metaclass.MetaMethodIndex.Entry is that Groovy stores a map of classes and methods in memory so that they can be dispatched quickly by the runtime. Depending on the size of the application, this can cover thousands of classes, as you have discovered, and each class can have dozens, sometimes hundreds, of methods.
However, there is no "memory leak" in Groovy or Grails; what you are seeing is normal behaviour. Your application is running low on memory, probably because it hasn't been allocated enough, and this in turn causes MetaClass instances to be garbage collected. Now say, for example, you have a loop:
for (str in strings) {
    println str.toUpperCase()
}
In this case we are calling a method on the String class. If you are running low on memory, what happens is that on each iteration of the loop the MetaClass is garbage collected and then recreated for the next iteration. This can dramatically slow down an application and lead to the CPU being pinned, as you have seen. This state is commonly referred to as "metaclass churn" and is a sign that your application is running low on heap memory.
If Groovy were not garbage collecting these MetaClass instances then yes, that would mean there is a memory leak in Groovy; the fact that it is collecting them is a sign that all is well, except that you have not allocated enough heap memory in the first place. That is not to say there may not be a memory leak in another part of the application that is eating up all the available memory and leaving too little for Groovy to operate correctly.
As for the other answer you refer to, adding class unloading and PermGen tweaks won't actually do anything to resolve your memory issues unless you are dynamically parsing classes at runtime. PermGen space is used by the JVM to store classes, including dynamically created ones. Groovy allows you to compile classes at runtime using GroovyClassLoader.parseClass or GroovyShell.evaluate. If you are continuously parsing classes then yes, adding class unloading flags can help. See also this post:
Locating code that is filling PermGen with dead Groovy code
However, a typical Grails application does not dynamically compile classes at runtime and hence tweaking PermGen and class unloading settings won't actually achieve anything.
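For contrast, this is the kind of code for which those flags do matter (a sketch, assuming the Groovy jar is on the classpath; the class name is made up): each evaluate call compiles a brand-new class, and those classes land in PermGen.

import groovy.lang.GroovyShell;

public class PermGenFiller {
    public static void main(String[] args) {
        GroovyShell shell = new GroovyShell();
        for (int i = 0; i < 100000; i++) {
            // every script is compiled to a fresh class; without class
            // unloading these accumulate until OutOfMemoryError: PermGen space
            shell.evaluate("1 + " + i);
        }
    }
}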
You should verify whether you have allocated enough heap memory via the -Xmx flag and, if not, allocate more.
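For example (values illustrative, Unix shell syntax; note that heap and PermGen are sized independently):

export JAVA_OPTS="-Xmx2g -XX:MaxPermSize=256m"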
I always thought that a JVM's perm space is filled by loading classes during JVM start-up, and probably also by things like JNI at runtime. But in general it should not grow "significantly" during runtime.
Now I have noticed that when I load a lot of data (20 GB of ArrayLists) into the heap space, whose max is 32 GB, I get an 'OutOfMemoryError: PermGen space'.
Is there any correlation, or is it just coincidence?
I know how to increase the perm size; that is not the question.
With Tomcat, I have set the following to increase the PermGen space.
set "JAVA_OPTS=-XX:MaxPermSize=256m"
You may like to do something like the above.
I have set it in MB (256m); for GB, the g suffix works the same way (e.g. 1g).
Hope it helps.
The PermGen memory space is not part of the heap (this sometimes causes confusion). It is where certain kinds of objects are allocated, such as Class objects, Method objects, and the pool of interned strings. Despite what the name suggests, this memory space is also collected (during a full GC), but it often brings major headaches, such as the well-known OutOfMemoryError.
Problems with PermGen exhaustion are difficult to diagnose, precisely because it does not hold the application's objects. In most cases the problem is connected to an excessive number of classes loaded into memory. A well-known issue was the use of Eclipse with many plugins (WTP) under default JVM settings: many classes were loaded into memory, and it ended with the PermGen blowing up.
Another PermGen problem is hot deploys in application servers. For several reasons, the server cannot release the context's classes at destroy time. A new version of the application is then loaded, but the old classes remain, growing the PermGen. That's why we sometimes need to restart the whole container because of the PermGen.
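A hedged sketch of one classic way a context's classes fail to be released on undeploy (names are illustrative): a thread started by the webapp that outlives the context keeps the webapp's classloader, and every class it loaded, pinned in PermGen.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class PollerStarter implements ServletContextListener {
    private Thread poller;

    public void contextInitialized(ServletContextEvent e) {
        poller = new Thread(new Runnable() {
            public void run() {
                // the running thread references the webapp's classes,
                // so the webapp's classloader cannot be collected
                while (!Thread.currentThread().isInterrupted()) {
                    try { Thread.sleep(60000); } catch (InterruptedException ex) { return; }
                }
            }
        }, "poller");
        poller.start();
    }

    public void contextDestroyed(ServletContextEvent e) {
        // without this, the thread (and the old classloader) leaks on every redeploy
        poller.interrupt();
    }
}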
We are running a Liferay portal on Tomcat 6. Each portlet is a self-contained web application, so it includes all the libraries the portlet itself requires. We currently have 30+ portlets. The result is that the permgen of our Tomcat increases with each portlet we deploy.
We now have two paths we can follow.
Either move some of the libraries which are commonly used by our portlets to the Tomcat shared library. This would include things like spring/hibernate/cxf/... and would decrease our permgen size.
Or, easier, we could increase the permgen size.
This second option would allow us to keep every portlet as a self-contained entity.
The question now is, is there any negative performance impact from increasing the permgen size? We are currently running at 512MB.
I have found little to no information about this, but I did find some posts where people talk about running a 1024 MB permgen size without issues.
As long as you have enough memory on your server, I can't imagine anything going wrong. If you don't, well, Tomcat probably wouldn't even start, because it wouldn't be able to allocate enough memory. So if it does start up, you're good. As far as my experience goes, 1 GB of PermGen is perfectly sound.
The downside of a large PermGen is that it leaves you with less system memory to allocate for the heap (-Xmx).
On the other hand, I'd advise you to reconsider the benefits of thinking of portlets as self-contained entities. For example:
interoperability issues: if all the portlets are allowed to potentially use different versions of the same libraries, there is some risk that they won't cooperate with each other and with the portal itself as intended
performance: the PermGen footprint is just one thing, but adding jars here and there in portlets also requires additional file descriptors; I don't know about Windows, but this will hurt a Linux server's performance in the long run
freedom of change: if you're using maven to build your portlets, switching from lib/ext libs to the portlet's lib libraries is just a matter of changing the dependencies' scope (this may be more annoying with portal libs); as far as I remember, the Liferay SDK also makes it easy to do a similar switch with ant in a jiffy, by adding an additional ant task to resolve the dependencies and deleting them from the portlet's lib as required
PermGen memory can be garbage collected by full collections, so increasing it may increase the amount of GC time when a full collection takes place.
These collections shouldn't take place too often though, and would typically still take less than a second to full-GC 1 GB of permgen memory. I'm just pulling this number from (my somewhat hazy) memory, so if you are really worried about GC times, do some timing tests yourself (use -verbose:gc and read the logs; more details here).
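For such timing tests, something like the following standard HotSpot flags prints each collection with its duration to a log file:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log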
The permgen is outside the old gen, so please don't mix the two up.
Agreed on the 2nd point: we can increase the perm size as much as we like, as memory is pretty cheap, but this raises some questions about how we are managing our code. Why do we need this much perm space? Is JTA consuming that much? How many classes are we loading? How many file descriptors is the app opening (check with the lsof command)?
We should try to answer those.
I constantly detect OOM in PermGen for my environment:
java 6
jboss-4.2.3
Not a big web-application
I know about the String.intern() problem, but I don't use it heavily enough for that to be the cause.
Increasing the max PermGen size (from 128 MB to 256 MB) didn't help.
What other causes could provoke an OOM in PermGen?
What is the best way to investigate such a situation (strategy, tools, etc.)?
Thanks for any help
See this note
Put the JDBC driver in common/lib (as the Tomcat documentation says) and not in WEB-INF/lib.
Don't put commons-logging into WEB-INF/lib, since Tomcat already bootstraps it.
On every redeployment, new class objects get placed into the PermGen and occupy an ever-increasing amount of space. Regardless of how large you make the PermGen space, it will inevitably top out after enough deployments. What you need to do is take measures to flush the PermGen so that you can stabilize its size. There are two JVM flags which handle this cleaning:
-XX:+CMSPermGenSweepingEnabled
This setting includes the PermGen in a garbage collection run. By default, the PermGen space is not swept by the CMS collector (and thus grows without bounds).
-XX:+CMSClassUnloadingEnabled
This setting tells the PermGen garbage collection sweep to act on class objects. By default, class objects get an exemption, even when the PermGen space is being visited during a garbage collection.
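A sketch of how these might be combined for Tomcat (these flags apply to the CMS collector, so it is enabled here too; the size value is illustrative):

set "JAVA_OPTS=-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=256m"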
You typically get this error when redeploying an application while having a classloader leak, because it means all your classes are loaded again while the old versions stay around.
There are two solutions:
Restart the app server instead of redeploying the application - easy, but annoying
Investigate and fix the leak using a profiler. Unfortunately, classloader leaks can be very hard to pinpoint.
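To make the "investigate and fix" path concrete, here is a hedged sketch of one of the most common classloader leaks and its fix (the listener name is made up): java.sql.DriverManager keeps a static list of registered drivers, so a JDBC driver loaded from WEB-INF/lib pins the webapp's classloader across redeploys unless it is deregistered.

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class DriverCleanupListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent e) { }

    public void contextDestroyed(ServletContextEvent e) {
        // deregister only the drivers this webapp's classloader loaded,
        // so DriverManager's static list stops pinning it after undeploy
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == getClass().getClassLoader()) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException ignored) {
                    // best effort during shutdown
                }
            }
        }
    }
}

A profiler (or a heap dump opened in a tool like Eclipse MAT) will show which GC roots still reference the old classloader after a redeploy; static fields like the one above are the usual suspects.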