PermGen Out of Memory reasons - java

I constantly detect OOM in PermGen for my environment:
java 6
jboss-4.2.3
Not a big web-application
I know about the String.intern() problem, but I don't use it heavily enough for that to be the cause.
Increasing MaxPermSize (from 128 MB to 256 MB) didn't help.
What other causes could lead to an OOM in PermGen?
What is the best way to investigate such a situation (strategy, tools, etc.)?
Thanks for any help

See this note
Put JDBC driver in common/lib (as tomcat documentation says) and not in WEB-INF/lib
Don't put commons-logging into WEB-INF/lib since tomcat already bootstraps it
On each redeployment, new class objects get placed into the PermGen and thus occupy an ever-increasing amount of space. Regardless of how large you make the PermGen space, it will inevitably top out after enough deployments. What you need to do is take measures to flush the PermGen so that you can stabilize its size. There are two JVM flags which handle this cleaning:
-XX:+CMSPermGenSweepingEnabled
This setting includes the PermGen in a garbage collection run. By default, the PermGen space is never included in garbage collection (and thus grows without bounds).
-XX:+CMSClassUnloadingEnabled
This setting tells the PermGen garbage collection sweep to take action on class objects. By default, class objects get an exemption, even when the PermGen space is being visited during a garbage collection.
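For example, on a JBoss 4.x or Tomcat installation these flags would typically be added to the startup options (e.g. run.conf or setenv.sh) alongside the PermGen sizing; the values below are only illustrative and assume the CMS collector is in use:

    JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled"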

You typically get this error when redeploying an application while having a classloader leak, because it means all your classes are loaded again while the old versions stay around.
There are two solutions:
Restart the app server instead of redeploying the application - easy, but annoying
Investigate and fix the leak using a profiler. Unfortunately, classloader leaks can be very hard to pinpoint (a sketch of one common pattern follows below).
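As an illustration (not taken from the question), one common pattern is a static collection in a container-level library holding on to objects whose classes come from the webapp; all names here are made up:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: a registry living in a container-level jar (e.g. common/lib).
    public class SharedRegistry {
        private static final Map<String, Object> CACHE = new ConcurrentHashMap<String, Object>();

        public static void put(String key, Object value) {
            // If 'value' is an instance of a class loaded from the webapp's WEB-INF/lib,
            // this static map keeps that class's classloader reachable. Undeploying the
            // webapp then cannot free its classes, and every redeploy loads a fresh copy
            // of them into PermGen until it overflows.
            CACHE.put(key, value);
        }
    }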

Related

Java JAR memory usage VS class file memory usage

I recently changed my large Java application to be delivered in JARs instead of individual class files. I have 405 JARs which hold 5000 class files. My problem is that when I run my program(s) as JARs (the classpath is a wildcard to pick up all JARs), Java continually uses more and more memory. I have seen the memory go above 2 GB, and it seems like Java is not doing stop-the-world garbage collections to keep the memory lower. If I run the exact same program against the exploded JARs (class files only), Java's memory usage stays much lower (< 256 MB) and stays there. This is happening with Oracle's Java 8 on Windows 7 (x64) and Windows Server (x64). Why would packaging my application as JARs change the memory profile? Also, I have run the program for a long time as JARs with the maximum memory limited to 128 MB with no problems, so I don't have a memory leak.
[jConsole screenshot: heap usage with JAR files on the classpath]
[jConsole screenshot: heap usage with class files on the classpath]
Edit: I accepted the answer from @K Erlandsson because I think it is the best explanation and this is just an ugly quirk of Java. Thanks everyone (and especially @K Erlandsson) for your help.
The first thing to note is that the total amount of memory used on the heap is not always very interesting, since much of the used memory can be garbage that will be cleared by the next GC.
It is the amount of heap used by live objects that you need to be concerned about. You write in a comment:
I don't know if this matters, but if I use jvisualvm.exe to force a GC
(mark sweep) the heap memory usage will drop clearing almost all the
heap memory.
This matters. A lot. This means that when you see a higher heap usage when you use your jars, you see more garbage, not more memory consumed by live objects. The garbage is cleared when you do a GC and all is well.
Loading classes from jar files will consume more memory, temporarily, than loading them from class files. The jar files need to be opened, seeked, and read from. This requires more operations and more temporary data than simply opening a specific .class file and reading it.
Since most of the heap usage is cleared by a GC, this additional memory consumption is not something you need to be very concerned about.
You also write:
Java will continually use more and more memory. I have seen the memory
go > 2GB and it seems like Java is not doing stop-the-world garbage
collections to keep the memory lower.
This is typical behavior. The GC only runs when the JVM thinks it is necessary. The JVM will tune this depending on memory behavior.
Edit: Now that we see your jConsole images, we see a difference in committed heap memory (250 MB vs 680 MB). Committed heap is the actual size of the heap. This will vary (up to what you set with -Xmx) depending on what the JVM thinks will yield the best performance for your application. However, it will mostly increase and almost never decrease.
For the jar case the JVM has assigned a bigger heap to your application, probably because more memory is required during the initial class loading. The JVM then thought a bigger heap would be faster.
When you have a bigger heap, and thus more committed memory, there is more memory to use before a GC has to run. That is why you see the difference in memory usage in the two cases.
Bottom line: all the extra usage you see is garbage, not live objects, which is why you do not need to be concerned about this behavior unless you have an actual problem; the memory will be reclaimed on the next GC.
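If you want to check this without a profiler, a minimal standalone sketch along these lines compares committed heap, used heap, and the approximate live set after a forced collection (note that System.gc() is only a hint to the JVM):

    public class HeapSnapshot {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long committed = rt.totalMemory();
            long usedBefore = committed - rt.freeMemory();
            System.gc();
            long usedAfter = rt.totalMemory() - rt.freeMemory();
            System.out.println("committed heap: " + (committed / (1024 * 1024)) + " MB");
            System.out.println("used before GC: " + (usedBefore / (1024 * 1024)) + " MB");
            System.out.println("approx. live set after GC: " + (usedAfter / (1024 * 1024)) + " MB");
        }
    }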
It's quite common to load resources from the classpath. When a resource originates from a jar file, the URL object will keep a reference to the jar file entry. This can add to memory consumption. It's possible to disable this caching by disabling default URL caching.
The API for disabling default URL caching is quite awkward:
    import java.io.IOException;
    import java.net.MalformedURLException;
    import java.net.URL;
    import java.net.URLConnection;

    public static void disableUrlConnectionCaching() {
        // sun.net.www.protocol.jar.JarURLConnection leaves the JarFile instance open
        // if URLConnection caching is enabled.
        try {
            URL url = new URL("jar:file://valid_jar_url_syntax.jar!/");
            URLConnection urlConnection = url.openConnection();
            urlConnection.setDefaultUseCaches(false);
        } catch (MalformedURLException e) {
            // ignore
        } catch (IOException e) {
            // ignore
        }
    }
Disable default URL caching in the startup of your application.
Tomcat already disables URL caching by default because it also causes file locking issues and prevents updating jar files in a running application.
https://github.com/apache/tomcat/blob/5bbbcb1f8ca224efeb8e8308089817e30e4011aa/java/org/apache/catalina/core/JreMemoryLeakPreventionListener.java#L408-L423
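One way to run this early in a web application is from a ServletContextListener; this is only a sketch, and UrlCachingUtil is a hypothetical class that would hold the method shown above:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Hypothetical listener: register it in web.xml (or annotate with @WebListener on
    // Servlet 3.0+) so caching is disabled before the application loads any resources.
    public class UrlCachingDisablingListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            UrlCachingUtil.disableUrlConnectionCaching(); // hypothetical holder of the method above
        }
        public void contextDestroyed(ServletContextEvent sce) {
            // nothing to clean up
        }
    }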

Perm gen growing up

I have recently discovered the following facts:
The first time I navigate through pages of my application, the perm gen grows significantly. (That's normal.)
My perm gen also grows when I navigate through pages of my application that I have already visited.
But this only happens if I stop using my application for a few minutes. If I don't, the perm gen stays the same even though I keep navigating.
The increase is not much, but I think it is not normal behaviour.
I also noticed that the perm gen never goes down, or only goes down a little bit.
What can be the cause of that?
I am using Java6, Tomcat6, Hibernate, Spring 3.3.4 and JSF 1.2.
PermGen errors and leaks are tricky to investigate.
To deal with them, the first thing you can do is check your statics. Your application may have statics that hold references to objects that should have been garbage collected but are still marked as reachable because they are referenced by these statics (a short example follows below).
For example, you are using Spring, and in Spring beans are singletons by default, so they can be a source of PermGen problems.
To know more about the objects in memory you can use profiling tools such as VisualVM. They can help you investigate how many instances of a given class Tomcat is holding, and can give you a clue about where the leak might be.
To learn more about the PermGen and PermGen errors and issues you can look at The java.lang.OutOfMemoryError: PermGen Space error demystified
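As an example of the statics point above, a ThreadLocal left populated on a pooled container thread is a classic static-style leak that keeps classes (and their classloader) alive; all names below are illustrative:

    // Illustrative only: a ThreadLocal set on a pooled container thread and never removed.
    public class RequestContextHolder {
        private static final ThreadLocal<RequestContext> CURRENT = new ThreadLocal<RequestContext>();

        public static void set(RequestContext ctx) { CURRENT.set(ctx); }
        public static RequestContext get() { return CURRENT.get(); }

        // Unless this is called (e.g. in a servlet Filter's finally block), the value stays
        // attached to the worker thread, keeping RequestContext's class and its classloader
        // reachable even after the webapp is undeployed.
        public static void clear() { CURRENT.remove(); }
    }

    class RequestContext { /* per-request state, loaded by the webapp classloader */ }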

Permgen and Garbage collection Java, Busting permgen myths

I know that in the JVM, the permgen area is used to store class definitions. In my Tomcat I see that the current memory usage of the permgen is near 100 MB, and it seems that it just keeps growing over time, even when no one is using the applications in Tomcat.
My questions are:
Is it true that the permgen is never garbage collected, i.e. that the memory used there keeps growing and growing?
When does the permgen get garbage collected?
What does mean "CMSPermGenSweepingEnabled" and "CMSClassUnloadingEnabled"?
My max permgen size is 256 MB and I don't want to get an OutOfMemoryError next week.
Please give only accurate and documented answers.
I use Tomcat 7 and Java 7, I use the parallel deployment technique a lot, and I do undeploys and redeploys several times a week.
I never use the intern() method of String.
Actually it's not true that the permgen never gets garbage collected. It contains the classes that were loaded by the application, and gets collected when classloaders get garbage collected, typically in redeployment scenarios.
You can use these JVM flags to see when classes are loaded into and unloaded from the permgen:
-XX:+TraceClassLoading -XX:+TraceClassUnloading
To see which classes are getting loaded, use this flag:
-verbose:class
If the application is reflection intensive, that can be a cause too; have a look at this answer, and try to use VisualVM to take heap dumps and look for classes named like sun.reflect.GeneratedMethodAccessor11 (see the sketch below).
For the garbage collection flags refer to this answer, but the best bet to fix a permgen leak is to see what classes are being created and why, using some tooling/logs.
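As a quick illustration of how those accessor classes appear, repeatedly invoking a method reflectively makes HotSpot generate a sun.reflect.GeneratedMethodAccessor class once an inflation threshold (around 15 calls by default, if I recall correctly) is passed; this standalone sketch can be run with -verbose:class to watch one being loaded:

    import java.lang.reflect.Method;

    // Sketch: after enough invocations HotSpot "inflates" reflection into generated
    // bytecode accessors (sun.reflect.GeneratedMethodAccessorNN), which are classes
    // that live in PermGen.
    public class ReflectionInflation {
        public static void main(String[] args) throws Exception {
            Method m = String.class.getMethod("length");
            for (int i = 0; i < 100; i++) {
                m.invoke("hello");
            }
        }
    }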

growing permsize when loading lots of data into memory

I always thought that the permsize of a JVM is filled by loading classes during JVM startup, and probably also by things like JNI at runtime. But in general it should not grow significantly during runtime.
Now I have noticed that since I started loading a lot of data (20 GB) into the heap space, whose max is 32 GB (ArrayLists of data), I get an 'OutOfMemoryError: PermGen space'.
Is there any correlation, or is it just a coincidence?
I know how to increase the permsize; that is not the question.
With Tomcat, I have set the following to increase the PermGen space.
set "JAVA_OPTS=-XX:MaxPermSize=256m"
You may like to do something like the above.
I have set it in MB (256m); I am not sure how to set it in GB.
Hope it helps.
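For reference, HotSpot memory options also accept a g suffix, so a gigabyte-sized limit could be written like this (the value is illustrative only):
set "JAVA_OPTS=-XX:MaxPermSize=1g"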
The PermGen memory space is not part of the heap (sometimes this causes confusion). It's where certain kinds of objects are allocated, like Class objects, Method objects, and the pool of interned Strings. Unlike what the name would suggest, this memory space is also collected (during a full GC), but it often brings major headaches, such as the well-known OutOfMemoryError.
Problems with PermGen exhaustion are difficult to diagnose precisely because it does not hold the application's objects. In most cases, the problem is connected to an excessive number of classes being loaded into memory. A well-known issue was the use of Eclipse with many plugins (WTP) under default JVM settings: many classes were loaded into memory, and it ended with the PermGen blowing up.
Another PermGen problem is hot deploys in application servers. For several reasons, the server cannot release the context's classes at destroy time. A new version of the application is then loaded, but the old classes remain, increasing the PermGen. That's why we sometimes need to restart the whole container because of the PermGen.

Large Permgen size + performance impact

We are running a Liferay portal on Tomcat 6. Each portlet is a self-contained web application, so it includes all the libraries the portlet itself requires. We currently have 30+ portlets. The result is that the permgen of our Tomcat increases with each portlet we deploy.
We now have two paths we can follow.
Either move some of the libraries which are commonly used by each of our portlets to the Tomcat shared library. This would include things like Spring/Hibernate/CXF/... to decrease our permgen size.
Or easier would be to increase the permgen size.
This second option would allow us to keep every portlet as a selfcontained entity.
The question now is, is there any negative performance impact from increasing the permgen size? We are currently running at 512MB.
I have found little to no information about this, but I found some posts where people talk about running with a 1024 MB permgen size without issues.
As long as you have enough memory on your server, I can't imagine anything going wrong. If you don't, well, Tomcat probably wouldn't even start, because it wouldn't be able to allocate enough memory. So, if it does start up, you're good. As far as my experience goes, a 1 GB PermGen is perfectly sound.
The downside of a large PermGen is that it leaves you with less system memory you can then allocate for heap (Xmx).
On the other hand, I'd advise you to reconsider the benefits of thinking of portlets as self-contained entities. For example:
interoperability issues: if all the portlets are allowed to potentially use different versions of the same libraries, there is some risk that they won't cooperate with each other and with the portal itself as intended
performance: the PermGen footprint is just one thing, but adding jars here and there in portlets will require additional file descriptors; I don't know about Windows, but this will hurt a Linux server's performance in the long run
freedom of change: if you're using Maven to build your portlets, switching from lib/ext libs to the portlet's lib libraries is just a matter of changing the dependencies' scope (this may be more annoying with portal libs); as far as I remember, the Liferay SDK also makes it easy to do a similar switch with Ant in a jiffy, by adding an additional Ant task to resolve the dependencies and delete them from the portlet's lib as required
PermGen memory can be garbage collected by full collections, so increasing it may increase the amount of GC time when a full collection takes place.
These collections shouldn't take place too often though, and would typically still take less than a second to full-GC 1 GB of permgen memory. I'm just pulling this number from (my somewhat hazy) memory, so if you are really worried about GC times, do some timing tests yourself (use -verbose:gc and read the logs; more details here).
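If you want to run those timing tests, a typical set of GC logging options for a Java 6/7 HotSpot JVM looks something like this (the log path is a placeholder):

    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log"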
The permgen is outside of the old generation, so please don't mix them up.
Agreed on the 2nd point: we can increase the permsize as much as we want, as memory is pretty cheap, but this raises some questions about how we are managing our code. Why the heck do we need this much perm space? Is JTA consuming that much? How many classes are we loading? How many file descriptors is the app opening (check with the lsof command)?
We should try to answer those.
