I'm currently investigating some out-of-Metaspace issues we've been experiencing recently. One of the main culprits seems to be the loading of duplicate classes when a WAR is redeployed. Trying it out locally with just one of our WARs, and taking a heap dump after undeploying it completely, I can see that the majority of the instances created by the application are still there (even after garbage collection).
From the heap dump, it looks like it is ManagedThreadFactoryImpl that is holding onto the references.
Is there anything I can do/add to the application shutdown process so it cleans up after itself?
All our WARs are Spring applications; most use scheduling/asynchronous features.
We're using JDK 8 with WildFly 8.2.
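For reference, this is the kind of explicit cleanup I've been experimenting with (just a sketch - it assumes the scheduler is a Spring-managed bean rather than one backed by the container's ManagedThreadFactory, and the class name is made up):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

// Sketch only: make the scheduler a Spring bean so it is shut down when the
// application context closes, instead of leaving threads behind on undeploy.
@Configuration
@EnableScheduling
@EnableAsync
public class SchedulingConfig {

    @Bean(destroyMethod = "shutdown")
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(4);
        // don't wait for long-running tasks; just stop them at undeploy
        scheduler.setWaitForTasksToCompleteOnShutdown(false);
        return scheduler;
    }
}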
It seems like the classloaders are not being unloaded. Try Java Mission Control (JMC) and record the use case. This lets you jump to a specific point in time in your recording and debug the issue; it shows a snapshot of the classes loaded at that moment, along with stack traces, thread dumps and a lot of other useful details.
JMC is included in the JDK. You can find more info here: https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr002.html#BABIBBDE
You don't have to go through the pain of taking heap dumps and then waiting for a tool to analyze them.
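If you go the JMC/Flight Recorder route on JDK 8, note that you first have to unlock it with JVM flags and can then control recordings with jcmd (rough sketch from memory - double-check the options for your exact JDK build):

    -XX:+UnlockCommercialFeatures -XX:+FlightRecorder     (add to the server's JVM options)
    jcmd <pid> JFR.start name=leak settings=profile       (start a recording)
    jcmd <pid> JFR.dump name=leak filename=/tmp/leak.jfr  (dump it and open the file in JMC)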
Related
An application I have uses Java Agents that need large JAR libraries (the biggest one is PDFBox - all in all 11 MB). They ran for 3 years without any issue with the JARs in jvm/lib/ext.
During an upgrade to Domino 9.0.1 FP6 the administrator forgot to reinstall the JARs in jvm/lib/ext - with obvious repercussions. (It's quite an annoyance that IBM sometimes just replaces the whole JVM without being gentle to the JARs.)
Upon request, I changed the code to include the JARs directly in the Java Agents. Things worked well for 2-3 days, but now we're getting OutOfMemory errors.
As far as I understand it, the JARs get loaded onto the Java heap when the agents start, but garbage collection can't keep up with the continuous loading of the JARs into the heap. I couldn't find any precise documentation from IBM on this.
We've increased JavaMaxHeapSize in the notes.ini of the servers but that didn't bring the expected results.
I'm dismissing the possibility that I have forgotten a recycle() in my code, because it ran for three years beforehand with no memory leaks.
I have thought about running a separate Agent that checks total memory usage and then calls System.gc(), but I'm not convinced, since I have no guarantee that the garbage collector will actually run.
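Roughly the kind of check I had in mind for that agent (a sketch only; the 85% threshold is arbitrary and System.gc() is just a request the JVM may ignore):

// Sketch only, not from the original agent code.
public class MemoryCheck {
    public static void checkAndMaybeGc() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        if (used > 0.85 * rt.maxMemory()) {   // 85% threshold is arbitrary
            System.gc();                      // only a hint; no guarantee it runs
        }
    }
}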
Apart from the obvious move of putting the JARs back in jvm/lib/ext, is there an alternative I haven't considered?
And is there any documentation about how these classes get loaded onto the heap, and whether it's possible that the JARs are erroneously not being recognised as garbage-collectible?
It's a memory leak bug - see http://www-01.ibm.com/support/docview.wss?uid=swg1LO49880 for details.
You need to go back to placing the jar files in jvm/lib/ext.
Is it possible to get this error on a Tomcat server in any way other than redeploying WARs or editing JSP files? We got this error on a server where, theoretically, we don't do redeploys. Also, do you know the best way to monitor PermGen from the Linux console?
PermGen stands for Permanent Generation. It is where the JVM stores class definitions and (in most cases before Java 7) interned strings. One way to get rid of the error is to simply increase its size using
-XX:PermSize=512m -XX:MaxPermSize=512m
This is what I did, but from your scenario it sounds like there is some kind of memory leak. I am not sure how to detect it, but there are profiling tools available for this, and NetBeans also provides application profiling support.
Here are some good links
http://www.eclipse.org/tptp/home/documents/tutorials/profilingtool/profilingexample_32.html
http://www.ej-technologies.com/products/jprofiler/overview.html
https://netbeans.org/kb/docs/java/profiler-intro.html
It doesn't have to be caused by redeployments or file edits.
It is more likely that your Java apps on the server are using up all allocated memory.
So yes, it is possible to get this error on a Tomcat server in ways other than redeploying WARs or editing JSP files.
When it comes to monitoring, you might be interested in this API: http://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryMXBean.html
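A minimal sketch of reading the relevant pool through that API (pool names vary by collector, and on Java 8+ the pool is called "Metaspace"):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PermGenUsage {
    public static void main(String[] args) {
        // iterate over all memory pools and print the perm/metaspace one
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Perm") || name.contains("Metaspace")) {
                System.out.println(name + ": " + pool.getUsage().getUsed()
                        + " / " + pool.getUsage().getMax() + " bytes");
            }
        }
    }
}

From the Linux console you can also get similar numbers without writing any code, e.g. jstat -gcutil <pid> prints the perm generation utilisation as a percentage.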
Or try looking for monitoring software by typing "tomcat monitor permgen" into Google - plenty of results come back.
There is also a tool for remote monitoring of a JVM: http://visualvm.java.net/
I remember the default PermGen on Tomcat being pretty low; if you have a decent-sized application with a lot of third-party dependencies, this can cause loads of classes to reside in PermGen. You could legitimately need more PermGen space, so try increasing it.
I am working on a Java/J2EE project with JBoss as the application server.
We build our WAR and hot-deploy it to the server using Jenkins. Sometimes we get an Out of Memory error in JBoss.
I wonder if hot deployment is responsible for that. Also, I would like to know if hot deployment has any pitfalls compared to normal manual start-stop deployment.
Can someone please provide some input?
I agree with the answers about adjusting your heap/PermGen space, although it's hard to be specific without more information about how much memory is allowed, what settings you are using, etc.
Also, would like to know if hot deployment has any pitfalls over normal manual start-stop deployment.
When you manually start and stop the service between deployments you can be a little sloppy about cleaning up your web app - and no one will ever know.
When you hot deploy, the previous instance of your servlet context is destroyed. In order to reduce the frequency of OutOfMemory exceptions, you want to make sure that when this occurs, you clean up after yourself. Even though there is nothing you can do about classloader PermGen memory problems, you don't want to compound the problem by introducing additional memory leaks.
For example - if your WAR file starts any worker threads, these need to be stopped. If you bind an object in JNDI, the object should be unbound. If there are any open files, database connections, etc., these should be closed.
If you are using a web framework like Spring, much of this is already taken care of. Spring registers a ServletContextListener (ContextLoaderListener) which automatically closes the application context when the servlet context is destroyed. However, you would still need to make sure that any beans which create resources during init clean up those resources during destroy.
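For example (a sketch only, with made-up names): a bean that starts a java.util.Timer during init has to cancel it during destroy, otherwise the timer thread keeps a reference to the webapp's classloader:

import java.util.Timer;
import java.util.TimerTask;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

import org.springframework.stereotype.Component;

@Component
public class HousekeepingBean {

    private Timer timer;

    @PostConstruct
    public void start() {
        timer = new Timer("housekeeping", true);
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                // periodic work
            }
        }, 0, 60000);
    }

    @PreDestroy
    public void stop() {
        // without this the timer thread survives undeployment
        timer.cancel();
    }
}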
If you are doing a hand-crafted servlet, then you would want to register an implementation of ServletContextListener in your web.xml file, and in the implementation of contextDestroyed clean up any resources.
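A minimal hand-rolled sketch of that (class and attribute names are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class CleanupListener implements ServletContextListener {

    private ExecutorService workers;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        workers = Executors.newFixedThreadPool(4);
        sce.getServletContext().setAttribute("workers", workers);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // stop worker threads started by the webapp
        workers.shutdownNow();
        try {
            workers.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // also: unbind JNDI entries, close files and database connections, etc.
    }
}

It would then be registered in web.xml with a <listener> element pointing at the class, as described above.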
BTW - you should include the exact OutOfMemory exception in your question. If it says something like java.lang.OutOfMemoryError: PermGen space, then it's probably an issue of class instances and there is not much you can do. If it is java.lang.OutOfMemoryError: Java heap space, then perhaps it's memory in your application that is not being cleaned up.
Hot deployment does not clear out the previously loaded Class instances in PermGen; it loads the Class instances afresh. A little googling pointed me back to the SO question "What makes hot deployment a 'hard problem'?"
You should increase the heap space, and specifically the perm space:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-XX:MaxPermSize=<size> set maximum Permanent generation size
You can set these in JAVA_OPTS in your JBoss run.sh or run.bat, e.g.:
-Xms256m -Xmx1024m -XX:MaxPermSize=512m
We are running a Liferay portal on Tomcat 6. Each portlet is a self-contained web application, so it includes all the libraries the portlet itself requires. We currently have 30+ portlets. The result is that the PermGen usage of our Tomcat increases with each portlet we deploy.
We now have two paths we can follow.
Either move some of the libraries which are commonly used by each of our portlets to the Tomcat shared library - stuff like Spring/Hibernate/CXF/... - to decrease our PermGen size.
Or, easier, just increase the PermGen size.
This second option would allow us to keep every portlet as a self-contained entity.
The question now is, is there any negative performance impact from increasing the permgen size? We are currently running at 512MB.
I have found little to no information about this, but I did find some posts where people talk about running with a 1024 MB PermGen without issues.
As long as you have enough memory on your server, I can't imagine anything going wrong. If you don't, well, Tomcat probably wouldn't even start, because it wouldn't be able to allocate enough memory. So, if it does start up, you're good. As far as my experience goes, a 1 GB PermGen is perfectly sound.
The downside of a large PermGen is that it leaves you with less system memory you can then allocate for heap (Xmx).
On the other hand, I'd advise you to reconsider the benefits of thinking of portlets as self-contained entities. For example:
interoperability issues: if all the portlets are allowed to potentially use different versions of the same libraries, there is some risk that they won't cooperate with each other and with the portal itself as intended
performance: the PermGen footprint is just one thing, but bundling JARs here and there in portlets will require additional file descriptors; I don't know about Windows, but this will hurt a Linux server's performance in the long run.
freedom of change: if you're using Maven to build your portlets, switching from lib/ext libraries to the portlet's own lib libraries is just a matter of changing the dependency scope (this may be more annoying with portal libs); as far as I remember, the Liferay SDK also makes it easy to do a similar switch in a jiffy with Ant, by adding an extra Ant task to resolve the dependencies and deleting them from the portlet's lib as required
PermGen memory can be garbage collected by full collections, so increasing it may increase the amount of GC time when a full collection takes place.
These collections shouldn't take place too often though, and would typically still take less than a second to fully GC 1 GB of PermGen memory - I'm pulling this number from (my somewhat hazy) memory, so if you are really worried about GC times, do some timing tests yourself (use -verbose:gc and read the logs).
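A flag combination that works for that kind of test on a Sun/Oracle JDK (the log path is just an example):

    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log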
PermGen is separate from the old generation - so please don't mix them up.
Agreed on the 2nd point - we can increase the perm size as much as we like, since memory is pretty cheap, but this would raise some questions about how we are managing our code. Why do we need this much perm space? Is JTA consuming that much? How many classes are we loading? How many file descriptors is the app opening (check with the lsof command)?
We should try to answer those.
It looks like
java.lang.OutOfMemoryError: PermGen space
is a common problem. You can increase the size of your perm space, but after 100 or 200 redeploys it will be full. Tracking ClassLoader memory leaks is nearly impossible.
What are your methods for Tomcat (or another simple servlet container - Jetty?) on a production server? Is a server restart after each deploy the solution?
Do you use one Tomcat for many applications?
Maybe I should use many Jetty servers on different ports (or an embedded Jetty) and do an undeploy/restart/deploy each time?
I gave up on using the Tomcat manager and now always shut down Tomcat to redeploy.
We run two Tomcats on the same server and use the Apache web server with mod_proxy_ajp so users can access both apps via the same port 80. This is also nice because users see Apache's Service Unavailable page while Tomcat is down.
You can try adding these Java options:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
This enables garbage collection in PermGen space (off by default) and allows the GC to unload classes. In addition you should use the -XX:PermSize=64m -XX:MaxPermSize=128m mentioned elsewhere to increase the amount of PermGen available.
Yes indeed, this is a problem. We're running three web apps on a Tomcat server: No. 1 uses a web application framework, Hibernate and many other JARs, no. 2 uses Hibernate and a few JARs and no. 3 is basically a very simple JSP application.
When we deploy no. 1, we always restart Tomcat. Otherwise a PermGen space error will soon bite us. No. 2 can sometimes be deployed without problem, but since it often changes when no. 1 does as well, a restart is scheduled anyway. No. 3 poses no problem at all and can be deployed as often as needed without problem.
So, yes, we usually restart Tomcat. But we're also looking forward to Tomcat 7, which is supposed to handle many memory / classloader problems that are buried in various third-party JARs and frameworks.
PermGen switches in HotSpot only delay the problem, and eventually you will get the OutOfMemoryError anyway.
We have had this problem a long time, and the only solution I've found so far is to use JRockit instead. It doesn't have a PermGen, so the problem just disappears. We are evaluating it on our test servers now, and we haven't had one PermGen issue since the switch. I also tried redeploying more than 20 times on my local machine with an app that gets this error on first redeploy, and everything chugged along beautifully.
JRockit is meant to be integrated into OpenJDK, so maybe this problem will go away for stock Java too in the future.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
And it's free, under the same license as HotSpot:
https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
You should enable PermGen garbage collection. By default Hotspot VM does NOT collect PermGen garbage, which means all loaded class files remain in memory forever. Every new deployment loads a new set of class files which means you eventually run out of PermGen space.
Which version of Tomcat are you using? Tomcat 7 and 6.0.30 have many features to avoid these leaks, or at least warn you about their cause.
This presentation by Mark Thomas of SpringSource (and longtime Tomcat committer) on this subject is very interesting.
Just for reference, there is a new version of the Plumbr tool that can monitor and detect Permanent Generation leaks as well.