Is it possible to get this error on a Tomcat server in any other way than redeploying WARs or editing JSP files? We got this error on a server where, in theory, we don't do redeploys. Do you know a good way to monitor PermGen from the Linux console?
PermGen means Permanent Generation. This is the place where class metadata and constants such as interned strings (in most cases before Java 7) are stored. One way to get rid of the error is simply to increase the PermGen size using
-XX:PermSize=512m -XX:MaxPermSize=512m
This is what I did, but from your scenario it sounds like there is some kind of memory leak. I am not sure how to detect it, but there are profiling frameworks available for this, and NetBeans also provides application profiling support.
Here are some good links
http://www.eclipse.org/tptp/home/documents/tutorials/profilingtool/profilingexample_32.html
http://www.ej-technologies.com/products/jprofiler/overview.html
https://netbeans.org/kb/docs/java/profiler-intro.html
It doesn't have to be caused by redeployments, let alone by JSP file edits.
It is more likely that the Java apps on your server are simply using up all of the allocated memory.
So yes, it is possible to get this error on a Tomcat server in ways other than redeploying WARs or editing JSP files.
When it comes to monitoring you might be interested in this API: http://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryMXBean.html
Or look for monitoring software by searching for "tomcat monitor permgen" on Google; lots of results are returned.
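As a minimal illustration of the MemoryMXBean / java.lang.management approach linked above (the class name is just for illustration, and the exact PermGen pool name depends on the garbage collector in use):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints the usage of every memory pool of the current JVM; the pool named
// something like "PS Perm Gen" or "CMS Perm Gen" is the PermGen space.
public class PermGenUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-20s used=%,d max=%,d%n",
                    pool.getName(), u.getUsed(), u.getMax());
        }
    }
}

The same beans can be read remotely over JMX, which is what tools like JConsole or VisualVM do for you.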
There is a tool for remote monitoring of a JVM: http://visualvm.java.net/
I remember the default PermGen size on Tomcat being pretty low. If you have a decent-sized application with a lot of third-party dependencies, this can cause loads of classes to reside in PermGen. You could legitimately need more PermGen space, so try increasing it.
Related
I am getting the error below when running the WAR file in Tomcat 7 (Ubuntu):
Exception in thread "http-bio-8080-AsyncTimeout" java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentLinkedQueue.iterator(ConcurrentLinkedQueue.java:667)
at org.apache.tomcat.util.net.JIoEndpoint$AsyncTimeout.run(JIoEndpoint.java:156)
at java.lang.Thread.run(Thread.java:745)
I get the above exception after putting my app.war file in the webapps directory.
Increasing the heap size is a must. Create or edit $CATALINA_HOME/bin/setenv.sh and add the line:
JAVA_OPTS="-Xms1024m -Xmx2048m"
Then restart Tomcat. I also think it is better to unpack the WAR file and copy it to $CATALINA_HOME/webapps; furthermore, using hot deploy in production is not a good idea.
In my case it happens if I deploy my application without restarting Tomcat. I think the GC (garbage collector) has problems freeing the allocated memory.
My workaround is:
Remove the WAR file.
Wait until Tomcat has removed your application folder.
Stop the server.
Upload the WAR file.
Finally, start your Tomcat instance again.
This is common in the Java world and might actually still affect JRuby, although hot deploys are much smoother these days if you're on the latest release (>= 1.7.15) and using jruby-openssl >= 0.9.5. There still might be gems/libraries that cause issues on reload and thus leak memory (one such library is require 'timeout'), so you'll need to review the heap dump for leak candidates. Tomcat also usually prints useful hints about "potential" leak candidates during un-/re-deploys.
Now, there are things to make sure of. If you're on Java 6 you need to use the concurrent (mark-and-sweep) GC, since the other collectors do not sweep the PermGen space and it is not the default. Even so, you will need to increase the PermGen size if you're planning hot reloads, since it needs to be able to hold twice your application's (Java) classes.
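For reference, the kind of JVM options meant here on Java 6 would look roughly like this (the size is only an example and needs tuning for your application):
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=256m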
Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is: if the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the server includes its own unique VM -- i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another -- there are bound to be differences. One relevant area that does matter is class loading: some app servers have better behavior than others with regard to administration. Warm-starting the application after a configuration change can result in serious memory leaks due to class loading problems on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak - in any case, the cause is more important than a quick fix - what would you do if the 'new' container just happens to last an extra week etc? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments:
-XX:+HeapDumpOnOutOfMemoryError
If performance just degrades until you restart the container manually, you should get into the routine of triggering periodic heap dumps; a small sketch for automating this is at the end of this answer. A timeline of dumps is often the most helpful, as you can see which object stores just keep growing over time.
To do this, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?
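Going back to the periodic heap dumps mentioned above, here is a minimal sketch of how they can be automated from inside the JVM, using the HotSpot-specific HotSpotDiagnosticMXBean (the file name and the way you schedule it are just illustrative assumptions):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Writes one heap dump; run it from a ScheduledExecutorService, a cron-driven
// trigger, or similar to build the timeline of dumps described above.
public class PeriodicHeapDump {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects (forces a full GC first)
        diag.dumpHeap("heap-" + System.currentTimeMillis() + ".hprof", true);
    }
}

Alternatively, jmap -dump:live,format=b,file=heap.hprof <pid> from a cron job achieves much the same thing without touching the application.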
It looks like
java.lang.OutOfMemoryError: PermGen space
is a common problem. You can increase the size of your perm space, but after 100 or 200 redeploys it will be full. Tracking ClassLoader memory leaks is nearly impossible.
What are your methods for Tomcat (or another simple servlet container such as Jetty) on a production server? Is a server restart after each deploy a solution?
Do you use one Tomcat for many applications?
Maybe I should use many Jetty servers on different ports (or an embedded Jetty) and do undeploy/restart/deploy each time?
I gave up on using the tomcat manager and now always shutdown tomcat to redeploy.
We run two Tomcats on the same server and use the Apache web server with mod_proxy_ajp so users can access both apps via the same port 80. This is also nice because users see Apache's Service Unavailable page when Tomcat is down.
You can try adding these Java options:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
This enables garbage collection in PermGen space (off by default) and allows the GC to unload classes. In addition you should use the -XX:PermSize=64m -XX:MaxPermSize=128m mentioned elsewhere to increase the amount of PermGen available.
Yes indeed, this is a problem. We're running three web apps on a Tomcat server: No. 1 uses a web application framework, Hibernate and many other JARs, no. 2 uses Hibernate and a few JARs and no. 3 is basically a very simple JSP application.
When we deploy no. 1, we always restart Tomcat. Otherwise a PermGen space error will soon bite us. No. 2 can sometimes be deployed without problem, but since it often changes when no. 1 does as well, a restart is scheduled anyway. No. 3 poses no problem at all and can be deployed as often as needed without problem.
So, yes, we usually restart Tomcat. But we're also looking forward to Tomcat 7, which is supposed to handle many memory / class loader problems that are buried in different third-party JARs and frameworks.
PermGen switches in HotSpot only delay the problem, and eventually you will get the OutOfMemoryError anyway.
We have had this problem a long time, and the only solution I've found so far is to use JRockit instead. It doesn't have a PermGen, so the problem just disappears. We are evaluating it on our test servers now, and we haven't had one PermGen issue since the switch. I also tried redeploying more than 20 times on my local machine with an app that gets this error on first redeploy, and everything chugged along beautifully.
JRockit is meant to be integrated into OpenJDK, so maybe this problem will go away for stock Java too in the future.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
And it's free, under the same license as HotSpot:
https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
You should enable PermGen garbage collection. By default the HotSpot VM does NOT collect PermGen garbage, which means all loaded class files remain in memory forever. Every new deployment loads a new set of class files, which means you eventually run out of PermGen space.
Which version of Tomcat are you using? Tomcat 7 and 6.0.30 have many features to avoid these leaks, or at least warn you about their cause.
This presentation by Mark Thomas of SpringSource (and longtime Tomcat committer) on this subject is very interesting.
Just for reference, there is a new version of the Plumbr tool that can monitor and detect Permanent Generation leaks as well.
Sometimes when I redeploy a WAR too many times, JBoss throws a java.lang.OutOfMemoryError: PermGen space error. Is it possible to monitor JBoss with another Java program that does not run inside JBoss, to make sure it has not run out of memory, and if it has, to restart JBoss automatically?
I would expect that you can monitor the memory consumption via JMX and the MemoryMXBean. You can do this interactively via JConsole, or code up a simple monitor to do this automatically.
Here are some details on how to do this in-process, but you can do this remotely as well. See the JMX docs for more info.
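As a rough sketch of the remote variant (the host, port and the watchdog idea are assumptions; the target JVM must be started with remote JMX enabled, e.g. -Dcom.sun.management.jmxremote.port=9999):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Connects to a remote JVM over JMX and reads its memory usage.
public class RemoteMemoryCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            MemoryMXBean mem = ManagementFactory.newPlatformMXBeanProxy(
                    conn, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            System.out.println("Heap:     " + mem.getHeapMemoryUsage());
            System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage());
            // A watchdog could restart JBoss when the non-heap usage
            // (which includes PermGen) approaches its max.
        } finally {
            connector.close();
        }
    }
}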
Alternatively, you can run a process under the JavaServiceWrapper, and get it to shutdown/restart a process depending on messages coming out from stdout/err. That may be a simple way to perform your restart automatically. However, I'd prefer using the JMX solution in the long term so you can get advance warning of issues (and perhaps tie them to their underlying cause).
I would suggest HypericHQ. It's a very good standalone application that can monitor your JBoss instances, alert you when the permgen or heap gets low, and can even trigger a restart if required. It's a complex beast, but worth the investment.
Every 15-30 minutes Netbeans shows a "java.lang.OutOfMemoryError: PermGen space". From what I learned from Google this seems to be related to classloader leaks or memory leaks in general.
Unfortunately all the suggestions I found were related to application servers, and I have no idea how to adapt them to NetBeans. (I'm not even sure it's the same problem.)
Is it a problem in my application? How can I find the source?
It is because of constant class loading.
Java stores class byte code and all the constants (e.g. string constants) in the permanent heap, which is not garbage collected by default (this makes sense in the majority of situations because classes are loaded only once during the lifetime of an application).
In applications that load classes repeatedly throughout their lifetime, such as:
web and application servers during hot redeployment;
IDEs when running the applications being developed (every time you hit the Run button in NetBeans or Eclipse, your application's classes are loaded anew);
etc.,
this behavior is inappropriate, because the permanent heap eventually fills up.
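To make the "constants in the permanent heap" point concrete, here is a tiny demo (only meaningful on a pre-Java-7 HotSpot VM, where interned strings live in PermGen; on Java 7+ they moved to the regular heap): endlessly interning strings eventually dies with java.lang.OutOfMemoryError: PermGen space.

import java.util.ArrayList;
import java.util.List;

// Pins every interned string with a strong reference so none can be collected.
public class PermGenFiller {
    public static void main(String[] args) {
        List<String> pinned = new ArrayList<String>();
        long i = 0;
        while (true) {
            pinned.add(String.valueOf(i++).intern());
        }
    }
}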
You need to turn on permanent heap garbage collection to prevent this error.
I use options
-XX:MaxPermSize=256M
-XX:+CMSClassUnloadingEnabled
-XX:+CMSPermGenSweepingEnabled
(These stopped my Eclipse 3.4 from throwing "java.lang.OutOfMemoryError: PermGen space", so they should also work with NetBeans.)
Edit: Just note that for Netbeans you set those options in:
[Netbeans installation]\etc\netbeans.conf
You should prefix those options with -J and add them to netbeans_default_options (see the comments in netbeans.conf for more information).
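For example, the line would end up looking something like this, where <existing options> stands for whatever options your NetBeans version already ships with (keep them and just append the new ones):
netbeans_default_options="<existing options> -J-XX:MaxPermSize=256m -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled"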
Try adding the following argument in the NetBeans netbeans.conf: -J-XX:MaxPermSize=256m
If you are developing a web application, try putting the following VM options on the server:
-Xmx512m -Xms512m -XX:MaxPermSize=512m
Go to Tools / Servers / ... / Platform / VM Options (NetBeans 7.4).
See this link on how to set PermSize. Most probably this isn't a problem in your code, so the only solution is to increase PermSize. I have found that this happens quite often when you work with JAXB. In general, if you use many libraries that themselves also depend on many other JAR files, the default size of the PermGen space may not be big enough.
See these blog posts for more details: Classloader leaks and How to fix the dreaded "java.lang.OutOfMemoryError: PermGen space"
You can change the startup options of the NetBeans JVM by editing the file etc/netbeans.conf in the NetBeans installation directory.
I often get this kind of error when I develop a webapp and deploy it frequently. In that case, I restart Tomcat from time to time.