Error during hot deployment in JBoss - java

I am working in a Java/J2EE project with JBoss as an application server.
We build our war and hot-deploy it to the server using Jenkins. Sometimes we get an Out of Memory error in JBoss.
I wonder if hot deployment is responsible for that. I would also like to know whether hot deployment has any pitfalls compared with a normal manual start-stop deployment.
Can someone please provide some valuable inputs?

I agree with the answers about adjusting your heap/PermGen space, although it's hard to be specific without more information about how much memory is allocated, what the application is doing, and so on.
I would also like to know whether hot deployment has any pitfalls compared with a normal manual start-stop deployment.
When you manually start and stop the service between deployments you can be a little sloppy about cleaning up your web app - and no one will ever know.
When you hot deploy, the previous instance of your servlet context is destroyed. In order to reduce the frequency of OutOfMemory exceptions, you want to make sure that when this occurs, you clean up after yourself. Even though there is nothing you can do about classloader PermGen memory problems, you don't want to compound the problem by introducing additional memory leaks.
For example - if your war file starts any worker threads, these need to be stopped. If you bind an object in JNDI, the object should be unbound. If there are any open files, database connects etc. these should be closed.
If you are using a web framework like Spring - much of this is already taken care of. Spring registers a ServletContextListener which automatically stops the container when the servlet context is destroyed. However you would still need to make sure that any beans which create resources during init will clean up those resources during destroy.
If you are doing a hand-crafted servlet, then you would want to register an implementation of ServletContextListener in your web.xml file, and in the implementation of contextDestroyed clean up any resources.
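As a rough sketch of that cleanup (the thread pool and the listener-style method here are hypothetical, just to illustrate stopping worker threads when the context goes away):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical example: the kind of cleanup you would put in
// ServletContextListener.contextDestroyed() so a redeploy does not
// leave the old webapp's threads (and classloader) pinned in memory.
public class WorkerCleanup {
    static final ExecutorService workers = Executors.newFixedThreadPool(2);

    static void contextDestroyed() throws InterruptedException {
        workers.shutdown();                        // stop accepting new tasks
        if (!workers.awaitTermination(5, TimeUnit.SECONDS)) {
            workers.shutdownNow();                 // interrupt any stragglers
        }
    }

    public static void main(String[] args) throws InterruptedException {
        workers.submit(() -> System.out.println("working"));
        contextDestroyed();
        System.out.println("shutdown=" + workers.isShutdown());
    }
}
```

The same idea applies to JNDI bindings, open files and database connections: whatever the context created during init should be released in contextDestroyed.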
BTW - you should include the exact OutOfMemory exception in your question. If it says something like java.lang.OutOfMemoryError: PermGen space, then it's probably an issue of class instances and there is not much you can do. If it is java.lang.OutOfMemoryError: Java heap space, then perhaps it's memory in your application that is not being cleaned up.

Hot deployment does not clear the previously loaded Class instances in the perm gen; it loads fresh Class instances alongside them. A little googling pointed me back to this SO question: What makes hot deployment a "hard problem"?

You should increase the heap space, and specifically the perm space:
-Xms<size> sets the initial Java heap size
-Xmx<size> sets the maximum Java heap size
-XX:MaxPermSize=<size> sets the maximum permanent generation size
You can set it in JAVA_OPTS in your jboss run.sh or run.bat like:
-Xms256m -Xmx1024m -XX:MaxPermSize=512m
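For example, on JBoss AS the options can go into run.conf, which run.sh sources on startup (the path and sizes below are illustrative):

```shell
# $JBOSS_HOME/bin/run.conf (on Windows, edit run.bat: set "JAVA_OPTS=...")
JAVA_OPTS="$JAVA_OPTS -Xms256m -Xmx1024m -XX:MaxPermSize=512m"
```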

Related

How to prevent a Spring Boot / Tomcat (Java8) process be OOM-killed?

Since moving to Tomcat8/Java8, now and then the Tomcat server is OOM-killed. OOM = Out-of-memory kill by the Linux kernel.
How can I prevent the Tomcat server be OOM-killed?
Can this be the result of a memory leak? I guess I would get a normal Out-of-memory message, but no OOM-kill. Correct?
Should I change settings in the HEAP size?
Should I change settings for the MetaSpace size?
Knowing which Tomcat process is killed, how to retrieve info so that I can reconfigure the Tomcat server?
Firstly, check that the OOM kill isn't being triggered by another process on the system, and that the server isn't overloaded with other processes. It could be that Tomcat is being unfairly targeted by the OOM killer when some other greedy process is the culprit.
The maximum heap size (-Xmx) should be set smaller than the physical RAM on the server. If it is more than this, paging will cause desperately poor performance during garbage collection.
If it's caused by the metaspace growing in an unbounded fashion, then you need to find out why that is happening. Simply setting a maximum metaspace size will cause an OutOfMemoryError once you reach that limit, and raising the limit is pointless, because eventually you'll hit any higher limit you set.
Run your application and, before it crashes (not easy of course; you'll need to judge it), kill -3 the Tomcat process. Then analyse the heap and try to find out why the metaspace is growing so big. It's usually caused by dynamically loading classes. Is this something your application is doing? More likely, it's some framework doing it. (NB: the OOM killer will kill -9 the Tomcat process, and you won't be able to run diagnostics after that, so you need to let the app run and intervene before this happens.)
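A sketch of that intervention from the command line; the process match and file paths are assumptions, and jmap/jcmd ship with the JDK:

```shell
# assumes a single Tomcat process is running
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)

kill -3 "$PID"    # thread dump, written to catalina.out

# heap dump for offline analysis in Eclipse MAT / VisualVM
jmap -dump:live,format=b,file=/tmp/tomcat.hprof "$PID"

# Java 8+: quick histogram of loaded classes, useful for spotting growth
jcmd "$PID" GC.class_histogram | head -30
```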
Also check out this question; there's an intriguing answer claiming that an obscure fix to an XML binding setting cleared the problem (highly questionable, but may be worth a try): java8 "java.lang.OutOfMemoryError: Metaspace"
Another very good solution is transforming your application into a Spring Boot JAR (Docker) application. Normally such an application has much lower memory consumption.
So, the steps to get huge improvements (if you can move to a Spring Boot application):
Migrate to a Spring Boot application. In my case, this took 3 simple actions.
Use a light-weight base image. See below.
VERY IMPORTANT: use the Java memory balancing options. See the last line of the Dockerfile below. This reduced my running container's RAM usage from over 650MB to only 240MB, running smoothly. So, a saving of over 400MB out of 650MB!
This is my Dockerfile:
FROM openjdk:8-jdk-alpine
ENV JAVA_APP_JAR your.jar
ENV AB_OFF true
EXPOSE 8080
ADD target/$JAVA_APP_JAR /deployments/
CMD ["java","-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar","/deployments/your.jar"]

Redeploying on wildfly cause outofmemory: metaspace

I'm currently investigating some out-of-metaspace issues we've been experiencing recently. One of the main culprits seems to be the loading of duplicate classes upon redeployment of a WAR. Trying it out locally with just one of our WARs, by taking a heap dump after undeploying completely, I can see that the majority of the instances created by the application are still there (even after garbage collection).
From the heap dump, I can see that it seems to be the ManagedThreadFactoryImpl that is holding onto references.
Is there anything I can do/add to the application shutdown process so it cleans up after itself?
All our WARs are spring applications, most use scheduling/asynchronous elements.
We're using JDK8 with Wildfly 8.2
Seems like the classloaders are not unloading. Try Java Mission Control (JMC) and record the use case. This lets you go to a specific point in time in your recording and debug the issue. It gives a snapshot of the classes loaded at a specific time, with stack traces, thread dumps and a lot of other important things.
JMC is included in JDK. You can find more info here: https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr002.html#BABIBBDE
You don't have to go through the pain of taking heap dumps and then waiting for a tool to analyze them.

java.lang.OutOfMemoryError: Java heap space deploying .war on Tomcat 7

Getting below error when running the war file in tomcat7 (Ubuntu):
Exception in thread "http-bio-8080-AsyncTimeout" java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentLinkedQueue.iterator(ConcurrentLinkedQueue.java:667)
at org.apache.tomcat.util.net.JIoEndpoint$AsyncTimeout.run(JIoEndpoint.java:156)
at java.lang.Thread.run(Thread.java:745)
I am getting above exception when I have put my app.war file in webapps path.
Increasing the heap size is a must. You can create or edit $CATALINA_HOME/bin/setenv.sh and add the line:
JAVA_OPTS="-Xms1024m -Xmx2048m"
And then restart Tomcat. I think it is better to unpack the war file and copy it to $CATALINA_HOME/webapps; furthermore, using hot-deploy in production is not a good idea.
In my case it happens if I'm deploying my application without restarting Tomcat. I think the garbage collector has problems freeing the allocated memory.
My workaround is:
Just remove the war file
wait until your application folder is removed by Tomcat
stop the server
upload the new war file
and finally start your Tomcat instance again.
This is common in the Java world and might actually still affect JRuby, although hot-deploys are much smoother these days if you're on the latest version (>= 1.7.15) and using jruby-openssl >= 0.9.5. There still might be gems/libraries that have issues reloading and thus will leak memory (one such library is require 'timeout'), so you'll need to review the heap dump for leak candidates. Tomcat also usually prints useful hints about "potential" leak candidates during un-/re-deploys.
Now, there are things to make sure of. If you're on Java 6 you need to use the concurrent (mark-and-sweep) GC, since the other collectors do not sweep the perm gen space and it's not the default. Even so, you will need to increase the perm gen size if you're planning hot-reloads, since it needs to be able to hold twice your application's (Java) classes.
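Under those assumptions (Java 6, HotSpot), the JAVA_OPTS would look something like this; the sizes are illustrative:

```shell
# concurrent mark-and-sweep collector plus class unloading,
# with extra perm gen headroom for hot-reloads
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled \
  -XX:MaxPermSize=256m"
```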

OutOfMemory PermGen error - ways to get this error

Is it possible to get this error on a Tomcat server in any way other than redeploying wars or editing jsp files? We got this error on a server where, theoretically, we don't do redeploys. Do you know the best way to monitor PermGen from the linux console?
PermGen means Permanent Generation; this is the place where class data and constants are stored (interned strings, in most cases before Java 7, also lived in the perm gen). One way to get rid of the error is simply to increase the space with:
-XX:MaxPermSize=512m
This is what I did, but from your scenario it feels like there is some kind of memory leak. I am not sure how to detect it; there are frameworks available for this, and NetBeans also provides application profiling support.
Here are some good links
http://www.eclipse.org/tptp/home/documents/tutorials/profilingtool/profilingexample_32.html
http://www.ej-technologies.com/products/jprofiler/overview.html
https://netbeans.org/kb/docs/java/profiler-intro.html
It doesn't have to be because of redeployments, or even file edits.
It is more likely that your Java apps on the server are using up all the allocated memory.
So yes, it is possible to get this error on a Tomcat server in ways other than redeploying wars or editing jsp files.
When it comes to monitoring, you might be interested in this API: http://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryMXBean.html
Or try searching for monitoring software by typing tomcat monitor permgen into Google; lots of results are returned.
There is also a tool for remote monitoring of the VM: http://visualvm.java.net/
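A minimal, self-contained sketch of polling that API from inside the JVM; pool names differ between JVM versions (PS Perm Gen, Metaspace, ...), hence the substring match:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints usage of the PermGen/Metaspace pool(s) via java.lang.management.
public class PermGenMonitor {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Perm") || name.contains("Metaspace")) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                        name, usage.getUsed(), usage.getMax());
            }
        }
    }
}
```

These are the same beans that JConsole and VisualVM read remotely, so this is mostly useful for quick scripted checks.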
I remember the default PermGen on Tomcat being pretty low; if you have a decent-sized application with a lot of third-party dependencies, this can cause loads of classes to reside in PermGen. You could legitimately need more PermGen space, so try increasing it.

Tomcat on production server, PermGen and redeploys

It looks like
java.lang.OutOfMemoryError: PermGen space
is a common problem. You can increase the size of your perm space, but after 100 or 200 redeploys it will be full. Tracking ClassLoader memory leaks is nearly impossible.
What are your methods for Tomcat (or another simple servlet container - Jetty?) on production server? Is server restart after each deploy a solution?
Do you use one Tomcat for many applications ?
Maybe I should use many Jetty servers on different ports (or an embedded Jetty) and do undeploy/restart/deploy each time ?
I gave up on using the tomcat manager and now always shutdown tomcat to redeploy.
We run two Tomcats on the same server and use the Apache web server with mod_proxy_ajp so users can access both apps via the same port 80. This is also nice because users see the Apache Service Unavailable page when Tomcat is down.
You can try adding these Java options:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
This enables garbage collection in PermGen space (off by default) and allows the GC to unload classes. In addition you should use the -XX:PermSize=64m -XX:MaxPermSize=128m mentioned elsewhere to increase the amount of PermGen available.
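For Tomcat, those options would typically go into setenv.sh; the file name follows Tomcat convention, and the sizes are the ones quoted above:

```shell
# $CATALINA_HOME/bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -XX:PermSize=64m -XX:MaxPermSize=128m \
  -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"
```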
Yes indeed, this is a problem. We're running three web apps on a Tomcat server: No. 1 uses a web application framework, Hibernate and many other JARs, no. 2 uses Hibernate and a few JARs and no. 3 is basically a very simple JSP application.
When we deploy no. 1, we always restart Tomcat. Otherwise a PermGen space error will soon bite us. No. 2 can sometimes be deployed without problem, but since it often changes when no. 1 does as well, a restart is scheduled anyway. No. 3 poses no problem at all and can be deployed as often as needed without problem.
So, yes, we usually restart Tomcat. But we're also looking forward to Tomcat 7, which is supposed to handle many memory / class loader problems that are buried in different third-party JARs and frameworks.
PermGen switches in HotSpot only delay the problem, and eventually you will get the OutOfMemoryError anyway.
We have had this problem a long time, and the only solution I've found so far is to use JRockit instead. It doesn't have a PermGen, so the problem just disappears. We are evaluating it on our test servers now, and we haven't had one PermGen issue since the switch. I also tried redeploying more than 20 times on my local machine with an app that gets this error on first redeploy, and everything chugged along beautifully.
JRockit is meant to be integrated into OpenJDK, so maybe this problem will go away for stock Java too in the future.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
And it's free, under the same license as HotSpot:
https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
You should enable PermGen garbage collection. By default Hotspot VM does NOT collect PermGen garbage, which means all loaded class files remain in memory forever. Every new deployment loads a new set of class files which means you eventually run out of PermGen space.
Which version of Tomcat are you using? Tomcat 7 and 6.0.30 have many features to avoid these leaks, or at least warn you about their cause.
This presentation by Mark Thomas of SpringSource (and longtime Tomcat committer) on this subject is very interesting.
Just for reference, there is a new version of the Plumbr tool that can monitor and detect Permanent Generation leaks as well.
