I want to build an automatic script or something equivalent to auto-deploy WAR files to a Tomcat server via Jenkins. The Tomcat manager is not enabled, so the "Deploy to container Plugin" is pretty much a miss, since it needs the manager to be active. At the moment I'm pushing the WAR files to Tomcat via scp, but Tomcat seems to crash on every other try. I also tried the Maven Tomcat deploy alternative, which also needs the Tomcat manager to be active. Is there another way to auto-deploy the WAR files to my Tomcat?
As far as I know, you either need the manager to upload the WAR, or you copy it into Tomcat's webapps directory yourself.
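If you go the copy route, the whole deployment step can be as small as the sketch below. This is only a sketch with hypothetical paths for the Jenkins workspace and the Tomcat installation: it copies the WAR under a temporary name first and then renames it, so Tomcat's auto-deployer never picks up a half-copied archive.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Sketch only: the paths are placeholders for your Jenkins workspace and Tomcat install.
public class WarCopy {
    public static void main(String[] args) throws Exception {
        Path newWar  = Paths.get("/var/lib/jenkins/workspace/myjob/target/myapp.war");
        Path webapps = Paths.get("/opt/tomcat/webapps");

        // Copy under a name Tomcat ignores (it only deploys *.war files and directories),
        // then rename in place so the deployer never sees a partially written file.
        Path tmp = webapps.resolve("myapp.war.part");
        Files.copy(newWar, tmp, StandardCopyOption.REPLACE_EXISTING);
        Files.move(tmp, webapps.resolve("myapp.war"), StandardCopyOption.REPLACE_EXISTING);
    }
}
```

The same idea works when pushing over scp: copy to a temporary name on the server and rename it afterwards.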
If Tomcat crashes after redeployment, I'd check whether there is a reference from an instance whose class was loaded by the Tomcat classloader to an instance whose class was loaded by your application's classloader.
When the GC kicks in, the reference to your application-loaded instance leads to its class, which in turn leads to the classloader, which references every class it loaded. So the GC can't reclaim the memory they occupy.
Most, if not all, of the issues I've encountered with Tomcat failing on redeployment have been caused by this kind of reference. Because of it, the old deployment stays in memory even though nothing happens with it anymore. You can increase the allocated memory, but that only postpones the crash.
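A minimal sketch of that kind of reference, with a made-up class name: the webapp hands an object of its own to something that lives outside its classloader, here the JDK's logging framework.

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Handler class loaded by the *webapp* classloader (hypothetical example).
public class AuditHandler extends Handler {
    @Override public void publish(LogRecord record) { /* ... */ }
    @Override public void flush() { }
    @Override public void close() { }

    // Called from the webapp during startup. The root Logger is held by
    // java.util.logging.LogManager, which is loaded by the JVM's system
    // classloader and survives undeploy. It now references an AuditHandler
    // instance, so the webapp classloader (and every class it loaded) can
    // never be collected unless removeHandler() is called on undeploy.
    public static void register() {
        Logger.getLogger("").addHandler(new AuditHandler());
    }
}
```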
Related
In tomcat/manager, when I click the "Find leaks" button, I get the following message:
The following web applications were stopped (reloaded, undeployed), but their
classes from previous runs are still loaded in memory, thus causing a memory
leak (use a profiler to confirm):
This is actually causing big problems for me, because when I need to redeploy an app with some changes, the old version remains on the server, and when I try to use it I get:
IllegalStateException: Illegal access: this web application instance has been stopped already
Even though in tomcat/manager the application shows as started.
How do I resolve this? I looked here but it did not solve the problem. I tried running the jps command to get the PID of the JVM, but it doesn't return the JVMs, I guess due to a permission issue.
Is it possible to configure Tomcat somehow so that when an application is undeployed, it doesn't keep any classes of that application in memory?
The question "How to purge tomcat's cache when deploying a new .war file? Is there a config setting?" does not solve my problem; I followed the steps given there:
undeploy the app
stop tomcat
delete the app from `work` directory of tomcat
clear browser cache
start tomcat
put the war file in `webapps`
wait a few moments
start the app
Those steps didn't solve the problem.
There is no way to configure Tomcat to drop classes that are still referenced. An easy way (probably not the only one) to cause this is to start a thread from your web application and not terminate it when the app is undeployed: to Tomcat, that thread is then invisible, but to the JVM the running thread is a valid GC root, so the classes it references can't be garbage collected.
Tomcat does its best to detect and report such a condition, but it really can't go against the JVM and garbage collector and discard perfectly valid objects that are still referenced.
The steps you give, including stopping and starting Tomcat, should make this impossible: once you terminate the JVM, the only way a newly started JVM can reference the old classes is by loading them again from somewhere. You also state that you're deleting the work directory (what about the temp directory?). Make sure you're actually deleting the right content and not some other server's content in a similar directory. Other than that, it seems you're just missing an almost obvious place where the old classes are located; once they're picked up on restart, the condition I describe above kicks in again. There's no way around implementing the webapp to behave nicely and not start background threads without terminating them (see the sketch below).
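A sketch of exactly that mistake, with hypothetical names: a listener that starts a worker thread on startup and never stops it, so after undeploy the still-running thread pins the webapp classloader.

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Anti-example: the worker thread outlives the webapp, so the classes it uses
// (and the webapp classloader that loaded them) can never be garbage collected.
public class LeakyListener implements ServletContextListener {
    private Thread worker;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        worker = new Thread(new Runnable() {
            public void run() {
                while (true) {                   // never checks for interruption
                    try {
                        Thread.sleep(60000);     // ... poll something ...
                    } catch (InterruptedException ignored) {
                        // swallowed, so interrupt() can't stop the loop either
                    }
                }
            }
        }, "leaky-worker");
        worker.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Missing: worker.interrupt() plus a run() loop that actually exits on interrupt.
    }
}
```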
You might also want to check whether CATALINA_HOME and CATALINA_BASE are different, which could hint at another application deployed under the same name, and whether your application still gets started when you just delete the files you describe and then start Tomcat without deploying anything new.
I have a Vaadin app where, if I don't limit the RAM, the WAR will run it up to 2.5+ GB in Tomcat, but if I limit it to 1 GB in Eclipse using this, the program stays steady around 700 MB (when idle) to 1.2 GB (when running a large dataset).
Is there a way to export this WAR with the memory constraint? I have other WAR apps on the same Tomcat server, but this one is the only one that runs rampant. Or is it better practice to create a separate virtual server and set the memory constraint in Tomcat just for it?
Eclipse runs each program in a separate JVM.
A Tomcat instance runs inside a JVM, and so do all web apps deployed in it, so you can't set a memory limit for an individual WAR. You could start a second Tomcat instance (and thus a second JVM) and set specific memory constraints for that JVM.
I know Tomcat can reload a .war when I redeploy it; I don't need to kill the Tomcat process and restart it. I can remove the .war, wait for Tomcat to undeploy it, and copy the new .war to the web path. But after many such trivial updates of the war without restarting Tomcat, is it possible that Tomcat will not release memory effectively or will run into performance issues? Assume there is only one war application in each Tomcat instance.
The basic problem is that Java does not currently provide any kind of isolation between the parts of code running in a Java Virtual Machine (JVM), in the way an operating system does with processes. Under Windows/Linux/etc. you can kill a process without affecting any other process; inside a JVM, all you can do is ensure that things can be garbage collected.
For Tomcat, the way WARs are handled - according to the various specifications - requires that each WAR gets its own classloader that is responsible for running its code. When the WAR is undeployed, the end result should be that this classloader can be garbage collected.
Unfortunately the garbage collector can only reclaim objects that are completely unreferenced, and there is a large set of subtle bugs that can be present in WAR code that prevent this. Then every redeploy creates another classloader while none are destroyed, so you have a memory leak. A lot of effort has gone into detecting and working around these kinds of bugs inside Tomcat itself, but it is close to impossible to get 100% right without JVM support. One such subtle bug is sketched below.
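For illustration only, a hypothetical class showing one of those subtle bugs: a ThreadLocal that leaves a webapp-loaded value on one of Tomcat's pooled request threads. The pool threads survive the undeploy, so the value, and through it the whole webapp classloader, stays reachable.

```java
// Hypothetical example of a ThreadLocal leak on redeploy.
public class RequestContext {
    private static final ThreadLocal<RequestContext> CURRENT = new ThreadLocal<RequestContext>();

    public static void set(RequestContext ctx) {
        CURRENT.set(ctx);
    }

    public static RequestContext get() {
        return CURRENT.get();
    }

    // The fix: call this in a finally block at the end of every request
    // (for example from a servlet Filter), so nothing webapp-loaded is
    // left behind on Tomcat's pooled worker threads.
    public static void clear() {
        CURRENT.remove();
    }
}
```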
The only cure besides fixing the WAR is to restart the JVM.
You can watch the memory usage with VisualVM, even in production, to see what happens to the Tomcat JVM over time.
Yes. It is far cleaner to stop Tomcat, deploy your new war, then restart Tomcat. One drawback is that by default much of your application's classes won't be loaded until the first request hits your application, but that's not a huge issue; it just means there are a few seconds of startup on the very first request to your new WAR. This is how we deploy wars in production.
It also allows us to set up health checking in the logs: if the new war prevents Tomcat from starting up correctly, we roll back the war knowing there's an issue. But that's a separate topic.
What About Down Time?
This might be out of scope of your question, but when you want to prevent users from seeing any downtime, you run multiple instances of Tomcat and deploy to and restart them one at a time.
I have a problem with a memory leak in my Tomcat server.
When I undeploy and redeploy the war, the following error message appears:
The following web applications were stopped (reloaded, undeployed), but their classes from previous runs are still loaded in memory, thus causing a memory leak (use a profiler to confirm): /myWebApplication
How can I fix it so that I can deploy normally?
If anyone knows, please help me.
Thank you very much!
Make sure that you implement a ServletContextListener and clean up any resources (e.g. threads, timers, singletons, Hibernate) that may still be active or referenced at the time contextDestroyed() is called.
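A minimal sketch of such a listener, with hypothetical names; here the resource being cleaned up is a scheduled executor that the app started at deploy time.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Registered via <listener> in web.xml (or @WebListener on Servlet 3.0+).
public class AppLifecycleListener implements ServletContextListener {
    private ScheduledExecutorService scheduler;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                // periodic work for the webapp goes here
            }
        }, 1, 1, TimeUnit.MINUTES);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stop the worker thread so it doesn't pin the webapp classloader after undeploy.
        scheduler.shutdownNow();
        try {
            scheduler.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```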
I am working in a Java/J2EE project with JBoss as an application server.
We build our war and do hot deployment on the server using Jenkins. Sometimes we get an Out of Memory error in JBoss.
I wonder if hot deployment is responsible for that. I would also like to know whether hot deployment has any pitfalls compared with a normal manual start-stop deployment.
Can someone please provide some valuable input?
I agree with the answers about adjusting your heap/permgen space, although it's hard to be specific without more information about how much memory is allowed, what you are using, etc.
Also, would like to know if hot deployment has any pitfalls over normal manual start-stop deployment.
When you manually start and stop the service between deployments, you can be a little sloppy about cleaning up your web app and no one will ever know.
When you hot deploy, the previous instance of your servlet context is destroyed. To reduce the frequency of OutOfMemory exceptions, you want to make sure that when this happens you clean up after yourself. Even though there is nothing you can do about classloader PermGen memory problems, you don't want to compound the problem by introducing additional memory leaks.
For example, if your war file starts any worker threads, these need to be stopped. If you bind an object in JNDI, the object should be unbound. If there are any open files, database connections, etc., these should be closed. A sketch of this kind of cleanup follows.
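A small sketch of that cleanup, meant to be called from a ServletContextListener's contextDestroyed(); the JNDI name is made up, and the JDBC driver deregistration is just one common example of a reference that otherwise outlives the deployment.

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;
import javax.naming.InitialContext;

// Call DeployCleanup.release() from contextDestroyed() in your listener.
public final class DeployCleanup {

    public static void release() {
        // Unbind whatever this app bound in JNDI (the name here is hypothetical).
        try {
            new InitialContext().unbind("java:/myAppService");
        } catch (Exception ignored) {
            // nothing was bound, or the naming context is already gone
        }

        // Deregister JDBC drivers loaded by this deployment's classloader;
        // otherwise DriverManager keeps a reference to that classloader.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == DeployCleanup.class.getClassLoader()) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (Exception ignored) {
                }
            }
        }
    }

    private DeployCleanup() {
    }
}
```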
If you are using a web framework like Spring, much of this is already taken care of. Spring registers a ServletContextListener which automatically shuts down the Spring container when the servlet context is destroyed. However, you would still need to make sure that any beans which create resources during init clean up those resources during destroy.
If you are writing a hand-crafted servlet, you would want to register an implementation of ServletContextListener in your web.xml file and, in its contextDestroyed implementation, clean up any resources.
BTW, you should include the exact OutOfMemory exception in your question. If it says something like java.lang.OutOfMemoryError: PermGen space, then it's probably an issue of class instances and there is not much you can do. If it is java.lang.OutOfMemoryError: Java heap space, then perhaps it's memory in your application that is not being cleaned up.
Hot deployment does not clear the previously loaded Class instances out of PermGen; it loads the Class instances afresh. A little googling pointed me back to the SO question What makes hot deployment a "hard problem"?
You should increase the heap space, specifically the perm space:
-Xms<size>              set initial Java heap size
-Xmx<size>              set maximum Java heap size
-XX:MaxPermSize=<size>  set maximum permanent generation size
You can set them via JAVA_OPTS in your JBoss run.sh or run.bat, for example:
-Xms256m -Xmx1024m -XX:MaxPermSize=512m
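As a quick sanity check (a sketch only, not JBoss-specific), you can print the limits the running JVM actually picked up, to confirm the options above took effect:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

// Prints the heap and perm-gen limits of the JVM it runs in.
public class MemorySettings {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap max:     " + toMb(memory.getHeapMemoryUsage().getMax()));
        System.out.println("Non-heap max: " + toMb(memory.getNonHeapMemoryUsage().getMax()));

        // On pre-Java-8 JVMs the permanent generation shows up as a memory pool.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Perm Gen")) {
                System.out.println(pool.getName() + " max: " + toMb(pool.getUsage().getMax()));
            }
        }
    }

    private static String toMb(long bytes) {
        return bytes < 0 ? "undefined" : (bytes / (1024 * 1024)) + " MB";
    }
}
```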