We are running a Liferay portal on Tomcat 6. Each portlet is a self-contained web application, so it includes all the libraries the portlet itself requires. We currently have 30+ portlets. The result is that the PermGen of our Tomcat instance grows with each portlet we deploy.
We now have two paths we can follow.
Either move the libraries commonly used by all of our portlets (Spring, Hibernate, CXF, ...) into Tomcat's shared library directory, to decrease our PermGen size.
Or, the easier option: simply increase the PermGen size.
This second option would allow us to keep every portlet as a self-contained entity.
The question now is: is there any negative performance impact from increasing the PermGen size? We are currently running at 512 MB.
I have found little to no information about this, but I did find some posts where people talk about running with a 1024 MB PermGen without issues.
As long as you have enough memory on your server, I can't imagine anything going wrong. If you don't, Tomcat probably wouldn't even start, because it wouldn't be able to allocate enough memory. So if it does start up, you're good. In my experience, a 1 GB PermGen is perfectly sound.
The downside of a large PermGen is that it leaves you with less system memory to allocate to the heap (-Xmx).
On the other hand, I'd advise you to reconsider the benefits of thinking of portlets as self-contained entities. For example:
interoperability issues: if all the portlets are allowed to use potentially different versions of the same libraries, there is some risk that they won't cooperate with each other, or with the portal itself, as intended
performance: the PermGen footprint is just one thing, but adding jars here and there in each portlet also requires additional file descriptors; I don't know about Windows, but this will hurt a Linux server's performance in the long run
freedom of change: if you're using Maven to build your portlets, switching from lib/ext libraries to the portlet's own lib is just a matter of changing the dependency scope (this may be more annoying with portal libs; see the sketch below); as far as I remember, the Liferay SDK also lets you make a similar switch with Ant in a jiffy, by adding an Ant task that resolves the dependencies and deletes them from the portlet's lib as required
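For illustration, a minimal sketch of that scope switch in a portlet's pom.xml (the artifact and version below are just examples): with the default compile scope the jar is bundled into the portlet's WEB-INF/lib, while provided tells Maven the container's shared classpath supplies it:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>3.0.5.RELEASE</version>
    <!-- "compile" (the default) bundles the jar into WEB-INF/lib; -->
    <!-- switch to "provided" once the jar lives in Tomcat's shared lib -->
    <scope>provided</scope>
</dependency>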
PermGen memory can be garbage collected by full collections, so increasing it may increase the amount of GC time when a full collection takes place.
These collections shouldn't take place too often, though, and a full GC of 1 GB of PermGen would typically still take less than a second - I'm just pulling this number from (my somewhat hazy) memory, so if you are really worried about GC times, do some timing tests yourself (use -verbose:gc and read the logs; more details here).
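For reference, a hedged set of Java 6 HotSpot flags for such a test (the log file name is just an example):
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
Each "Full GC" entry in gc.log then reports the pause duration, so you can check the collection cost for yourself.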
PermGen is separate from the old generation - please don't mix the two up.
Agreed on the second point - we can increase the PermGen size as much as we like, since memory is pretty cheap, but that raises some questions about how we are managing our code: why do we need this much PermGen? Is JTA consuming that much? How many classes are we loading? How many file descriptors is the app opening (check with the lsof command)?
We should try to answer those.
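As a starting point, two hedged command-line sketches on a Java 6 JVM (replace <pid> with the Tomcat process id):
lsof -p <pid> | wc -l
jmap -permstat <pid>
The first counts the open file descriptors; the second prints per-classloader statistics (classes and bytes held in PermGen), which helps answer the "how many classes are we loading?" question.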
Related
I am building a very complex piece of software that will be used in production and will run on a server as a service.
I need this jar to run with a maximum RAM usage derived from some calculations my program makes. I have seen that there are ways to set the memory before running the built program, but I would like to set how much memory the jar will use while I am running it. Is this possible?
There are two issues here. As mentioned above, you can only request up to a specific amount of memory. Efficient garbage collection can help you reclaim memory that is no longer needed.
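For reference, a minimal sketch of requesting that cap at launch (the jar name is hypothetical; -Xmx sets an upper bound on the heap, it does not reserve the memory up front):
java -Xms256m -Xmx512m -jar myapp.jar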
The second, and probably the real, issue here is metering how much memory the application actually uses. There are many frameworks (e.g., JMeter) for measuring how much memory is used, and this can be done with respect to the amount of data involved. When tackling NP-complete (or even just worse-than-O(n)) problems, this can be very useful from the user's perspective ("This works well with up to 2 ||| objects").
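If you want the application to meter itself, a minimal sketch using only the standard Runtime API (no extra frameworks assumed):
// Rough snapshot of current heap usage: totalMemory() is what the JVM
// has claimed from the OS so far, freeMemory() is the unused part of it.
Runtime rt = Runtime.getRuntime();
long usedBytes = rt.totalMemory() - rt.freeMemory();
System.out.printf("Heap in use: %d MB (max %d MB)%n",
        usedBytes / (1024 * 1024), rt.maxMemory() / (1024 * 1024));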
Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is: if the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the servers ship with different VMs -- i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another -- there are bound to be differences. One relevant area that does matter is class loading: some app servers handle it better than others. Warm-starting the application after a configuration change can result in serious memory leaks, due to class loading problems, on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak - in any case, the cause is more important than a quick fix - what would you do if the 'new' container just happens to last an extra week etc? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments (note the +; a leading - would disable the option):
-XX:+HeapDumpOnOutOfMemoryError
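You may also want to control where the dump lands (the directory below is just an example - pick one with enough disk space):
-XX:HeapDumpPath=/var/dumps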
If performance just degrades until you restart the container manually, you should get into the routine of taking periodic heap dumps (see the sketch below). A timeline of dumps is often the most helpful, as you can see which object stores keep growing over time.
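A hedged sketch of such a routine using jmap from the JDK (replace 12345 with the container's pid; the file name pattern is an example) - run it from cron every hour or so and compare the dumps:
jmap -dump:format=b,file=/tmp/heap-$(date +%Y%m%d-%H%M).hprof 12345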
To do this, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?
I constantly get OOM in PermGen in my environment:
Java 6
jboss-4.2.3
Not a big web application
I know about the String.intern() problem, but I don't make heavy enough use of it for that to matter.
Increasing MaxPermSize (from 128 MB to 256 MB) didn't help.
What other causes could provoke an OOM in PermGen?
What is the best way to investigate such a situation (strategy, tools, etc.)?
Thanks for any help
See this note
Put the JDBC driver in common/lib (as the Tomcat documentation says) and not in WEB-INF/lib
Don't put commons-logging into WEB-INF/lib, since Tomcat already bootstraps it
Each time you deploy, new class objects get placed into the PermGen and thus occupy an ever-increasing amount of space. Regardless of how large you make the PermGen space, it will inevitably top out after enough deployments. What you need to do is take measures to flush the PermGen so that you can stabilize its size. There are two JVM flags which handle this cleaning:
-XX:+CMSPermGenSweepingEnabled
This setting includes the PermGen in a garbage collection run. By default, the CMS collector does not sweep the PermGen space, so with repeated deployments it grows without bounds.
-XX:+CMSClassUnloadingEnabled
This setting tells the PermGen garbage collection sweep to take action on class objects. By default, class objects get an exemption, even when the PermGen space is being visited during a garbage collection.
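Note that both flags only take effect under the CMS collector, so the complete set would look like this:
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled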
You typically get this error when redeploying an application while having a classloader leak, because it means all your classes are loaded again while the old versions stay around.
There are two solutions:
Restart the app server instead of redeploying the application - easy, but annoying
Investigate and fix the leak using a profiler. Unfortunately, classloader leaks can be very hard to pinpoint.
I'm running an ICEfaces application on JBoss; my current heap size is set to
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m
What memory parameters would you recommend for JBoss AS 5 (5.0.1 GA) on JVM 6?
According to this article:
AS 5 is known to be greedy when it comes to PermGen. When starting, it often throws OutOfMemoryError: PermGen space.
This can be particularly annoying during development, when you are hot-deploying an application frequently. In this case, JBoss QA recommends raising the PermGen size, allowing class unloading, and enabling PermGen sweeping:
-XX:PermSize=512m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
But this is more FYI; I'm not suggesting you apply this configuration blindly (as people wrote in the comments, "if it ain't broken, don't fix it").
Regarding your heap size, always keep in mind: the bigger the heap, the longer the major GC. Now, when you say "it was definitely too small", I don't really know what this means (what errors, symptoms, etc). To my knowledge, a 1024m heap is actually pretty big for a webapp and should really be more than enough for most of them. Just beware of the major GC duration.
Heap: start with 512 MB, set the cap at a level you believe your app should never reach, and not so high that your server starts swapping.
PermGen: that's usually stable enough once the app has loaded all the classes it uses. If you have tested the app and it works with 256 MB, then leave it at that.
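For instance, a hedged starting point for the JAVA_OPTS line in JBoss's run.conf or Tomcat's setenv script (the values are illustrative, not a recommendation):
JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m"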
#wds: It's definitely not a good idea to set the heap maximum as high as possible for two reasons:
Large heaps make full GC take longer. If you have PermGen scanning enabled, a large PermGen space will take longer to GC as well.
JBoss AS on Linux can leave unused I/O handles open long enough that Linux cleans them up forcibly, blocking every process on the machine until the clean-up completes (which can take over a minute!). If you forget to turn off the hot-deploy scanner, this happens much more frequently.
This would happen maybe once a week in my application until I:
decreased -Xms to a point where JBoss AS startup was beginning to slow down
decreased -Xmx to a point where full GCs happened more frequently, so the Linux I/O handle clean up stopped
For developers I think it's fine to increase PermGen, but in production you probably want to use only what is necessary to avoid long GC pauses.
After analyzing a lightly loaded web application running in Tomcat with the JMX console, it turns out that "PS Old Gen" is growing slowly but steadily. It starts at 200 MB and grows by around 80 MB/hour.
CPU is not an issue - it runs at 0-1% on average - but somewhere it leaks memory, so the application will become unstable some days after deployment.
How do I find out which objects are allocated on the heap? Are there any good tutorials or tools you know of?
You could try jmap, one of the JDK Development Tools. You can use jhat with the output to walk heap dumps using your web browser.
See this answer for a short explanation.
This comes up quite often, so searching SO for those tools should turn up some alternatives.
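For concreteness, a hedged pair of commands (replace <pid> with your Tomcat pid; the file name is an example): dump the heap with jmap, then serve the dump with jhat and browse to http://localhost:7000:
jmap -dump:format=b,file=heap.hprof <pid>
jhat -port 7000 heap.hprof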
I've used the HeapAnalyzer tool from IBM's alphaWorks with good success. It takes the output of Java's heap profiler, hprof, and analyzes it to show you the most likely memory leaks.
You can use the NetBeans profiler. It has two modes: launching Tomcat with profiling directly from the IDE (for localhost), or remote profiling using a provided agent JAR and some run configuration on the server.
I used it in a project for a memory leak and it was useful.
See my answer here:
Strategies for the diagnosis of Java memory issues
And there are also tips here:
How can I figure out what is holding on to unfreed objects?
What you are seeing is normal, unless you can prove otherwise.
You do not need to analyze the heap if the extra "consumed space" disappears again whenever a GC runs in the old generation.
At some point, when the used space reaches your maximum heap size, you will observe a pause caused by the default GC, after which the used memory should drop considerably. Only if it does not go down after a GC should you look into what is still holding on to those objects.
JRockit Mission Control can analyze memory leaks while connected to JVM. No need to take snapshots all the time. This can be useful if you have a server with a large heap.
Just hook the tool up to the JVM and it will give you a trend table where you can see which types of objects are growing the most, and you can then explore the references to those objects. You can also get allocation traces while the JVM is running, so you can see where in the application the objects are allocated.
You can download it here for free