Grails application hogging too much memory - java

Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384M of memory
JAVA_OPTS: tried many combinations, including the following
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I created a blank Grails application, i.e. simply by running grails create-app, and then packaged it as a WAR
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236M
and used memory is about 156M
When I deploy my "blank" application, memory consumption spikes to 360M, and the Tomcat instance is killed as soon as it uses up all the free memory
As you can see, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine
The memory consumption of the Java process shot from 32M to 107M, but it did not crash and it stayed within acceptable limits
So the hunt for answer continues... I wonder if something is wrong about my Linux box. Not sure what though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.

It is a false economy to try to run a long-running Java-based application in the minimum possible memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out from a memory profiler, or by examining the GC logging.
Then set the Java heap size to be (say) twice the measured working set size or more. (This is the point I was trying to make above.)
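For example, a minimal sketch of JAVA_OPTS for the measurement phase, assuming a Java 6-era HotSpot JVM (the sizes and log path are illustrative, not a recommendation):
export JAVA_OPTS='-Xms512m -Xmx512m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log'
The heap occupancy left after each full collection in the log approximates the working set you then double.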
Actually, there is another possible cause for your problems. Even though you are telling Java to use heaps of a given size, it may be that it is unable to do this. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that the OS is running on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap and permgen numbers together. There is the memory used by the executable + DLLs, memory used for I/O buffers, and possibly other stuff.
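On Linux, a hedged way to check both possibilities from a shell (standard commands; output formats vary by distro):
ulimit -v                       # per-process virtual memory cap in KB; 'unlimited' means no cap
free -m                         # real memory and swap actually available to the machine
cat /proc/user_beancounters     # OpenVZ-style VPSes only; a non-zero failcnt means the host refused an allocation
The last check matters on a VPS, where the host can deny memory requests even when the guest's own numbers look fine.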

384MB is pretty small. I'm running a small Grails app in a 512MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200MB. I'm running a 32-bit Linux and a 32-bit JDK, though - no sense wasting all that memory on 64-bit pointers if you don't have access to much memory anyway.

Can you try deploying a Tomcat monitoring webapp, e.g. psiprobe, and see where the memory is being used?
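A hedged sketch of doing that, assuming a standard Tomcat layout (the WAR name and paths are placeholders; menu labels may differ by psiprobe version):
cp probe.war $CATALINA_HOME/webapps/    # psiprobe deploys like any other webapp
# then browse to http://yourhost:8080/probe and look at the memory utilization pages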

Related

Tomcat Service Memory Keeps Growing, but JVM Memory is stable

I've asked for help on this before, here, but the issue still exists and the previously accepted answer doesn't explain it. I've also read most every article and SO thread on this topic, and most point to application leaks or modest overhead, neither of which I believe explain what I'm seeing.
I have a fairly large web service (application alone is 600MB), which when left alone grows to 5GB or more as reported by the OS. The JVM, however, is limited to 1GB (Xms, Xmx).
I've done extensive memory testing and have found no leaks whatsoever. I've also run Oracle's Java Mission Control (basically the JMX console) and verified that actual JVM use is only about 1GB. So that means about 4GB are being consumed by Tomcat itself, native memory, or the like.
I don't think JNI, etc. is to blame, as this particular installation has been mostly unused. All it's been doing is periodically checking the database for work requests and periodically monitoring its resource consumption. Also, this wasn't a problem until recently, after years of use.
The JMX Console does report a high level of fragmentation (70%). But can that alone explain the additional memory consumption?
Most important, though, is not so much why this is happening, but how I can fix or configure things so that it stops happening. Any suggestions?
Here are some of the details of the environment:
Windows Server 2008 R2 64-bit
Java 1.7.0.25
Tomcat 7.0.55
JvmMX, JvmMs = 1000 (1GB)
Thread count: 60
Perm Gen: 90MB
Also seen on:
Windows 2012 R2 64-bit
Java 1.8.0.121
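One hedged way to see where the non-heap memory goes, at least on the Java 8 installation (Native Memory Tracking requires JDK 8; the PID is a placeholder):
# add to the Tomcat service's JVM options and restart:
-XX:NativeMemoryTracking=summary
# then ask the running JVM for a native-memory breakdown:
jcmd 1234 VM.native_memory summary
The summary splits reserved and committed memory into heap, class, thread, code and internal sections, which should show whether the extra ~4GB is really native.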

Does the application server affect Java memory usage?

Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is.. If the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the server ships with its own VM - i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another - there are bound to be differences. One relevant area that does matter is class loading: some app servers handle class unloading on redeploy better than others. Warm-starting the application after a configuration change can result in serious memory leaks due to class loading problems on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak - in any case, the cause is more important than a quick fix - what would you do if the 'new' container just happens to last an extra week etc? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments so that a heap dump is written when the error occurs.
-XX:+HeapDumpOnOutOfMemoryError
If the performance is just degrading until you restart the container manually, you should get into the routine of triggering periodic heap dumps. A timeline of dumps is often the most helpful, as you can see which object stores just grow over time.
To do this, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
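A minimal sketch of capturing and browsing a dump with the stock JDK tools (the PID and file name are placeholders):
jmap -dump:format=b,file=heap1.hprof 1234    # snapshot the live heap of process 1234
jhat heap1.hprof                             # serves a browsable explorer on http://localhost:7000
Taking one dump per day and comparing instance counts is usually enough to spot the object store that only ever grows.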
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?

Unexpected JVM behaviour on 64bit linux

I tried to compare my Java web app's behaviour on 32-bit Windows and 64-bit Linux.
When I view the memory usage via jconsole I see very different memory-usage graphs.
On Windows the app never touches 512m.
However, when I run on 64-bit Linux with a 64-bit VM, memory keeps increasing gradually and quickly reaches a peak of about 1000m, and I also get an OOME related to "GC overhead limit exceeded". On Linux, whenever I trigger a manual GC, usage drops back below 100m.
It's as if the GC doesn't run as well as it does on Windows.
On Windows the app runs better, even under more load.
How do I find the reason behind this?
I am using JDK 1.6.0_13,
min heap: 512m and max heap: 1024m.
EDIT:
Are you using the same JVM versions on both Windows and Linux?
Yes, 1.6.0_13.
Are you using the same garbage collectors on both systems?
I looked in jconsole and can see that the garbage collectors are different.
Are you using the same web containers on both systems?
Yes, Tomcat.
Does your webapp rely on native libraries?
Not sure. I use Tomcat + Spring + Hibernate + JSF.
Are there other differences in the configuration of your webapp on the two platforms?
No
What exactly was the error message associated with the OOME?
java.lang.OutOfMemoryError: GC overhead limit exceeded
How long does it take for your webapp to start misbehaving / reporting errors on Linux?
The difference in usage pattern is seen after I leave it running for, say, 3 hours. The error appears after a day or two, since by then average memory usage is around the 900 MB mark.
A 64-bit JVM will naturally use a lot more memory than a 32-bit JVM; that's to be expected (the internal pointers are twice the size, after all). You can't keep the same heap settings when moving from 32-bit to 64-bit and expect the same behaviour.
If your app runs happily in 512m on a 32bit JVM, there are no reasons whatsoever to use a 64bit JVM. The only rationale for doing that is to take advantage of giant heap sizes.
Remember, it's perfectly valid to run a 32-bit JVM on a 64-bit operating system. The two are independent choices.
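A quick way to confirm which data model a box is actually running (exact banner wording varies by build):
java -version    # a 64-bit HotSpot identifies itself with '64-Bit Server VM' in the banner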
There are too many unknowns to be able to explain this:
Are you using the same JVM versions on both Windows and Linux?
Are you using the same garbage collectors on both systems?
Are you using the same web containers on both systems?
Does your webapp rely on native libraries?
Are there other differences in the configuration of your webapp on the two platforms?
What exactly was the error message associated with the OOME?
How long does it take for your webapp to start misbehaving / reporting errors on Linux?
Also, I agree with @skaffman ... don't use a 64-bit JVM unless your application really requires it.
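One hedged way to make the comparison apples-to-apples, assuming Java 6 HotSpot on both machines: pin the collector and log GC explicitly, since 64-bit Linux is treated as a server-class machine and defaults to the parallel collector, while a 32-bit Windows client VM defaults to the serial one.
export JAVA_OPTS='-Xms512m -Xmx1024m -XX:+UseParallelGC -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log'
With identical flags on both systems, any remaining difference in the jconsole graphs points at the platform rather than at collector ergonomics.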

Alfresco Community on Tomcat starts very slow

We're currently testing out Alfresco Community on an old server (only 1GB of RAM). Because this is the Community version, we need to restart it every time we change the configuration (we're trying to add features like generating previews of DWG files, etc). However, restarting takes a very long time (about 4 minutes, I think). This is probably due to the limited amount of memory available. Does anybody know of any features or settings that can improve this restart time?
As with all performance issues there is rarely a magic bullet.
Memory pressure - the app is starting up, but the 512m heap is only just big enough to fit the application, and it is spending half of the startup time running GC.
Have a look at any of the following:
1. -verbose:gc
2. jstat -gcutil
3. jvisualvm - much nicer UI
You are trying to see how much time is being spent in GC; look for many full garbage collection events that don't reclaim much of the heap, i.e. 99% -> 95%.
Solution - more heap, nothing else for it really.
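For instance, a hedged jstat invocation against a running Tomcat (the PID is a placeholder):
jstat -gcutil 1234 5s    # sample GC utilisation of process 1234 every 5 seconds
Watch the O (old generation occupancy) and FGC/FGCT columns: frequent full collections while O stays near 100% are the signature of an undersized heap.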
You may want to try -XX:+AggressiveHeap in order to get the JVM to max out its memory usage on the box; the only trouble is that with only 1GB of memory it's going to be limited. List of all JVM options
Disk IO - if the box itself is not running at close to 100% CPU during startup (meaning 100% of a single core; startup is normally single-threaded), then there may be some disk IO the application is doing that is the bottleneck.
Use operating system tools such as Windows Performance Monitor to check for disk IO. It may be that it isn't the application causing the IO; it could be swap activity (page faulting).
Solution: either fix the app (not too likely), get faster disks or a faster computer, or add more physical memory to the box.
Two of the most common reasons why Tomcat loads slowly:
You have a lot of web applications. Tomcat takes some time to create the web context for each of those.
Your webapp has a large number of files in its web application directory. Tomcat scans the web application directories at startup.
Also have a look at the Java performance tuning whitepaper; further, I would recommend Lambda Probe (www.lambdaprobe.org/d/index.htm) to see if you are satisfied with your GC settings - it has nice realtime GC and memory tracking for Tomcat.
I myself have Alfresco running with example 4.2.6 from the Java performance tuning whitepaper:
4.2.6 Tuning Example 6: Tuning for low pause times and high throughput
Memory settings are also very nicely explained in that paper.
Kind regards, Mahatmanich

How is your JVM 6 memory setting for JBOSS AS 5?

I'm using an ICEfaces application that runs on JBoss; my current heap size is set to
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m
what is your recommendation to adjust memory parameters for JBOSS AS 5 (5.0.1 GA) JVM 6?
According to this article:
AS 5 is known to be greedy when it comes to PermGen. When starting, it often throws OutOfMemoryError: PermGen space.
This can be particularly annoying during development, when you are hot deploying an application frequently. In this case, JBoss QA recommends raising the permgen size and allowing class unloading and permgen sweeping:
-XX:PermSize=512m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
But this is more FYI, I'm not suggesting to apply this configuration blindly (as people wrote in comments, "if it ain't broken, don't fix it").
Regarding your heap size, always keep in mind: the bigger the heap, the longer the major GC. Now, when you say "it was definitely too small", I don't really know what this means (what errors, symptoms, etc). To my knowledge, a 1024m heap is actually pretty big for a webapp and should really be more than enough for most of them. Just beware of the major GC duration.
Heap: Start with 512 MB, and set the cap at a level you believe your app should never reach, but low enough not to make your server start swapping.
Permgen: That's usually stable enough once the app has loaded all the classes it uses. If you have tested the app and it works with 256 MB, then leave it so.
@wds: It's definitely not a good idea to set the heap maximum as high as possible, for two reasons:
Large heaps make full GC take longer. If you have PermGen scanning enabled, a large PermGen space will take longer to GC as well.
JBoss AS on Linux can leave unused I/O handles open long enough to make Linux clean them up forcibly, blocking all processes on the machine until it is complete (might take over 1 minute!). If you forget to turn off the hot deploy scanner, this will happen much more frequently.
This would happen maybe once a week in my application until I:
decreased -Xms to a point where JBoss AS startup was beginning to slow down
decreased -Xmx to a point where full GCs happened more frequently, so the Linux I/O handle clean up stopped
For developers I think it's fine to increase PermGen, but in production you probably want to use only what is necessary to avoid long GC pauses.
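Pulling those points together, a hedged starting point for JBoss AS 5's bin/run.conf (the sizes are illustrative, not a recommendation - measure before you change anything):
JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError"
A modest heap cap keeps major GC pauses short, and the dump-on-OOME flag costs nothing until you actually need the evidence.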
