We're currently testing Alfresco Community on an old server (only 1 GB of RAM). Because this is the Community version, we need to restart it every time we change the configuration (we're trying to add features like generating previews of DWG files, etc.). However, restarting takes a very long time (about 4 minutes, I think). This is probably due to the limited amount of memory available. Does anybody know of features or settings that can improve this restart time?
As with all performance issues, there is rarely a magic bullet.
Memory pressure - the app is starting up, but the 512 MB heap is only just big enough to fit the application, so it spends half of the startup time running GC.
Have a look at any of the following:
1. -verbose:gc
2. jstat -gcutil
3. jvisualvm - much nicer UI
You are trying to see how much time is being spent in GC; look for many full garbage collection events that don't reclaim much of the heap, i.e. 99% -> 95% (see the sketch below).
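For example, a minimal sketch of how you might wire this up for Tomcat (the log path and PID are placeholders; the flags are standard HotSpot options of that era):

export JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log"
jstat -gcutil <tomcat-pid> 1000

The second command samples GC utilisation once per second; if the FGC (full GC count) column climbs steadily while O (old generation occupancy) stays near 100%, the heap is too small.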
Solution - more heap, nothing else for it really.
You may want to try -XX:+AggressiveHeap to get the JVM to max out its memory usage on the box; the only trouble is that with only 1 GB of memory it's going to be limited. See the list of all JVM options.
Disk IO - if the box itself is not running at close to 100% CPU during startup (assuming 100% of a single core; startup is normally single-threaded), then there may be some disk IO that the application is doing that is the bottleneck.
Use operating system tools such as Windows Performance Monitor to check for disk IO. It may be that it isn't the application causing the IO; it could be swap activity (page faulting). See the example below.
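As a hedged example on Windows, typeperf can sample the relevant counters from the command line (counter names assume an English-language Windows; adjust for your locale):

typeperf "\Memory\Pages/sec" "\PhysicalDisk(_Total)\% Disk Time" -si 1

Sustained high Pages/sec alongside high disk time during startup points at paging rather than application IO.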
Solution: either fix the app (not too likely), get faster disks or a faster computer, or add more physical memory to the box.
Two of the most common reasons why Tomcat loads slowly:
You have a lot of web applications. Tomcat takes some time to create the web context for each of those.
Your web application has a large number of files in its directory. Tomcat scans the web application directories at startup.
Also have a look at the Java performance tuning whitepaper. Further, I would recommend Lambda Probe (www.lambdaprobe.org/d/index.htm) to see if you are satisfied with your GC settings; it has nice real-time GC and memory tracking for Tomcat.
I myself have Alfresco running with example 4.2.6 from the Java performance tuning whitepaper:
4.2.6 Tuning Example 6: Tuning for low pause times and high throughput
Memory settings are also very nicely explained in that paper.
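For reference, and as a sketch only (double-check the whitepaper for the exact flags example 4.2.6 uses; the heap sizes here are my own placeholder values, not Alfresco recommendations), a low-pause/high-throughput configuration of that era typically combined the CMS collector with parallel young-generation collection:

export JAVA_OPTS='-Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled'

On a 1 GB box you would keep the heap modest so the OS and other processes aren't pushed into swap.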
Kind regards, Mahatmanich
Related
We have 6 Windows Server 2008/IIS 7.5 ColdFusion 9.0.2 servers behind a round-robin load balancer. Each server is allocated 2 GB for ColdFusion; the servers have 6 GB of memory in total. Garbage collection seems to be an issue across all the servers, but I'm not sure how to resolve it without recycling ColdFusion.
The graph below is the AVG/MAX memory for our 6 servers over the past few days. Each day the AVG memory increases. Eventually, the servers start queuing requests (because they can't process them fast enough) and we have no choice but to recycle.
The data in the graph was taken from 1m snapshots of FusionReactor across all 6 servers.
Our servers are using the following command line in jvm.config for ColdFusion:
java.args=-Xmx2G -server -Xms2g -Dsun.io.useCanonCaches=false
-XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.rootDir={application.home}/ -Djava.security.policy={application.home}/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/coldfusion.policy
-Djava.security.auth.policy={application.home}/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/neo_jaas.policy
I'm not sure if changing garbage collection parameters is the solution, and I know nothing about GC especially as it relates to ColdFusion.
I'm aware this may have something to do with the code on the site. It's a portal (something like fusebox) that hosts many different applications inside of it. There are NOT many uses of cfobject calls in the portal.
This is similar to this question: Coldfusion OutOfMemoryError (CF9/Wheels)
But let me highlight the ones that are relevant for this:
Make sure you are on at least CF 9.01hf4 or 9.02hf1 and run ColdFusion on a recent Java (see ColdFusion 9.01 on Java 7)
Bump up the permanent generation: -XX:MaxPermSize=512m
Use -XX:+UseG1GC (see Is JDK 6u14 Garbage First (G1) garbage collector, suitable for JRun?)
Make sure that the JVM can use 4 GB
Every 100 to 1000 iterations of a heavy loop, explicitly request a garbage collection (to the JVM this is only ever a suggestion)
Make your function silent
Make sure that variables in functions are scoped to var or local
Consider ORM
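Putting the JVM-level items from this list together, a hedged sketch of how the revised jvm.config line might look (this assumes a 64-bit JVM on the 6 GB servers described above; the coldfusion.rootDir and security-policy arguments from the original line are unchanged and abbreviated here as "..."):

java.args=-server -Xms4g -Xmx4g -XX:MaxPermSize=512m -XX:+UseG1GC -Dsun.io.useCanonCaches=false ...

Note that G1 replaces the collector chosen by -XX:+UseParallelGC, so the original parallel-GC flag is dropped rather than combined with it.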
Let's say I have a very large Java application that's deployed on Tomcat. Over the course of a few weeks, the server will run out of memory, application performance is degraded, and the server needs a restart.
Obviously the application has some memory leaks that need to be fixed.
My question is: if the application were deployed to a different server, would there be any change in memory utilization?
Certainly the services offered by the application server might vary in their memory utilization, and if the server includes its own unique VM - i.e., if you're using J9 or JRockit with one server and Oracle's JVM with another - there are bound to be differences. One relevant area that does matter is class loading: some app servers handle it better than others. Warm-starting the application after a configuration change can result in serious memory leaks due to class-loading problems on some server/VM combinations.
But none of these are really going to help you with an application that leaks. It's the program using the memory, not the server, so changing the server isn't going to affect much of anything.
There will probably be a slight difference in memory utilisation, but only in as much as the footprint differs between servlet containers. There is also a slight chance that you've encountered a memory leak with the container - but this is doubtful.
The most likely issue is that your application has a memory leak. In any case, the cause is more important than a quick fix: what would you do if the 'new' container just happened to last an extra week? Moving the problem rarely solves it...
You need to start analysing the application's heap memory to locate the source of the problem. If your application is crashing with an OOME, you can add this to the JVM arguments so a heap dump is written on failure (note the plus sign; -XX:- would disable the option):
-XX:+HeapDumpOnOutOfMemoryError
If the performance just degrades until you restart the container manually, you should get into the routine of triggering periodic heap dumps (see the sketch after the tool list below). A timeline of dumps is often the most help, as you can see which object populations just grow over time.
To do this, you'll need a heap analysis tool:
JHat or IBM Heap Analyser or whatever your preference :)
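For the periodic dumps themselves, a minimal sketch assuming a HotSpot JDK and that you know the Tomcat process ID (the output path is a placeholder):

jmap -dump:live,format=b,file=/tmp/heap-YYYYMMDD.hprof <tomcat-pid>

Run this from a scheduled job every few hours; comparing which object populations grow between successive dumps in your heap analyser usually makes the leak obvious.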
Also see this question:
Recommendations for a heap analysis tool for Java?
Update:
And this may help (for obvious reasons):
How do I analyze a .hprof file?
We have developed an application using Java 6 on 32-bit Windows (dual core & 3 GB RAM).
If we install it on 64-bit Windows, will it perform better because of the resource advantages of 64-bit (same OS, different bitness)? The 64-bit machine has a quad-core processor and more than 4 GB of RAM. Is there any difference for the JVM between 32-bit and 64-bit?
Thank you in advance for your feedback.
Extra info
I am building a Security Information and Event Management (SIEM) system - log management.
We have 4 important parts:
Collector - to collect logs from devices/systems,
Aggregator - to aggregate the syslogs into metadata for reporting,
Real-Time Monitoring - to display real-time analysis reports/charts and dashboards that must refresh every second,
GUI - a Struts2 app that runs the web GUI, log analytics, backup and other things.
So far the most CPU and memory are used by 1-Collector, 2-Real-Time Monitoring, 3-Aggregator.
Right now, on 32-bit, the collector can receive up to 2,000 logs per second; beyond that it crashes with a heap memory error. So we use Tanuki Software's service wrapper to watch the memory usage and automatically restart the collector service once it detects heap exhaustion.
Our objective is to increase the event rate from 2,000 logs per second to the maximum possible by using the advantages of 64-bit.
For GC we let Java handle things automatically; more important is that we can process more logs per second without any problems.
Switching to a 64-bit JVM doesn't guarantee any performance difference. You will, however, see a huge difference in the amount of RAM that can be allocated: on 32-bit Windows, the maximum heap size maxed out at around 1.6 GB.
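For instance (the sizes are illustrative, not a recommendation), on a 64-bit JVM with more than 4 GB of RAM you could give the collector a heap that is impossible on 32-bit Windows:

java -Xms4g -Xmx4g -jar collector.jar

Here collector.jar is a placeholder for however your collector service is actually launched; with the Tanuki wrapper you would set the equivalent wrapper.java.maxmemory property instead.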
If you see a lot of swapping with your application on the 32-bit machine, then switching to the 64-bit machine and adding sufficient RAM is likely to improve your performance. You might also be able to make design choices that favor faster, but more memory hungry algorithms where such choices exist.
As of this writing, you will probably not see significant difference between running your app on a 32-bit JVM and a 64-bit JVM on the exact same hardware. Eventually, support for 32-bit operating systems and JVMs will probably be discontinued, but that's a different concern than performance.
I strongly recommend you start out by profiling your app first to see where your performance hot spots are.
It's a common misconception that 64-bit automatically means better performance than 32-bit. See e.g. this JVM faq and this MS Windows 7 FAQ.
It really depends on the nature of your application and where your performance bottlenecks are.
If you have relatively un-tuned garbage collection, and your application is latency sensitive (i.e. must respond to a user request such as an http request quickly), adding more memory can actually worsen your GC pauses.
Is your application multi-threaded, as most web servers are? If so, going from 2 to 4 cores will very likely help if you don't have significant locking / contention issues.
If you look into GC tuning, you might want to try parallel GC on the 4-core CPU (see the flags sketched below). This can significantly reduce GC pause times while incurring some extra overhead. For a latency-sensitive app I worked on, it was definitely worth it.
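A hedged sketch of the flags involved on a Java 6 HotSpot JVM; these are alternatives, so pick one line, and measure before and after since the benefit depends entirely on your workload:

-XX:+UseParallelGC -XX:+UseParallelOldGC
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC

The first pair favours throughput with parallel collections; the second favours lower pause times at some throughput cost.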
Please feel free to reply with more info - we could use some context on your app, its workload, in-memory working set, etc.
Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384 MB of memory
JAVA_OPTS: tried many combinations, including the following
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I created a blank Grails application, i.e. simply by running grails create-app, and then packaged it as a WAR.
I am running Tomcat on a VPS Server
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236 MB and the used memory is about 156 MB.
When I deploy my "blank" application, the memory consumption spikes to 360 MB, and finally the Tomcat instance is killed as soon as it uses up all the free memory.
As you have seen, my app is as light as it can be.
I am not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed down to this scenario which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine
The memory consumption of the Java process shot from 32 MB to 107 MB, but it did not crash and remained within acceptable limits.
So the hunt for answer continues... I wonder if something is wrong about my Linux box. Not sure what though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.
It is a false economy to try to run a long-running Java-based application in the minimum possible memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out from a memory profiler, or by examining the GC logging.
Then set the Java heap size to (say) twice the measured working-set size or more. (This is the point I was trying to make above; a concrete sketch follows.)
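To make that concrete with made-up numbers: if the steady-state working set measured around 120 MB, the resulting settings might look like this (values are assumptions for illustration, not Grails recommendations):

export JAVA_OPTS='-server -Xms256M -Xmx256M'

Fixing -Xms equal to -Xmx avoids heap-resize pauses at the cost of committing the memory up front; on a 384 MB VPS that is admittedly tight, which is rather the point of this answer.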
Actually, there is another possible cause for your problems. Even though you are telling Java to use heaps of a given size, it may be that it is unable to do this. When the JVM requests memory from the OS, there are a couple of situations where the OS will refuse.
If the machine (real or virtual) that you are running the OS on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
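On Linux (the OP's box is CentOS), a quick way to check for such a per-process limit is ulimit; note that VPS containers sometimes impose memory caps outside the guest OS, which this command will not show:

ulimit -v     # maximum virtual memory per process, in KB; 'unlimited' means no cap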
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap and permgen numbers together. There is the memory used by the executable and DLLs, memory used for I/O buffers, and possibly other things.
384MB is pretty small. I'm running a small Grails app in a 512MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200MB. I'm running a 32-bit Linux and JDK though, no sense wasting all that memory in 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a tomcat monitoring webapp e.g. psiprobe and see where the memory is being used?
I've got a somewhat dated Java EE application running on Sun Application Server 8.1 (aka SJSAS, precursor to Glassfish). With 500+ simultaneous users the application becomes unacceptably slow and I'm trying to assist in identifying where most of the execution time is spent and what can be done to speed it up. So far, we've been experimenting and measuring with LoadRunner, the app server logs, Oracle statpack, snoop, adjusting the app server acceptor and session (worker) threads, adjusting Hibernate batch size and join fetch use, etc but after some initial gains we're struggling to improve matters more.
Ok, with that introduction to the problem, here's the real question: If you had a slow Java EE application running on a box whose CPU and memory use never went above 20% and while running with 500+ users you showed two things: 1) that requesting even static files within the same app server JVM process was exceedingly slow, and 2) that requesting a static file outside of the app server JVM process but on the same box was fast, what would you investigate?
My thoughts initially jumped to the application server threads, both acceptor and session threads, thinking that even requests for static files were being queued waiting for an available thread, and that if the CPU/memory weren't really taxed then more threads were in order. But then we upped both the acceptor and session threads substantially and there was no improvement (see the thread-dump sketch below).
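One cheap way to test that theory, sketched here with a placeholder PID, is to take thread dumps while the slowness is happening and see what the request threads are actually doing:

kill -3 <appserver-pid>     # Solaris/Linux; the dump goes to the server log
jstack <appserver-pid>      # alternative on JDK 5+

If most HTTP worker threads are BLOCKED on the same monitor or waiting on a backend socket, that points at contention rather than CPU or memory.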
Clarification Edits:
1) Static files should be served by a web server rather than an app server. I am using the fact that in our case this (unfortunately) is not the configuration so that I can see the app server performance for files that it doesn't execute -- therefore excluding any database performance costs, etc.
2) I don't think there is a proxy between the requesters and the app server but even if there was it doesn't seem to be overloaded because static files requested from the same application server machine but outside of the application's JVM instance return immediately.
3) The JVM heap size (Xmx) is set to 1GB.
Thanks for any help!
SunONE itself is a pain in the ass. I had the very same problem, and you know what? Simply redeploying the same application to WebLogic reduced the memory and CPU consumption by about 30%.
SunONE is a reference-implementation server and shouldn't be used for production (I don't know about Glassfish).
I know this answer doesn't really help, but I've noticed considerable pauses even in very simple operations, such as getting a bean instance from a pool.
Maybe trying to deploy JBoss or WebLogic on the same machine would give you a hint?
P.S. You shouldn't serve static content from under an application server (though I do it too sometimes, when CPU is abundant).
P.P.S. 500 concurrent users is quite a high load; I'd definitely put SunONE behind a caching proxy or an Apache instance that serves the static content.
After using a Sun performance monitoring tool, we found that the garbage collector was running every couple of seconds and that only about 100 MB of the 1 GB heap was being used. So we tried adding the following JVM options and, so far, this new configuration has greatly improved performance.
-XX:+DisableExplicitGC -XX:+AggressiveHeap
See http://java.sun.com/docs/performance/appserver/AppServerPerfFaq.html
Our lesson: don't leave JVM option tuning and garbage collection adjustments to the end. If you're having performance trouble, look at these settings early in your troubleshooting process.