What is the benefit of setting the -Xms parameter and making the initial memory larger than the default calculated one (64 MB in my case, according to the Java GC tuning guide:
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html#par_gc.ergonomics.default_size)?
Also, is there any benefit to setting both the initial and maximum memory to the same size?
Thanks.
The benefit is that there is a performance penalty when you use up enough of the heap that it has to be resized. If you set it initially to 64MB but it turns out your application under load needs 250MB, then when you get near 64MB the JVM will allocate more heap space, possibly move some objects around, and do other bookkeeping. This of course takes time.
When your application is under load, you want all resources dedicated to making it run, so this extra work can make the application slower to respond, or, in some instances, even crash if it runs out of memory before the heap is resized.
Sometimes when using a Java app, you'll see instructions like "set Xms and Xmx to the same value". This is done to avoid the resizing altogether, so that your application launches with its heap already as big as it will ever be.
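For example (a minimal sketch; the 512m figure and myapp.jar are placeholders, not values from the question), launching with the heap fixed at one size would look like:
java -Xms512m -Xmx512m -jar myapp.jar
With both flags equal, the heap is committed at its final size from the start, so the JVM never has to pause to grow it.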
The linked article explains it clearly enough:
Default values: -Xms 3670k, -Xmx 64m
[...]
Large server applications often experience two problems with these defaults. One is slow startup, because the initial heap is small and must be resized over many major collections. A more pressing problem is that the default maximum heap size is unreasonably small for most server applications. The rules of thumb for server applications are:
Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. However, the virtual machine is then unable to compensate if you make a poor choice.
In general, increase the memory as you increase the number of processors, since allocation can be parallelized.
You may also be interested in this discussion of the problem.
What is the benefit of setting the -Xms parameter and making the initial memory larger than the default calculated one?
If the initial heap is small and must be resized over many major collections, the startup will be slow.
Also, is there any benefit to setting both the initial and maximum memory to the same size?
Setting -Xms and -Xmx to the same value gives you predictability. This is especially important when sizing the JVM during performance tuning. But the JVM won't be able to compensate for a poor choice.
I tend to use the same values for production servers (which are tuned during performance testing).
If it is normal for your application to require more than 64 MB of heap memory, setting -Xms to a larger value should improve the application's performance somewhat, because the VM will not have to request additional memory as many times.
In a production system I consider setting Xms and Xmx to the same value sensible. It's basically saying "this is the amount of heap memory the VM can get and I'm dedicating it right away".
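If you want to see what your -Xms/-Xmx flags actually translate to inside the VM, a tiny sketch like the following (the class name is mine, not from the question) prints the currently committed heap and the ceiling the VM will grow it to:
// HeapInfo.java - run, for example, with: java -Xms256m -Xmx256m HeapInfo
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("committed heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("max heap:       " + rt.maxMemory() / mb + " MB");
    }
}
With -Xms and -Xmx set to the same value, the two numbers should be nearly identical from startup onwards.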
Related
I have a Java application, based on Java 8 OpenJDK. It runs in a Docker container that has a memory limit of 20GB.
The -Xms and -Xmx settings for Tomcat are set as follows:
-Xms = 60% of the container memory (as dictated by the cgroup) - so 12GB
-Xmx = 80% of the container memory (as dictated by the cgroup) - so 16GB
This leaves 4GB free on the container, which is usually fine, but sometimes under load I see the Docker container exit (and the Java process get killed) with an OOM because container memory usage has exceeded 20GB.
I know that the -Xmx setting is for the heap, not the whole Java process and JVM, and therefore I would expect the 4GB of 'headroom' on the container to be enough, but it appears not.
I know all use cases are wildly different, but my question is whether, in general terms, this -Xmx setting is too high for a container whose memory limit is 20GB.
I was toying with the idea of using the MaxRAM setting, which, again, I know only dictates the heap memory, but I'm unsure whether it would have any positive impact.
Is it generally the case that you use either MaxRAM or -Xmx, or is there any benefit to setting both?
If I were to use MaxRAM instead of -Xmx, how would java allocate memory to the heap? Is there a simple algorithm for this, for example, 50% of the MaxRAM setting? Will java manage memory any more efficiently doing it that way?
whether, in general terms, this -Xmx setting is too high for a container whose memory limit is 20GB
It depends. An application can use less RAM than the specified -Xmx, as well as 2x or 3x more RAM than the specified -Xmx. I've seen many applications of both kinds.
See what takes memory in a Java process.
Instead of trying to guess an appropriate heap size based on the given container limit (the two can be completely unrelated), why not set -Xmx to a value that is comfortable for your particular application? I mean, if your application works fine with just an 8 GB heap, there is no need to give it more, even if the container permits it.
Is it generally the case that you use either MaxRAM or -Xmx, or is there any benefit to setting both?
Setting both is meaningless, as Xmx overrides MaxRAM.
If I were to use MaxRAM instead of -Xmx, how would Java allocate memory to the heap? Is there a simple algorithm for this, for example, 50% of the MaxRAM setting?
See What is the difference between xmx and MaxRAM JVM parameters?
Will java manage memory any more efficiently doing it that way?
No. MaxRAM only affects calculation of the heap size and the default garbage collector (when not explicitly specified).
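If you want to check what heap the JVM would derive from a given MaxRAM on your own setup, one way (assuming a Unix-like shell; the 16g value is just an example) is to print the final flag values:
java -XX:MaxRAM=16g -XX:+PrintFlagsFinal -version | grep -i maxheapsize
Run it again with an explicit -Xmx added and you can see that the -Xmx value wins.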
I'm getting a few exceptions about running out of memory from my desktop application, which is wrapped using launch4j. Specifically:
OutOfMemoryError: Java heap space
Since I don't know how much RAM is in those computers, what's the appropriate strategy to minimize this sort of errors?
Are there any dangers in passing a humongous -Xmx, such as -Xmx64g? I understand my application might run out of actual physical RAM, but that's a problem the user can improve by adding more RAM, whereas having a limited maximum heap is not something they can do anything about.
But that makes me think: why isn't -Xmx essentially infinite, leaving it up to the OS and the user to kill the application if it tries to use more RAM than is available?
-Xmx is an important memory tuning parameter. Generally, more heap space is better, but it's a very situational setting, so it's up to the user to decide how much is appropriate. Obviously there are problems with trying to use a larger heap than the system has memory for, as you will run into swapping. If unspecified, the JVM will use up to 1/4 of the system RAM by default.
Java will keep claiming memory up to the maximum, so you need to tell it where to stop. If there were no upper limit, the heap would just keep getting bigger and bigger. The JVM doesn't clear unneeded objects from memory until the heap gets full, so an "unlimited" size would mean the heap never gets full, just keeps growing forever, and unneeded memory never gets released.
While bigger is typically better for the heap, this isn't a hard rule, and it will require testing and tuning to find the best amount. It will help throughput, but it can hurt latency: the bigger the heap, the longer GC pause times will be, because there is more memory to clear.
Another factor is that if you want more than 32GB of heap, you need to give at least 40-42GB. Something in the middle, like 36GB, will actually hurt performance and give you less usable memory. This is because for heaps up to about 32GB the JVM is able to use compressed object pointers, but it can't do that for larger heaps.
Note that just adding more heap isn't necessarily the solution to an out-of-memory error. It can be just as likely that an improvement to the program to use less memory is feasible, and if it is, that's typically the preferred solution. Especially if your program is leaking memory somehow, more heap will just make it take longer before you run out of memory.
I have an application that can be executed when I use the JVM option -Xmx65m. Although it is running fine, I want to allow the application to consume more memory, because it has some features that require it. The problem is that if I increase the -Xmx option, the JVM will allocate more memory even to run the features it can handle with only 65 MB.
Is it possible to configure the JVM to request more memory from the OS only when it is running out of options and is about to throw an OutOfMemoryError?
Add both the min and max memory settings, so that the JVM can start with the minimum required memory and keep allocating more as and when it is required.
-Xms65m -Xmx512m
Hope this helps.
The JVM reserves the maximum heap size as virtual memory on startup, but it only uses the amount it needs (even if you set a minimum size, it might not use that much). If it uses a large amount of memory but doesn't need it any more, it can give it back to the OS (but often doesn't, AFAIK).
Perhaps you are not seeing a gradual increase because your maximum is so small. Try it with a maximum of -mx1g and watch in jvisualvm how the heap size grows.
Is it possible to configure the JVM to request more memory from the OS only when it is running out of options and is about to throw an OutOfMemoryError?
As the JVM gets close to finally running out of space, it runs the GC more and more frequently, and application throughput (in terms of useful work done per CPU second) falls dramatically. You don't want that happening if you can avoid it.
There is one GC tuning option that you could use to discourage the JVM from growing the heap. The -XX:MinHeapFreeRatio option sets the "minimum percentage of heap free after GC to avoid expansion". If you reduce this from the default value of 40% to (say) 20% the GC will be less eager to expand the heap.
The down-side is that if you reduce -XX:MinHeapFreeRatio, the JVM as a whole will spend a larger percentage of its time running the garbage collector. Go too far and the effect could possibly be quite severe. (Personally, I would not recommend changing this setting at all ... )
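As a concrete, illustrative example of the setting being discussed (the values and app.jar are mine, not from the question):
java -Xms65m -Xmx512m -XX:MinHeapFreeRatio=20 -jar app.jar
This keeps the 512 MB ceiling available but tells the collector not to expand the heap unless less than 20% of it is free after a GC.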
I have a JVM server on my machine. Now I want two app servers of mine sitting on the same machine, but I want the standby one to have a really low amount of memory allocated with -Xmx because it's passive. Once the main (active) server goes down, I want to allocate more memory to my passive server, which is already up, without restarting it. (I can't have them both with a large -Xmx; note that they would consume memory at startup, and I can't allow the possibility of an OutOfMemoryError.)
So I want the passive server to start with a low -Xmx, and once the active one goes down, I want the passive one to receive a much higher -Xmx.
Is there a way for me to achieve that?
Thanks
It would be nice, but as far as I know it's not an option with the Sun provided JVMs.
The -Xmx option specifies the maximum heap size; it's there to prevent the JVM from consuming the entire machine's free memory. Setting it higher doesn't force the JVM to allocate all of that memory. Why not just set it to a very high number and let the JVM grow into it over time?
To make sure your JVM doesn't start off with too little memory (creating lots of pauses as it grows the memory to the required size), adjust Xms to the size you want to allocate for the JVM at startup.
The short answer is unless your particular JVM allows for these values to be changed after initialization, you cannot (I believe this is the case for HotSpot).
However, you may be able to accomplish your goals without changing Xmx on the fly. For example, you could use a small -Xms setting, but keep -Xmx relatively high. If the passive server is not using much memory / generating garbage while still serving as the backup, then memory will stay near the Xms value. However, once the backup server takes over it would be allowed to expand allocated memory up to the Xmx value on an as-needed basis.
See java (windows) or java (*nix) as appropriate (though -Xms and -Xmx have the same general meaning on all platforms).
You don't need to adjust Xmx on the standby instance as long as it's not doing anything (or much of anything) because it should stay close to the value you set with Xms until it starts doing real work.
The Xmx switch governs the maximum amount of heap size the Java instance may consume. Xms governs the startup amount.
If you set Xms small on your standby instance and Xmx to whatever maximum your program needs, and then switch over to the Standby instance (killing the regular instance) it should work out fine.
It may be necessary to actually stop/kill the regular Java process, depending on your available memory, in order for the standby process to allocate all of the heap it needs as it moves from the initial lower heap size toward its maximum.
For the JVM to fill all the heap, you'd have to generate enough objects that survive the young generation collection. That would be unlikely on the lightly-loaded stand-by server.
To improve your chances of catching all the garbage in the young generation, configure your young generation heap accordingly: larger sizes, more generations before objects age out. This is a compromise between confining your standby server to young generation and the collection profile you need in your primary server.
Update: the new G1 collector uses different configuration options. Please look at http://www.oracle.com/technetwork/tutorials/tutorials-1876574.html to learn more. The option most relevant to your case would be
-XX:InitiatingHeapOccupancyPercent=45 - Percentage of the (entire) heap occupancy to start a concurrent GC cycle. It is used by G1 to trigger a concurrent GC cycle based on the occupancy of the entire heap, not just one of the generations. A value of 0 denotes 'do constant GC cycles'. The default value is 45 (i.e., 45% full or occupied).
IOW, a concurrent GC cycle will start when the current heap (the min heap size initially) is 45% used up. Your lightly loaded server should never leave the min heap size (unless it produces relatively long-lived objects, in which case see -XX:MaxTenuringThreshold).
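If you go the G1 route, a sketch of the relevant flags (the numbers and jar name are illustrative, not recommendations for your setup) might be:
java -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45 -XX:MaxTenuringThreshold=10 -Xms256m -Xmx4g -jar server.jar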
Does the Sun JVM slow down when more memory is available and used via -Xmx? (Assumption: The machine has enough physical memory so that virtual memory swapping is not a problem.)
I ask because my production servers are to receive a memory upgrade. I'd like to bump up the -Xmx value to something decadent. The idea is to prevent any heap space exhaustion failures due to my own programming errors that occur from time to time. Rare events, but they could be avoided with my rapidly evolving webapp if I had an obscene -Xmx value, like 2048mb or higher. The application is heavily monitored, so unusual spikes in JVM memory consumption would be noticed and any flaws fixed.
Possible important details:
Java 6 (running in 64-bit mode)
4-core Xeon
RHEL4 64-bit
Spring, Hibernate
High disk and network IO
EDIT: I tried to avoid posting the configuration of my JVM, but clearly that makes the question ridiculously open ended. So, here we go with relevant configuration parameters:
-Xms256m
-Xmx1024m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:MaxGCPauseMillis=1000
-XX:MaxGCMinorPauseMillis=1000
-XX:+PrintGCTimeStamps
-XX:+HeapDumpOnOutOfMemoryError
By adding more memory, it will take longer for the heap to fill up. Consequently, it will reduce the frequency of garbage collections. However, depending on how mortal your objects are, you may find that how long it takes to do any single GC increases.
The primary factor for how long a GC takes is how many live objects there are. Thus, if virtually all of your objects die young and, once you are established, none of them escape the young generation, you may not notice much of a change in how long a GC takes. However, whenever you have to collect the tenured heap, you may find everything halting for an unreasonable amount of time, since most of those objects will still be around. Tune the sizes accordingly.
If you just throw more memory at the problem, you will get better throughput in your application, but your responsiveness can go down if you're not on a multi-core system using the CMS garbage collector. This is because fewer GCs will occur, but they will have more work to do. The upside is that you will get more memory freed up by your GCs, so allocation will continue to be very cheap, hence the higher throughput.
You seem to be confusing -Xmx and -Xms, by the way. -Xms just sets the initial heap size, whereas -Xmx is your max heap size.
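One way to see how a bigger -Xmx actually changes GC behaviour on your workload (rather than guessing) is to turn on GC logging, which on a Java 6 HotSpot VM like yours could look like this (gc.log is a placeholder path):
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
Compare the log from a run with -Xmx1024m against one with a larger value: you should see fewer collections, each potentially taking longer.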
More memory usually gives you better performance in garbage collected environments, at least as long as this does not lead to virtual memory usage / swapping.
The GC only tracks references, not memory per se. In the end, the VM will allocate the same number of (mostly short-lived, temporary) objects, but the garbage collector needs to be invoked less often; the total amount of garbage collection work will therefore not increase, and may even decrease, since a larger heap can also help caching mechanisms that use weak references.
I'm not sure if there is still a server and a client VM for 64 bit (there is for 32 bit), so you may want to investigate that also.
According to my experience, it does not slow down, BUT the JVM tries to cut back to Xms all the time and stay at or close to the lower boundary. So if you can afford it, bump Xms as well. Sun recommends setting both to the same size. Add something like -XX:NewSize=512m (just a made-up number) to avoid the costly pile-up of old data in the old generation, which leads to longer/heavier GCs along the way. We are running our web app with a 700 MB NewSize because most data is short-lived.
So, bottom line: I do not expect a slowdown, but put more of your memory to work. Set a larger new-generation size and set Xms equal to Xmx to lower the stress on the GC, because it won't keep trying to cut back to the Xms limit.
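As a sketch of the kind of settings described above (the sizes are examples, not tuned values for your app):
-Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m
That pins the heap at one size and gives half of it to the young generation, so short-lived objects are collected there instead of piling up in the old generation.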
Increasing -Xmx typically will not help your performance/throughput.
Theoretically there could be longer "stop the world" phases but in practice with the CMS that's not a real problem.
Of course you should not set -Xmx to some insane value like 300Gbyte unless you really need it :)