I have a Java application, based on Java8 OpenJDK. It's running in a docker container which has a limit of 20GB.
The -Xms and -Xmx settings of Tomcat are set as follows:
-Xms = 60% of the container memory (as dictated by the cgroup) - so 12GB
-Xmx = 80% of the container memory (as dictated by the cgroup) - so 16GB
This leaves 4GB free on the container, which is usually fine, but sometimes under load I see the Docker container exit (and the Java process killed) with an OOM because container memory usage has exceeded 20GB.
I know that the -Xmx setting is for the heap, and not the whole Java process and JVM, so I would expect the 4GB 'headroom' on the container to be enough, but it appears not.
I know all use cases are wildly different, but my question is whether, in general terms, this -Xmx setting is too high for a container whose memory limit is 20GB.
I was toying with the idea of using the MaxRAM setting, which, again, I know only dictates the heap memory, but I'm unsure whether that would have any positive impact.
Is it generally the case that you use either MaxRAM or -Xmx, or is there any benefit to setting both?
If I were to use MaxRAM instead of -Xmx, how would Java allocate memory to the heap? Is there a simple algorithm for this, for example, 50% of the MaxRAM setting? Will Java manage memory any more efficiently that way?
whether, in general terms, setting the -Xmx setting is too high for a container whose memory limit is 20GB
It depends. An application can use less RAM than the specified -Xmx, as well as 2x or 3x more RAM than the specified -Xmx. I've seen many applications of both kinds.
See what takes memory in a Java process.
Instead of trying to guess an appropriate heap size based on the given container limit (the two can be completely unrelated), why don't you set -Xmx to a value that is comfortable for your particular application? I mean, if your application works fine with just an 8 GB heap, there is no need to give it more, even if the container permits it.
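For example, if you have established that an 8 GB heap is comfortable, you might simply pin the heap explicitly (the jar name and sizes below are illustrative):
java -Xms4g -Xmx8g -jar yourapp.jar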
Is it generally the case that you use either MaxRAM or -Xmx, or is there any benefit to setting both?
Setting both is meaningless, as -Xmx overrides MaxRAM.
If I were to use MaxRAM instead of -Xmx, how would Java allocate memory to the heap? Is there a simple algorithm for this, for example, 50% of the MaxRAM setting?
See What is the difference between xmx and MaxRAM JVM parameters?
Will java manage memory any more efficiently doing it that way?
No. MaxRAM only affects calculation of the heap size and the default garbage collector (when not explicitly specified).
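If you want to see what heap size the JVM would actually derive from a given MaxRAM value, one way (assuming a Linux shell; the 16g figure is illustrative) is to print the final flag values without running an application:
java -XX:MaxRAM=16g -XX:+PrintFlagsFinal -version | grep -i maxheapsize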
Related
How do I start a JVM with no maximum heap memory restriction, so that it can take as much memory as it can?
I searched if there is such an option, but I only seem to find the -Xmx and -Xms options.
EDIT:
Let's say I have a server with 4GB of RAM and it only runs my application. And, let's say I have another server with 32GB of RAM. I don't want to start my application with a 4GB memory limit, because the second machine should be able to handle more objects.
-XX:MaxRAMFraction=1 will auto-configure the max heap size to 100% of your physical RAM, or the limit imposed by cgroups if UseCGroupMemoryLimitForHeap is set.
OpenJDK 10 will also support a percentage-based option, MaxRAMPercentage, allowing more fine-grained selection (JDK-8186248). This is important for leaving some spare capacity for non-heap data structures to avoid swapping.
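On Java 8 (8u131 and later), the cgroup flag is still experimental and has to be unlocked first; a minimal sketch, just printing the version to verify the flags are accepted:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -version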
You can't (or at least, I don't know a JVM implementation that supports it). The JVM needs to know at start up how much memory it can allocate, to ensure that it can reserve a contiguous range in virtual memory. This allows - among others - for simpler reasoning in memory management.
If virtual memory were expanded at runtime, this could lead to fragmented virtual memory ranges, making tracking and referencing memory harder.
However, recent Java versions have introduced options like -XX:MaxRAMPercentage=n, which allow you to specify the percentage of memory to allocate to the Java heap, instead of an absolute value in bytes. For example, -XX:MaxRAMPercentage=80 will allocate 80% of the available memory to the Java heap (the default is 25%).
The -XX:MaxRAMPercentage only works for systems with more than 200MB of memory (otherwise you need to use -XX:MinRAMPercentage, default 50%).
You can also use -XX:InitialRAMPercentage to specify the initial memory allocated to Java (MaxRAMPercentage is similar to -Xmx, InitialRAMPercentage is similar to -Xms).
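As a sketch, assuming a containerized app with a hypothetical main class com.example.Main, the percentage flags could be combined like this (the percentages are illustrative):
java -XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0 com.example.Main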
The JVM runs on a physical computer that has limited memory, so the JVM cannot have unlimited memory by definition. You can, however, supply a very large limit in the -Xmx option.
The question, however, is why you need unlimited memory, and the even better question is how much memory you really need.
On Linux, you can use the free and awk commands to calculate an inline value like this:
JAVA_OPT_MAX_MEM=$(free -m | awk '/Mem:/ {printf "-Xmx%dm", 0.80*$2}')
Example result (on a machine with 3950m of total memory):
JAVA_OPT_MAX_MEM=-Xmx3160m
The calculated option is 80% of the total reported memory.
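The resulting variable can then be passed straight to the java command, for example (the jar name is illustrative):
java $JAVA_OPT_MAX_MEM -jar yourapp.jar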
JVM default settings:
-Xms32m -Xmx128m -Xss128k -Xoss128k -XX:ThreadStackSize=128
I need to change the default heap settings and want to increase the heap memory on a 64 GB server.
What problems can occur when the default settings are changed?
What is the limit for increasing the JVM heap memory?
And how can I change these JVM parameters?
How do I change JVM default heap settings?
It depends on what you mean by the "default" settings.
If you mean the "default" settings as implemented by the java command, then you can override the default setting using the -Xmx... (etcetera) command line options as described by #BetaRide. However, you cannot change what the java commmand's defaults are / how they are calculated.
If you mean the "default" settings used by some Java-based application, then there is no general answer. Different applications specify the heap size to be used in different ways. and provide different was to change the heap size. A common mechanism is to set a $JAVA_OPTS environment variable, but that is by no means universal. Check the application documentation or read the launch script.
What problems can occur when the default settings are changed?
If you make the heap too small, you can cause the application to suffer OutOfMemoryErrors. Depending on how well the application is written, it will either error out (a good thing), go into a "death spiral", or get into an indeterminate state. (The last happens if the OOMEs happen on a worker thread and the thread dies. The solution is to add a default uncaught exception handler that specifically causes the application to exit whenever it sees an Error or Error subclass.)
If you make the heap significantly bigger than you have physical memory, then you risk making the JVM thrash virtual memory when it does a garbage collection. That leads to bad performance, and on some OSes it can lead to the OS terminating your application; e.g. see https://unix.stackexchange.com/questions/479575/why-is-the-linux-oom-killer-terminating-my-programs
An overly large heap can also lead to unresponsiveness during garbage collection ... simply because certain phases of the GC (or the entire GC) will "stop the world", and the length of the stoppage is bigger for a bigger heap. (In some Java GC's, it is just the size of the "new" space that matters. In others, it is the entire heap size that matters. Refer to the Oracle documentation on GC Ergonomics for more details ... depending on your Java version.)
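If you want to see the effect of a heap change before committing to it, you can watch the GC behaviour under load; a minimal sketch using the Java 8 HotSpot logging flags (the jar name and heap size are illustrative):
java -Xmx4g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar yourapp.jar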
What is the limit to increase JVM heap memory?
There are a number of relevant limits.
Your platform's processor architecture and OS may limit you. For instance, on a 32-bit Intel platform, the maximum addressable memory for a JVM process would be 2^32 bytes ... and the OS will reserve a significant amount of that for its own purposes.
The OS will typically limit aggregate virtual memory usage for all processes based on the amount of available physical memory and swap space.
Some OSes (and containers) allow the administrator (or user) to place external limits on the virtual memory used by a process or group of processes.
Independent of the above, there are the practical limits I mentioned above. (A heap that is too big can cause problems with responsiveness, virtual memory thrashing ... and the OOM killer.)
The only option you need is
java -Xmx65536m your.main.Class
To answer your questions:
It's no longer limited to the default heap size.
See What is the largest possible heap size with a 64-bit JVM?
We've run into a java.lang.OutOfMemoryError: PermGen space error, and looking at the Tomcat JVM params, other than the -Xms and -Xmx params we also specify -XX:MaxPermSize=128m. After a bit of profiling I can see garbage collection occasionally happening on the PermGen space, saving it from running full.
My question is: other than increasing -XX:MaxPermSize, what would be the difference if I also specify -XX:PermSize? I know the total memory would then be Xmx + MaxPermSize, but is there any other reason why -XX:PermSize should not be there when -XX:MaxPermSize is specified?
Please do share if you have real-world experience dealing with these JVM parameters.
P.S. The JVM is HotSpot 64-bit Server VM build 16.2-b04
-XX:PermSize specifies the initial size that will be allocated during startup of the JVM. If necessary, the JVM will allocate up to -XX:MaxPermSize.
By playing with parameters such as -XX:PermSize and -Xms you can tune the performance of, for example, the startup of your application. I haven't looked at it recently, but a few years back the default value of -Xms was something like 32MB (I think); if your application required a lot more than that, it would trigger a number of cycles of fill memory, full garbage collect, increase memory, etc., until it had loaded everything it needed. This cycle can be detrimental to startup performance, so immediately assigning the required amount could improve startup.
A similar cycle is applied to the permanent generation. So tuning these parameters can improve startup (amongst others).
WARNING The JVM has a lot of optimization and intelligence when it comes to allocating memory, dividing eden space and older generations etc, so don't do things like making -Xms equal to -Xmx or -XX:PermSize equal to -XX:MaxPermSize, as it will remove some of the optimizations the JVM can apply to its allocation strategies and therefore reduce your application performance instead of improving it.
As always: make non-trivial measurements to prove your changes actually improve performance overall (for example improving startup time could be disastrous for performance during use of the application)
If you're doing some performance tuning it's often recommended to set both -XX:PermSize and -XX:MaxPermSize to the same value to increase JVM efficiency.
Here is some information:
Support for large page heap on x86 and amd64 platforms
Java Support for Large Memory Pages
Setting the Permanent Generation Size
You can also specify -XX:+CMSClassUnloadingEnabled to enable the class unloading option if you are using the CMS GC. It may help to decrease the probability of java.lang.OutOfMemoryError: PermGen space.
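For Tomcat specifically, these flags are commonly placed in CATALINA_OPTS (for example in bin/setenv.sh); a sketch with illustrative sizes, applicable to pre-Java-8 JVMs where PermGen still exists:
export CATALINA_OPTS="-Xms1g -Xmx2g -XX:PermSize=128m -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"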
I have a JVM server on my machine, and now I want to have two app servers of mine sitting on the same machine. I want the standby one to have a really low amount of memory allocated with -Xmx because it's passive; once the main (active) server goes down, I want to allocate more memory to my passive server, which is already up, without restarting it. (I can't have both of them with too high an -Xmx - note that they would consume memory at startup and I can't allow the possibility of an OutOfMemory.)
So I want the passive server to have a low -Xmx; once the active one goes down, I want my passive one to receive a much higher -Xmx.
Is there a way for me to achieve that?
Thanks
It would be nice, but as far as I know it's not an option with the Sun provided JVMs.
The Xmx option specifies the maximum memory; it's there to prevent the JVM from consuming the entire machine's free memory. If you want to set it higher, it won't require the JVM to allocate all of that memory. Why not just set it to a very high number and let the JVM grow into it over time?
To make sure your JVM doesn't start off with too little memory (creating lots of pauses as it grows the memory to the required size), adjust Xms to the size you want to allocate for the JVM at startup.
The short answer is unless your particular JVM allows for these values to be changed after initialization, you cannot (I believe this is the case for HotSpot).
However, you may be able to accomplish your goals without changing Xmx on the fly. For example, you could use a small -Xms setting, but keep -Xmx relatively high. If the passive server is not using much memory / generating garbage while still serving as the backup, then memory will stay near the Xms value. However, once the backup server takes over it would be allowed to expand allocated memory up to the Xmx value on an as-needed basis.
See java (windows) or java (*nix) as appropriate (though -Xms and -Xmx have the same general meaning on all platforms).
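A sketch of what the standby instance's launch might look like under this approach (the sizes and jar name are illustrative):
java -Xms256m -Xmx8g -jar standby-server.jar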
You don't need to adjust Xmx on the standby instance as long as it's not doing anything (or much of anything) because it should stay close to the value you set with Xms until it starts doing real work.
The Xmx switch governs the maximum amount of heap size the Java instance may consume. Xms governs the startup amount.
If you set Xms small on your standby instance and Xmx to whatever maximum your program needs, and then switch over to the Standby instance (killing the regular instance) it should work out fine.
It may be necessary to actually stop/kill the regular Java process, depending on your available memory, in order for the standby process to allocate all of the heap it needs as it moves from the initial lower heap size toward its maximum.
For the JVM to fill all the heap, you'd have to generate enough objects that survive the young generation collection. That would be unlikely on the lightly-loaded stand-by server.
To improve your chances of catching all the garbage in the young generation, configure your young generation heap accordingly: larger sizes, more generations before objects age out. This is a compromise between confining your standby server to young generation and the collection profile you need in your primary server.
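A sketch of such a young-generation configuration (the sizes are illustrative and apply to the older generational collectors, not G1):
java -Xmn1g -XX:SurvivorRatio=6 -XX:MaxTenuringThreshold=15 -Xms2g -Xmx8g -jar standby-server.jar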
Update: the new G1 collector uses different configuration options. Please look at http://www.oracle.com/technetwork/tutorials/tutorials-1876574.html to learn more. The option most relevant to your case would be
-XX:InitiatingHeapOccupancyPercent=45 - Percentage of the (entire) heap occupancy to start a concurrent GC cycle. It is used by G1 to trigger a concurrent GC cycle based on the occupancy of the entire heap, not just one of the generations. A value of 0 denotes 'do constant GC cycles'. The default value is 45 (i.e., 45% full or occupied).
IOW, the equivalent of young generation collection will start when the current heap (the min heap size initially) is 45% used up. Your light-load server should never leave the min heap size (unless it produces relatively long-living objects, in which case see -XX:MaxTenuringThreshold).
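A sketch of enabling G1 with that threshold (45 is also the default, so it is shown here only for illustration; sizes and jar name are made up):
java -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45 -Xmx8g -jar server.jar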
What is the benefit of setting the -Xms parameter and having the initial memory larger, for example, than the default calculated one (64 MB in my case, according to Java GC tuning:
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html#par_gc.ergonomics.default_size)?
Also, is there any benefit to setting both the initial and maximum memory to the same size?
Thanks.
The benefit is that there is a performance penalty when you use up enough of the heap that it has to be resized. If you set it initially to 64MB but it turns out your application under load needs 250MB, when you hit near 64MB the JVM will allocate more heap space and possibly move around some objects and do other book-keeping. This of course takes time.
When your application is under load, you want all resources dedicated to making it run, so this extra work can make the application slower to respond, or even in some instances it can crash if it runs out of memory before the heap is resized.
Sometimes when using a Java app, you'll see instructions like "set Xms and Xmx to the same value". This is done to avoid the resizing altogether, so that your application launches with its heap already as big as it will ever be.
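A sketch of that "fixed heap" launch style (the size and jar name are illustrative):
java -Xms2g -Xmx2g -jar yourapp.jar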
The linked article explains it clearly enough:
Default values: -Xms 3670k, -Xmx 64m
[...]
Large server applications often experience two problems with these defaults. One is slow startup, because the initial heap is small and must be resized over many major collections. A more pressing problem is that the default maximum heap size is unreasonably small for most server applications. The rules of thumb for server applications are:
Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. However, the virtual machine is then unable to compensate if you make a poor choice.
In general, increase the memory as you increase the number of processors, since allocation can be parallelized.
You may also be interested in this discussion of the problem.
What is the benefit of setting the -Xms parameter and having the initial memory larger, for example, than the default calculated one
If the initial heap is small and must be resized over many major collections, the startup will be slow.
Also, is there any benefit to setting both the initial and maximum memory to the same size?
Setting -Xms and -Xmx to the same value gives you predictability. This is especially important when sizing the JVM during performance tuning. But the JVM won't be able to compensate for a bad decision.
I tend to use the same values for production servers (which are tuned during performance testing).
If it is normal for your application to require more than 64 MB of heap memory, setting Xms to a larger value should improve the application's performance somewhat because the VM would not have to request additional memory as many times.
In a production system I consider setting Xms and Xmx to the same value sensible. It's basically saying "this is the amount of heap memory the VM can get and I'm dedicating it right away".