JVM heap space allocation is confusing

I have a Java application running with a max heap size of 8 GB.
On a machine with 32 GB of memory, the Old Gen slice was approximately 7.4 GB. On a machine with 128 GB of memory, the same application gets an Old Gen slice of approximately 6.2 GB.
I would like to know how the JVM arrives at this internally. Is there a formula it uses? I am currently in the GC tuning phase, and it would be helpful to know how this number is derived by default. I use JDK 1.7.

It does not have to do with the total RAM in the system. The GC ratio flags control how much memory each heap region can occupy.
-XX:NewRatio=n Ratio of old/new generation sizes. The default value is 2.
-XX:SurvivorRatio=n Ratio of eden/survivor space size. The default value is 8.
-XX:MaxHeapFreeRatio=n Maximum percentage of heap free after GC to avoid shrinking. The default value is 70.
Use the first two ratios to tweak how the heap is divided internally within the VM itself. Use the free-heap ratios to tweak how much the heap is allowed to grow or shrink by.
Recommended reading: Oracle's GC tuning guide.
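If you want to see what the JVM actually chose for each region under your flags, a small diagnostic like the following (my own sketch, not part of the original answer) can be launched with the same options as your application, e.g. java -Xmx8g -XX:NewRatio=2 -XX:SurvivorRatio=8 HeapRegionSizes:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class HeapRegionSizes {
    public static void main(String[] args) {
        // Pool names depend on the collector in use, e.g. "PS Old Gen", "Par Eden Space", "CMS Old Gen".
        // A max of -1 means the pool has no defined limit.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s max = %,d bytes%n", pool.getName(), pool.getUsage().getMax());
        }
    }
}
This lists the maximum size of each memory pool, so you can see directly how the 8 GB heap was split between the young and old generations on each machine.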

Related

How to configure the JVM heap size to grow after the JVM has reached the maximum heap size set by the -Xmx option

We have a requirement where the application's total JVM memory is very high and varies with the input dataset, so we have no idea what maximum heap size to set with the -Xmx command line option. The total memory needed is greater than the default maximum heap size (one quarter of the total physical memory).
When we do not give any GC ergonomics command line parameters, memory stops growing at 9-9.5 GB (the total physical memory in the system is 38 GB), and the application gets stuck at that point.
If we set the -Xmx value to 20 GB, the application runs. But we are not sure about the maximum heap size value, since it can change according to the input data.
Please advise on how to proceed in this case. Is there any option to increase the heap memory beyond the -Xmx value? Thanks for the help.
Try using -Xmx along with -Xms. Provide -Xms with the minimum required value (20 GB) and -Xmx with the maximum possible value for your operating system.
Since memory is not growing beyond 9-9.5 GB initially, set -Xms to 9 GB and set -Xmx to the maximum available physical memory minus 1 or 2 GB if no other process is running (30 GB on a 32 GB RAM machine).
You will have to fine-tune the garbage collection algorithm apart from these memory settings.
CMS is effective if -Xms and -Xmx have the same value.
For larger heaps, G1GC is very effective. You can have different values for the minimum and maximum heap, and it works fine if you properly tune the relevant parameters.
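As an illustration only (the heap figures are assumptions taken from the numbers in this question, not a recommendation), those settings could be combined on a launch line like this:
java -Xms9g -Xmx30g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar yourApplication.jar
where -XX:MaxGCPauseMillis sets G1's pause-time goal and yourApplication.jar is a placeholder for your own deployable.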
Have a look at the related SE question Java 7 (JDK 7) garbage collection and documentation on G1, and the Selecting a Collector article.

Does a large value for -Xmx postpone garbage collection?

I have a lot of JVMs running on Red Hat Linux with 32 GB of physical memory and 32 GB of virtual memory. These JVMs are configured with a combined -Xmx total of more than 32 GB, which may push Linux into using its virtual memory.
My question is: if I specify an -Xmx larger than the required heap size, will that delay garbage collection and, as a result, allocate more heap than necessary, so that the OS has to allocate memory from its virtual memory, which leads to degraded performance?
The JVM reserves the maximum heap size (as well as other memory areas) on startup. This means that if you have a maximum heap size of 32 GB, it will use about 33-35 GB of virtual memory including shared libraries, threads, etc.
It is worth noting that if you make the maximum heap size 32 GB, the JVM has to use 64-bit references, and you can end up with less usable memory than a JVM with a maximum heap of 24 GB, which can use 32-bit references. Some people have estimated that if you make the heap size 32 GB or more, you have to increase it to 48 GB to get more usable memory.
Given the size of your machine I suggest you limit the heap to 24 GB (or less) and where possible use off heap memory as this can have both performance advantages and greater scalability.
If you have a low-garbage program and you want to avoid collections, you can create a massive Eden size and GC once per day or once per week. To do this you have to keep the garbage you discard to a minimum. You could create an Eden size of, say, 20 GB, in which case, provided you create less than 20 GB of garbage in a day, you can avoid triggering any GCs (even minor ones) and run a full GC as an overnight maintenance task, e.g. at 2 am.
If you use a large heap, you want to avoid hitting swap at all costs. This is because the GC needs random access to the heap, and as soon as a collection triggers that is large enough to swap, your machine will thrash for quite a long time (possibly hours) and may even lock up. You might have to reboot to get your machine behaving normally (it is hard to kill a process or machine that is in such a state).
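A minimal sketch of the overnight maintenance task mentioned above, assuming you are willing to trigger it from inside the application with an explicit System.gc() (this only helps if -XX:+DisableExplicitGC is not set; the scheduling code itself is my assumption, not part of the original answer):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyFullGc {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Request a full collection once every 24 hours; computing the exact delay
        // until 2 am is omitted here for brevity.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.gc();
            }
        }, 1, 24, TimeUnit.HOURS);
    }
}
An external alternative is to trigger the collection from a scheduled job via jcmd or JMX, which keeps the scheduling concern out of the application code.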

Regarding memory usage of Java Application

I have a question regarding the memory usage of Java web applications.
In Task Manager, the Tomcat service seems to occupy 8,677,544,000 bytes of memory.
In jVisualVM, the memory usage of the Java application deployed under that Tomcat is as follows:
Heap:
Used: 2,304,233,184 B
Size: 8,589,934,592 B
Max: 10,737,418,240 B
Permgen:
Used: 80,272,056 B
Size: 536,870,912 B
Max: 536,870,912 B
Memory Parameters in Tomcat's Service.bat file:
-Xms8192m;-Xmx10240m;-XX:PermSize=512m;-XX:MaxPermSize=512m
Now, my question is: no matter what I set MaxHeapFreeRatio to, the free space does not shrink, even though the used space shrinks at times.
Can anyone kindly tell me why it behaves like this? Is there a way to shrink the free space so that other processes running on the system can utilize it?
I am using the latest versions of JDK 1.7 and Tomcat 7.
With the parameter -Xms8192m you're telling Tomcat (well, the JVM) to allocate at least 8 GiB of heap space on startup.
The JVisualVM stats support that, but also tell you that around 2 GiB is being used by the application.
Reduce the start-up value to something lower (start at 2 GiB). Note that if the application needs more heap space, you've told it that it can use up to 10 GiB (-Xmx10240m), so it may be worth trimming this value down too (maybe to 4 GiB).
Obviously, if you start to get OOME's, you'll need to increase the values until the application has enough to run happily (assuming no memory leaks etc.).
Also, note that the process size will always be larger than the heap size as it also includes the perm gen space (-XX:MaxPermSize=512m) and internal memory / libraries for the JVM itself.
Basic examples:
-Xms512m;-Xmx1536m;-XX:PermSize=64m;-XX:MaxPermSize=256m
The minimum values here are 512 MiB heap, 64 MiB perm gen, so the minimum OS process size would be around 600 - 650 MiB.
As the application allocates more space up to the max values (1.5 GiB heap, 256 MiB perm gen) I'd expect the process size to reach about 2 GiB.
Now if we use your values:
-Xms8192m;-Xmx10240m;-XX:PermSize=512m;-XX:MaxPermSize=512m
The minimum values here are 8 GiB heap, 512 MiB perm gen, so the minimum OS process size would be around 8.6 GiB - pretty much exactly what you're seeing.
Using the max values, the process size would reach nearly 11 GiB.
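Putting that advice into the same Service.bat format, an illustrative starting point (the heap figures are assumptions to be tuned against your application, not part of the original answer) would be:
-Xms2048m;-Xmx4096m;-XX:PermSize=512m;-XX:MaxPermSize=512m
which gives a minimum OS process size of roughly 2.6 GiB and a ceiling of roughly 4.6 GiB.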

GC heap sizing and program memory overhead

I'm trying to figure out what's happening with my application.
Problems:
Invoking the GC doesn't reduce the unused heap size as much as it should, even though I'm using the serial GC (or UseParNewGC) and aggressive heap free ratios.
The program's memory in use is always a lot bigger than the current used plus unused heap; too much, in my opinion, even with the other JVM memory added on top of the heap.
Command line used:
java -XX:+UseSerialGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -Xmx2500M -cp XXXXXX.jar xxxx.xxxx.xxxx
pause
I tried with UseParNewGC, with the same results.
System:
Win7 SP1
4GB RAM + 4GB swapfile
2.99GHZ
Java 1.7 + JDK 1.7
Please see picture to make things more clear:
http://i.stack.imgur.com/i3sxw.jpg
Rather than setting free ratios, try to set the New Generation to a size that lets short-lived objects die there, promoting as few as possible to the Old Generation.
Bear in mind that a larger Young Generation means larger collections.
Then set the maximum Old Generation to a size that does not take too long to full-GC, but is not so small that full GCs run constantly.
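For illustration, assuming (purely as an example) a 512 MB New Generation inside the 2500 MB heap from the command line above, that sizing could be expressed as:
java -XX:+UseSerialGC -Xmn512M -Xmx2500M -cp XXXXXX.jar xxxx.xxxx.xxxx
where -Xmn fixes the New Generation size and the rest of the command is unchanged from the question.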

Tomcat memory consumption is more than heap + permgen space

I am observing a mismatch in Tomcat RAM consumption between what the OS says and what jVisualVM says.
From htop, the Tomcat JVM has 993 MB of resident memory.
From jVisualVM, the Tomcat JVM is using
Heap Max: 1,070,399,488 B
Heap Size: 298,438,656 B
Heap Used: variable, between 170 MB and 270 MB
PermGen Max: 268,435,456 B
PermGen Size: 248,872,960 B
PermGen Used: slightly variable, around 150MB
From my understanding the OS memory consumption should be Heap Size + PermGen Size ~= 522 MB. But that's 471 MB less than what I'm observing.
Anyone got an idea what am I missing here?
PS: I know that my max heap is much higher than what is used, but I'm assuming that should have no effect if the JVM does not use it (i.e. Heap Size is lower).
Thanks!
Marc
From my understanding the OS memory consumption should be Heap Size + PermGen Size ~= 522 MB. But that's 471 MB less than what I'm observing. Anyone got an idea what am I missing here?
If I understand the question what you are seeing is a combination of memory fragmentation and JVM memory overhead in other areas. We often see 2 times the memory usage for our production programs than we would expect to see from our memory settings.
Memory fragmentation can mean that although the JVM thinks the OS has given it some number of bytes, there is a certain additional number of bytes that had to be given because of memory-subsystem optimizations.
In terms of JVM overhead, there are a number of other storage areas that are not included in the standard memory configs. Here's a good discussion about this. To quote:
The following are examples of things that are not part of the garbage-collected heap and yet are part of the memory required by the process:
Code to implement the JVM
The C manual heap for data structures implementing the JVM
Stacks for all of the threads in the system (app + JVM)
Cached Java bytecode (for libraries and the application)
JITed machine code (for libraries and the application)
Static variables of all loaded classes
The first thing we have to bear in mind is that JVM process heap (OS process) = Java object heap + [permanent space + code generation + socket buffers + thread stacks + direct memory space + JNI code + JNI-allocated memory + garbage collection], and in this "collection" the perm space is usually the biggest chunk.
Given that, I guess the key here is the JVM option -XX:MinHeapFreeRatio=n, where n is from 0 to 100, and it specifies that the heap should be expanded if less than n% of the heap is free. It usually defaults to 40 (Sun), so when the JVM allocates memory, it gets enough to keep 40% free (this is not applicable if you have -Xms == -Xmx). Its "twin option", -XX:MaxHeapFreeRatio, usually defaults to 70 (Sun).
Therefore, in a Sun JVM the proportion of free heap after each garbage collection is kept within 40-70%. If less than 40% of the heap is free after a GC, the heap is expanded. So assuming you are running a Sun JVM, I would guess that the size of the "Java object heap" has reached a peak of about 445 MB, thus producing an expanded "object heap" of about 740 MB (to guarantee 40% free). Then, (object heap) + (perm space) = 740 + 250 = 990 MB.
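As a rough check on that arithmetic (mine, not from the original answer): keeping at least 40% of the heap free means heap size ≈ live data / (1 − 0.40) ≈ 445 MB / 0.6 ≈ 742 MB, which matches the 740 MB figure above.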
Maybe you can try to output GC details or use jconsole to verify the evolution of the heap size.
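For example, with the standard HotSpot JDK 7 logging flags (the log file name is only an illustration):
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log ...
each collection will then be logged with the heap size before and after the GC, so you can watch the expansions happen.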
P.S.: when dealing with issues like this, it is good to post OS and JVM details.
During the startup of your application the JVM will reserve memory equal to roughly the size of your Heap Max value (-Xmx) plus a bit more for other stuff. This prevents the JVM from having to go back to the OS to reserve more memory later.
Even if your application is only using 298 MB of heap space, there will still be the 993 MB reserved with the OS. You will need to read more about reserved vs committed memory.
Most of the articles you will read when talking about garbage collection will refer to allocation from a heap perspective and not the OS level. By reserving the memory at start-up for your application, the garbage collection can work in its own space.
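A small sketch (my own, for illustration) that prints the figures jVisualVM is reporting from inside the application, so you can see the reserved/committed distinction directly:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class ReservedVsCommitted {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // init roughly corresponds to -Xms, committed is what has actually been taken
        // from the OS so far, and max roughly corresponds to -Xmx (the reserved ceiling).
        System.out.printf("init = %,d B, used = %,d B, committed = %,d B, max = %,d B%n",
                heap.getInit(), heap.getUsed(), heap.getCommitted(), heap.getMax());
    }
}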
If you need more details, read the article Tuning Garbage Collection
Here are some important excerpts from the document:
At initialization, a maximum address space is virtually reserved but
not allocated to physical memory unless it is needed.
Also look at section 3.2 (iv) in the document
At initialization of the virtual machine, the entire space for the
heap is reserved. The size of the space reserved can be specified with
the -Xmx option. If the value of the -Xms parameter is smaller than
the value of the -Xmx parameter, not all of the space that is reserved
is immediately committed to the virtual machine.
The OS will report the memory used by the JVM plus the memory used by your program, so it will always be higher than what the JVM reports as memory usage. There is a certain amount of memory needed by the JVM itself in order to execute your program, and the OS can't tell the difference.
Unfortunately, using the system memory tools isn't a very precise way to track your program's memory consumption. JVMs typically allocate large blocks of memory up front so that object creation is quick, but that doesn't mean your program is consuming all of that memory.
A better way of knowing what your program is actually doing is to run jconsole and look at the memory usage there. That's a very simple tool for looking at memory that's easy to set up.
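If you want a quick in-process check to go with what jconsole shows, something like this sketch (mine, not from the original answer) prints the gap between what the JVM has allocated from the OS and what live objects actually occupy:
public class MemorySnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long allocated = rt.totalMemory();          // heap the JVM has already claimed from the OS
        long used = allocated - rt.freeMemory();    // portion actually occupied by objects
        System.out.printf("allocated = %,d B, used = %,d B, max = %,d B%n",
                allocated, used, rt.maxMemory());
    }
}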
