Change in GC behaviour after move from Java 5 to Java 6

We've recently migrated our systems from Sun Java 5 to the Java 6 server VM (specifically, 1.6.0_16 on 32-bit Linux). We've noticed that the garbage collection behaviour has changed in such a way as to trigger our heap-warning monitoring system.
The heap usage graphs indicate a much "spikier" memory usage profile than we saw with Java5, with the VM letting heap usage get very high before running a big GC. It doesn't appear to be a problem with the application system itself (it never actually runs out of memory), but it's giving the monitoring system the occasional spurious "hair on fire" signals whenever the usage spike approaches the threshold.
We could increase the heap max and hope the spike doesn't simply get bigger, but I'd much rather find out if there's a way we can tune the JVM parameters so that we get a smoother profile, even if we lose a bit of performance.
I'm guessing there might be some -XX option we can set to achieve this, but I can't see any such thing in the docs. Anyone know of such an option?

It sounds like you would really like something more like the concurrent collector (as opposed to the standard big-bang collections):
The concurrent collector is designed for applications that prefer shorter garbage collection pauses and that can afford to share processor resources with the garbage collector while the application is running.
Perhaps even more important, you should ensure that you're using the correct VM with the right options, over and above the specific garbage collection options. For example, I've tripped over the client vs. server VM issue multiple times in my own life.
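For example, on the 1.6 VM in the question, something along these lines would select the server VM and the concurrent collector (the heap sizes and jar name are placeholders, and the exact values need tuning against your workload):
java -server -Xms1024m -Xmx1024m -XX:+UseConcMarkSweepGC -jar yourapp.jar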

Have fun reading and playing with the Java 6 GC tuning guide :-)

Can you confirm the same GC scheme/mechanism is employed? Do you calculate higher GC overhead in 1.6 or are pause times greater over any given duration?
Max and min heap free directives may help with some of your heap ergonomics too.
-XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html#generation_sizing.total_heap
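As a sketch of how those free-ratio directives might be combined to keep the amount of unused heap in a tighter band (the values here are illustrative assumptions, not tested recommendations):
java -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar yourapp.jar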

Related

In what circumstances does HotSpot JVM's GC release memory back to the OS? [duplicate]

When the garbage collector runs and releases memory, does this memory go back to the OS, or is it kept as part of the process? I was under the strong impression that the memory is never actually released back to the OS but kept as part of the memory area/pool to be reused by the same process.
As a result, the actual memory of a process would never decrease. An article that reminded me was this, and Java's Runtime is written in C/C++, so I guess the same thing applies?
Update
My question is about Java. I am mentioning C/C++ since I assume Java's allocation/deallocation is done by the JRE using some form of malloc/delete.
The HotSpot JVM does release memory back to the OS, but does so reluctantly since resizing the heap is expensive and it is assumed that if you needed that heap once you'll need it again.
In general, shrinking ability and behavior depend on the chosen garbage collector and the JVM version, since shrinking capability was often introduced in later versions, long after the GC itself was added. Some collectors may also require additional options to opt into shrinking, and some most likely never will support it, e.g. EpsilonGC.
So if heap shrinking is desired it should be tested for a particular JVM version and GC configuration.
JDK 8 and earlier
There are no explicit options for prompt memory reclamation in these versions but you can make the GC more aggressive in general by setting -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 which will allow it to spend more CPU time on collecting and constrain the amount of allocated-but-unused heap memory after a GC cycle.
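Put together, an invocation using those flags might look like the following (the heap sizes and jar name are placeholders):
java -Xms256m -Xmx2g -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 -jar app.jar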
If you're using a concurrent collector you can also set -XX:InitiatingHeapOccupancyPercent=N with N to some low value to let the GC run concurrent collections almost continuously, which will consume even more CPU cycles but shrink the heap sooner. This generally is not a good idea, but on some types of machines with lots of spare CPU cores but short on memory it can make sense.
If you're using G1GC note that it only gained the ability to yield back unused chunks in the middle of the heap with jdk8u20, earlier versions were only able to return chunks at the end of the heap which put significant limits on how much could be reclaimed.
If you're using a collector with a default pause time goal (e.g. CMS or G1) you can also relax that goal to place fewer constraints on the collector, or you can switch to the parallel collector to prioritize footprint over pause times.
To verify that shrinking occurs, or to diagnose why a GC decides not to shrink, you can use GC logging; -XX:+PrintAdaptiveSizePolicy may also provide insight, e.g. when the JVM tries to use more memory for the young generation to meet some goals.
JDK 9
Added the -XX:-ShrinkHeapInSteps option, which can be used to apply the shrinking caused by the options mentioned in the previous section more aggressively. Relevant OpenJDK bug.
For logging, -XX:+PrintAdaptiveSizePolicy has been replaced with -Xlog:gc+ergo.
JDK 12
Introduced an option to enable prompt memory release for G1GC via -XX:G1PeriodicGCInterval (JEP 346), again at the expense of some additional CPU. The JEP also mentions similar features in Shenandoah and the OpenJ9 VM.
JDK 13
Adds similar behavior for ZGC; in this case it is enabled by default. Additionally, -XX:SoftMaxHeapSize can be helpful for some workloads to keep the average heap size below some threshold while still allowing transient spikes.
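For illustration, a JDK 13 command line with these options might look like this (ZGC was still experimental in 13, hence the unlock flag; the sizes are placeholders):
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx4g -XX:SoftMaxHeapSize=2g -jar app.jar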
The JVM does release back memory under some circumstances, but (for performance reasons) this does not happen whenever some memory is garbage collected. It also depends on the JVM, OS, garbage collector etc. You can watch the memory consumption of your app with JConsole, VisualVM or another profiler.
Also see this related bug report
If you use the G1 collector and call System.gc() occasionally (I do it once a minute), Java will reliably shrink the heap and give memory back to the OS.
Since Java 12, G1 does this automatically if the application is idle.
I recommend using these options combined with the above suggestion for a very compact resident process size:
-XX:+UseG1GC -XX:MaxHeapFreeRatio=30 -XX:MinHeapFreeRatio=10
I've been using these options daily for months with a big application (a whole Java-based guest OS) that dynamically loads and unloads classes - and the Java process almost always stays between 400 and 800 MB.
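A minimal sketch of the once-a-minute System.gc() call described above (the interval is this answer's choice, not a requirement):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {
    public static void main(String[] args) throws InterruptedException {
        // Daemon thread so the scheduler does not keep the JVM alive on its own.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "periodic-gc");
            t.setDaemon(true);
            return t;
        });
        // Hint a full collection once a minute; with G1 this lets the JVM
        // shrink the heap and return memory to the OS.
        scheduler.scheduleAtFixedRate(System::gc, 1, 1, TimeUnit.MINUTES);

        // ... application work goes here ...
        Thread.sleep(Long.MAX_VALUE);
    }
}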
This article explains how the GC works in Java 7. In a nutshell, there are many different garbage collectors available. Usually the memory is kept for the Java process, and only some GCs release it to the system (upon request, I think). But the memory used by the Java process will not grow indefinitely, as there is an upper limit defined by the -Xmx option (usually 256m, but I think it is OS/machine dependent).
ZGC, released in Java 13, can return unused heap memory to the operating system.
Please see the link.

Java: how to trace/monitor GC times for the CMS garbage collector?

I'm having trouble figuring out a way to monitor the JVM GC for memory exhaustion issues.
With the serial GC, we could just look at the full GC pause times and have a pretty good notion if the JVM was in trouble (if it took more than a few seconds, for example).
CMS seems to behave differently.
When querying lastGcInfo from the java.lang:type=GarbageCollector,name=ConcurrentMarkSweep MXBean (via JMX), the reported duration is the sum of all GC steps and is usually several seconds long. This does not indicate an issue with GC; on the contrary, I've found that too-short GC times are usually more of an indicator of trouble (which happens, for example, if the JVM goes into a CMS-concurrent-mark-start -> concurrent mode failure loop).
I've tried jstat as well, which gives the cumulative time spent garbage collecting (unsure if it's for old or new gen GC). This can be graphed, but it's not trivial to use for monitoring purposes. For example, I could parse jstat -gccause output and calculate differences over time, then trace and monitor that (e.g. the amount of time spent GC'ing over the last X minutes).
I'm using the following JVM arguments for GC logging:
-Xloggc:/xxx/gc.log
-XX:+PrintGCDetails
-verbose:gc
-XX:+PrintGCDateStamps
-XX:+PrintReferenceGC
-XX:+PrintPromotionFailure
Parsing gc.log is also an option if nothing else is available, but the optimal solution would be to have a java-native way to get at the relevant information.
The information must be machine-readable (to send to monitoring platforms) so visual tools are not an option. I'm running a production environment with a mix of JDK 6/7/8 instances, so version-agnostic solutions are better.
Is there a simple(r) way to monitor CMS garbage collection? What indicators should I be looking at?
Fundamentally, one wants two things from the CMS concurrent collector:
the throughput of the concurrent cycle to keep up with the promotion rate, i.e. the objects surviving into the old gen per unit of time
enough room in the old generation for objects promoted during a concurrent cycle
So, say the IHOP is fixed at 70%; then you are probably approaching a problem when occupancy reaches >90% at some point. Maybe even earlier, if you do some large allocations that don't fit into the young generation or outlive it (that's entirely application-specific).
Additionally, you usually want the collector to spend more time outside the concurrent cycle than in it, although that depends on how tightly you tune it. In principle you could have the concurrent cycle running almost all the time, but then you have very little throughput margin and burn a lot of CPU time on concurrent collections.
If you really, really want to avoid even the occasional Full GC, then you'll need even more safety margin due to fragmentation (CMS is non-compacting). I think this can't be monitored via MX beans; you'll have to enable some CMS-specific GC logging to get fragmentation info.
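For the machine-readable monitoring the question asks about, one version-agnostic starting point is to poll the standard GarbageCollectorMXBeans and diff their cumulative counters over time; this is a minimal sketch (the one-minute polling interval is arbitrary, and the output format is a placeholder for whatever your monitoring platform ingests):
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcMonitor {
    public static void main(String[] args) throws InterruptedException {
        long lastTime = 0, lastCount = 0;
        while (true) {
            long time = 0, count = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                time += gc.getCollectionTime();   // cumulative ms spent in GC
                count += gc.getCollectionCount(); // cumulative number of collections
            }
            // GC time and number of collections accumulated over the last interval.
            System.out.printf("gc-time-delta=%dms collections=%d%n",
                    time - lastTime, count - lastCount);
            lastTime = time;
            lastCount = count;
            Thread.sleep(60000);
        }
    }
}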
For viewing GC logs:
If you have already enabled GC logging, I suggest GCViewer - this is an open source tool that can be used to view GC logs and look at parameters like throughput, pause times etc.
For profiling:
I don't see a JDK version mentioned in the question. For JDK 6, I would recommend VisualVM to profile an application. For JDK 7/8, I would suggest Mission Control. You can find these in the JDK's bin folder. These tools can be used to see how the application performs over a period of time and during GC (you can trigger a GC via the VisualVM UI).

Explain observed JVM Garbage Collection on JBoss Server

With VisualVM I am observing the following heap usage on a JBoss server:
The server is started with the following (relevant) JVM options:
-Xrs -Xms3072m -Xmx3072m -XX:MaxPermSize=512m -XX:+UseParallelOldGC -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
And we currently also have enabled GC logging:
-XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:log\gc.log
Basically I am happy with the observed pattern, since it looks like we don't have any memory leaks (the pattern repeats itself over days).
However I am wondering if there is room for optimization?
First of all, I don't understand why the garbage collection already kicks in when the heap usage reaches about 2GB? It looks to me like it could kick in later since the heap would have 3GB available?
Further more I would be interested in tips regarding the observed heap usage pattern and the used JVM options:
Does the observed pattern allow me to draw conclusions about the GC strategy used (UseParallelOldGC)? Is this strategy the right one, or should I try another one given the observed heap usage?
Can I optimize the GC process, so that the full heap size (3GB) is used?
Right now it looks like the full 3GB are never used, should I reduce the Xms/Xmx to 2.5GB?
Are there any obvious GC optimizations that I am missing? Like tuning -XX:NewSize or -XX:NewRatio?
Any other tips that come to mind?
Thanks!
I'd say the GC behaviour in your screen-shot looks 'normal'.
You'd usually want major collections to trigger before the heap space gets too full, or it would be very easy to run into OutOfMemoryErrors in a number of scenarios.
Also, are you aware that Java's heap space is divided into distinct areas for new (eden), current (survivor) and old (tenured) objects?
This answer provides some excellent information on the subject, so I won't repeat it here:
How is the java memory pool divided?
Very basically, each area of the heap triggers its own collections. The eden space is normally collected often and 'quickly'; the survivor and tenured spaces are usually larger and take longer to collect.
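If you do experiment with the generation-sizing knobs the question mentions, the options look like this (the values are purely illustrative, not recommendations for this workload):
-Xms3072m -Xmx3072m -XX:NewSize=1024m -XX:MaxNewSize=1024m
or, expressed as a ratio of old generation to new generation:
-XX:NewRatio=2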
Could you reduce your heap size based on the above graph?
Yes. However, your current configuration allows your application some breathing room, if it's ever likely to encounter busier periods or spikes in load.
Can you optimize GC?
Yes, but there are no magic settings. The first question is do you really need to? If your application is just a non-interactive 'processor', I really wouldn't bother. If you have a genuine need for a low pause application, then there are some tweaks available. The trade off is generally that you'll need more resources to achieve the same result.
My experience is that low-pause JVM configurations have a very noticeable fall-off point when load increases. If your application is usually fairly idle, but you expect a 'quick' response when it is called, low pause may be appropriate. On a busier system, with peaks in traffic / load, you may prefer a more traditional approach.
Summary
In any case, don't be tempted to make arbitrary changes to 'improve' your configuration. Be scientific and professional about your approach.
If you don't have production metrics available, consider using tools like Apache JMeter to build load-test scenarios that simulate the typical live load on your application, increased load (by say 10%, 20% or 50%), and intermittent peak load.
Use metrics for both the GC and the application, measuring at least:
Average throughput.
Peak throughput.
Average load (CPU and memory).
Peak load.
Application pause times (total and individual pauses).
Time spent performing collections.
Reliability (OOME's etc.).
Once you're happy that you've recorded an accurate benchmark of the performance of your application with its current configuration, only then should you start making changes.
Obviously, record your configuration and its metrics. Document any changes and then perform the same benchmark tests. Then you'll be able to see any performance gain (or loss) and any trade-off that may apply.
Here's some further reading from Oracle on the subject to get you started:
Java SE 6 Virtual Machine Garbage Collection Tuning

Does the Sun JVM slow down when more memory is allocated via -Xmx?

Does the Sun JVM slow down when more memory is available and used via -Xmx? (Assumption: The machine has enough physical memory so that virtual memory swapping is not a problem.)
I ask because my production servers are to receive a memory upgrade. I'd like to bump up the -Xmx value to something decadent. The idea is to prevent any heap space exhaustion failures due to my own programming errors that occur from time to time. Rare events, but they could be avoided with my rapidly evolving webapp if I had an obscene -Xmx value, like 2048mb or higher. The application is heavily monitored, so unusual spikes in JVM memory consumption would be noticed and any flaws fixed.
Possible important details:
Java 6 (running in 64-bit mode)
4-core Xeon
RHEL4 64-bit
Spring, Hibernate
High disk and network IO
EDIT: I tried to avoid posting the configuration of my JVM, but clearly that makes the question ridiculously open ended. So, here we go with relevant configuration parameters:
-Xms256m
-Xmx1024m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:MaxGCPauseMillis=1000
-XX:MaxGCMinorPauseMillis=1000
-XX:+PrintGCTimeStamps
-XX:+HeapDumpOnOutOfMemoryError
By adding more memory, it will take longer for the heap to fill up. Consequently, it will reduce the frequency of garbage collections. However, depending on how mortal your objects are, you may find that how long it takes to do any single GC increases.
The primary factor for how long a GC takes is how many live objects there are. Thus, if virtually all of your objects die young and once you get established, none of them escape the young heap, you may not notice much of a change in how long it takes to do a GC. However, whenever you have to cycle the tenured heap, you may find everything halting for an unreasonable amount of time since most of these objects will still be around. Tune the sizes accordingly.
If you just throw more memory at the problem, you will get better throughput in your application, but your responsiveness can go down if you're not on a multi-core system using the CMS garbage collector. This is because fewer GCs will occur, but they will have more work to do. The upside is that you will get more memory freed up with each GC, so allocation will continue to be very cheap, hence the higher throughput.
You seem to be confusing -Xmx and -Xms, by the way. -Xms just sets the initial heap size, whereas -Xmx is your max heap size.
More memory usually gives you better performance in garbage collected environments, at least as long as this does not lead to virtual memory usage / swapping.
The GC only tracks references, not memory per se. In the end, the VM will allocate the same number of (mostly short-lived, temporary) objects, but the garbage collector needs to be invoked less often - so the total amount of garbage-collector work will not be greater, and may even be less, since more memory can also help with caching mechanisms that use weak references.
I'm not sure if there is still a server and a client VM for 64 bit (there is for 32 bit), so you may want to investigate that also.
In my experience, it does not slow down, BUT the JVM tries to cut back to Xms all the time and stay at or close to the lower boundary. So if you can afford it, bump Xms as well. Sun recommends setting both to the same size. Add some -XX:NewSize=512m (just a made-up number) to avoid the costly pile-up of old data in the old generation, which leads to longer/heavier GCs along the way. We are running our web app with a 700 MB NewSize because most data is short-lived.
So, bottom line: I do not expect a slowdown, but put more of your memory to work. Set a larger new-size area and set Xms equal to Xmx to lower the stress on the GC, because it does not need to keep trying to cut back to the Xms limit.
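To make that concrete, here is a hedged example of the suggested shape of the settings (the sizes are illustrative, and the collector flag simply matches the question's existing configuration):
java -Xms2048m -Xmx2048m -XX:NewSize=512m -XX:MaxNewSize=512m -XX:+UseConcMarkSweepGC -jar yourapp.jar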
Increasing -Xmx typically will not help your performance/throughput.
Theoretically there could be longer "stop the world" phases but in practice with the CMS that's not a real problem.
Of course you should not set -Xmx to some insane value like 300Gbyte unless you really need it :)
