When the garbage collector runs and releases memory, does this memory go back to the OS, or is it kept as part of the process? I was under the strong impression that the memory is never actually released back to the OS but kept as part of the memory area/pool to be reused by the same process.
As a result, the actual memory of a process would never decrease. An article that reminded me was this, and since Java's runtime is written in C/C++, I guess the same thing applies?
Update
My question is about Java. I am mentioning C/C++ since I assume that Java's allocation/deallocation is done by the JRE using some form of malloc/free.
The HotSpot JVM does release memory back to the OS, but does so reluctantly since resizing the heap is expensive and it is assumed that if you needed that heap once you'll need it again.
In general, shrinking ability and behavior depend on the chosen garbage collector and on the JVM version, since shrinking capability was often introduced in later releases, long after the GC itself was added. Some collectors also require additional options to be passed to opt into shrinking, and some most likely never will support it, e.g. EpsilonGC.
So if heap shrinking is desired, it should be tested for a particular JVM version and GC configuration.
JDK 8 and earlier
There are no explicit options for prompt memory reclamation in these versions, but you can make the GC more aggressive in general by setting -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30, which allows it to spend more CPU time on collecting and constrains the amount of allocated-but-unused heap memory after a GC cycle.
If you're using a concurrent collector you can also set -XX:InitiatingHeapOccupancyPercent=N with N set to some low value to let the GC run concurrent collections almost continuously, which will consume even more CPU cycles but shrink the heap sooner. This generally is not a good idea, but on some types of machines with lots of spare CPU cores but short on memory it can make sense.
If you're using G1GC, note that it only gained the ability to yield back unused chunks in the middle of the heap with JDK 8u20; earlier versions were only able to return chunks at the end of the heap, which put significant limits on how much could be reclaimed.
If you're using a collector with a default pause time goal (e.g. CMS or G1) you can also relax that goal to place fewer constraints on the collector, or you can switch to the parallel collector to prioritize footprint over pause times.
To verify that shrinking occurs, or to diagnose why the GC decides not to shrink, you can use GC logging; -XX:+PrintAdaptiveSizePolicy in particular provides insight into the decisions, e.g. when the JVM tries to use more memory for the young generation to meet some goals.
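For example, one possible JDK 8 logging setup (gc.log is only a placeholder file name) records each collection together with the resizing decisions:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintAdaptiveSizePolicy -Xloggc:gc.log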
JDK 9
Added the ShrinkHeapInSteps option; disabling it with -XX:-ShrinkHeapInSteps applies the shrinking caused by the options mentioned in the previous section more aggressively, i.e. in one step instead of spread over several GC cycles. Relevant OpenJDK bug.
For logging, -XX:+PrintAdaptiveSizePolicy has been replaced with -Xlog:gc+ergo.
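For example, to watch those ergonomics decisions on JDK 9 and later you could start the JVM with something like the following (the file name is just a placeholder):

-Xlog:gc+ergo=debug:file=gc-ergo.log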
JDK 12
Introduced the option to enable prompt memory release for G1GC via -XX:G1PeriodicGCInterval (JEP 346), again at the expense of some additional CPU. The JEP also mentions similar features in Shenandoah and the OpenJ9 VM.
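A possible flag combination for this (the interval is given in milliseconds; 30000 is only an illustrative value, and the default of 0 leaves the periodic collections disabled):

-XX:+UseG1GC -XX:G1PeriodicGCInterval=30000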
JDK 13
Adds similar behavior for ZGC; in this case it is enabled by default. Additionally, -XX:SoftMaxHeapSize can be helpful for some workloads to keep the average heap size below some threshold while still allowing transient spikes.
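For instance (sizes are purely illustrative; on JDK 13/14 ZGC is still experimental, hence the unlock flag):

-XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx4g -XX:SoftMaxHeapSize=2g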
The JVM does release memory back under some circumstances, but (for performance reasons) this does not happen every time some memory is garbage collected. It also depends on the JVM, OS, garbage collector, etc. You can watch the memory consumption of your app with JConsole, VisualVM or another profiler.
Also see this related bug report
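If you just want a rough in-process view instead of attaching a profiler, you can poll the Runtime API (a minimal sketch, not something from the answer above; totalMemory() is the heap currently committed by the JVM, which is roughly the heap part of what the OS sees):

long max   = Runtime.getRuntime().maxMemory();    // the -Xmx ceiling
long total = Runtime.getRuntime().totalMemory();  // heap currently committed by the JVM
long free  = Runtime.getRuntime().freeMemory();   // committed but currently unused
long used  = total - free;                        // live objects plus not-yet-collected garbage
System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n", used >> 20, total >> 20, max >> 20);

When the heap actually shrinks, totalMemory() drops, and the OS-level resident size should follow, though not necessarily immediately.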
If you use the G1 collector and call System.gc() occasionally (I do it once a minute), Java will reliably shrink the heap and give memory back to the OS.
Since Java 12, G1 does this automatically if the application is idle.
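A minimal sketch of that once-a-minute call (assuming explicit GC has not been disabled with -XX:+DisableExplicitGC):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Somewhere in application startup: request a full collection once a minute,
// which with G1 gives the JVM a chance to shrink the heap afterwards.
ScheduledExecutorService gcTimer = Executors.newSingleThreadScheduledExecutor();
gcTimer.scheduleAtFixedRate(System::gc, 1, 1, TimeUnit.MINUTES);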
I recommend using these options combined with the above suggestion for a very compact resident process size:
-XX:+UseG1GC -XX:MaxHeapFreeRatio=30 -XX:MinHeapFreeRatio=10
I've been using these options daily for months with a big application (a whole Java-based guest OS) that dynamically loads and unloads classes, and the Java process almost always stays between 400 and 800 MB.
This article explains how the GC works in Java 7. In a nutshell, there are many different garbage collectors available. Usually the memory is kept for the Java process, and only some GCs release it to the system (upon request, I think). But the memory used by the Java process will not grow indefinitely, as there is an upper limit defined by the -Xmx option (which is often 256m by default, but I think it is OS/machine dependent).
Since Java 13, ZGC can return unused heap memory to the operating system.
Please see the link.
Related
I have a server application that, in rare occasions, can allocate large chunks of memory.
It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context.
The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx.
That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation.
Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold, above which full GCs would be executed automatically, in order to release very early the memory I'm going to need.
Long story short: I need a way (a command line option?) to configure the JVM so that it releases a good amount of memory early (i.e. performs a full GC) when memory occupation reaches a certain threshold; I don't care if this slows my application down every once in a while.
All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly).
I'd appreciate your suggestions,
Silvio
P.S. I'm working on a way to avoid large allocations, but it could require a long time and meanwhile my app needs a little stability
UPDATE: analyzing the app with jvisualvm, I can see that the problem is in the old generation
From here (this is a 1.4.2 page, but the same option should exist in all Sun JVMs):
Assuming you're using the CMS garbage collector (which I believe the server VM turns on by default), the option you want is
-XX:CMSInitiatingOccupancyFraction=<percent>
where <percent> is the percentage of memory in use that will trigger a full GC.
Insert standard disclaimers here that messing with GC parameters can give you severe performance problems, varies wildly by machine, etc.
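For example, a typical combination looks like this (60 is just a starting value; the last flag is not mentioned in the linked page, but it is commonly added so the JVM keeps using your fraction instead of its own estimate):

-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly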
When you allocate large objects that do not fit into the young generation, they are immediately allocated in the tenured generation space. This space is only GC'ed when a full-GC is run which you try to force.
However I am not sure this would solve your problem. You say "JVM is not able to perform a GC quickly enough". Even if your allocations come in bursts, each allocation will cause the VM to check if it has enough space available to do it. If not - and if the object is too large for the young generation - it will cause a full GC which should "stop the world", thereby preventing new allocations from taking place in the first place. Once the GC is complete, your new object will be allocated.
If shortly after that the second large allocation is requested in your burst, it will do the same thing again. Depending on whether the initial object is still needed, it will either be able to succeed in GC'ing it, thereby making room for the next allocation, or fail if the first instance is still referenced.
You say "I need a way [...] to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold". This by definition can only succeed, if that "good amount of memory" is not referenced by anything in your application anymore.
From what I understand here, you might have a race condition which you might sometimes avoid by interspersing manual GC requests. In general you should never have to worry about these things; from my experience an OutOfMemoryError only occurs if there are in fact too many allocations to fit into the heap concurrently. In all other situations the "only" problem should be a performance degradation (which might become extreme, depending on the circumstances, but this is a different problem).
I suggest you do further analysis of the exact problem to rule this out. I recommend the VisualVM tool that comes with Java 6. Start it and install the VisualGC plugin. This will allow you to see the different memory generations and their sizes. Also there is a plethora of GC related logging options, depending on which VM you use. Some options have been mentioned in other answers.
The other options for choosing which GC to use and how to tweak thresholds should not matter in your case, because they all depend on enough memory being available to contain all the objects that your application needs at any given time. These options can be helpful if you have performance problems related to heavy GC activity, but I fear they will not lead to a solution in your particular case.
Once you are more confident in what is actually happening, finding a solution will become easier.
Do you know which of the garbage collection pools is growing too large, i.e. eden vs. survivor space? (Try the JVM option -Xloggc:<file> to log GC status to a file with timestamps.) When you know this, you should be able to tweak the size of the affected pool with one of the options mentioned here: HotSpot options for Java 1.4.
I know that page is for the 1.4 JVM, and I can't seem to find the same -X options in my current 1.6 install's help output, unless setting those individual pool sizes is a non-standard feature!
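A possible logging combination for the -Xloggc suggestion above (gc.log is a placeholder; -XX:+PrintHeapAtGC additionally prints per-generation occupancy before and after each collection, which helps answer the eden-vs-survivor question):

-Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC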
The JVM is only supposed to throw an OutOfMemoryError after it has attempted to release memory via garbage collection (according to both the API docs for OutOfMemoryError and the JVM specification). Therefore your attempts to force garbage collection shouldn't make any difference. So there might be something more significant going on here - either a problem with your program not properly clearing references or, less likely, a JVM bug.
There's a very detailed explanation of how GC works here and it lists parameters to control memory available to different memory pools/generations.
Try the -server option. It will enable the parallel GC, and you will see some performance increase if you use a multi-core processor.
Have you tried playing with the G1 GC? It should be available from 1.6.0u14 onwards.
What is the best way to tune a server application written in Java that uses a native C++ library?
The environment is a 32-bit Windows machine with 4GB of RAM. The JDK is Sun 1.5.0_12.
The Java process is given 1024MB of memory (-Xmx) at startup but I often see OutOfMemoryErrors due to lack of heap space. If the memory is increased to 1200MB, the OutOfMemoryErrors occur due to lack of swap space. How is the memory shared between the JVM and the native process?
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
I had lots of trouble with that setting (Java on 32-bit systems, msw and others), and it was all solved by reserving just under 1GB of RAM for the JVM.
Otherwise, as stated, the actual occupied memory in the system for that process would be over 2GB; at that point I was having 'silent deaths' of the process - no errors, no warnings, just the process terminating very quietly.
I got more stability and performance running several JVM (each with under 1GB RAM) on the same system.
I found some info on JNI memory management here, and here's the JVM JNI section on memory management.
Well, having a 3GB user space instead of a 2GB user space should help, but if you're having problems running out of swap space at 2GB, I think 3GB is just going to make it worse. How big is your pagefile? Is it maxed out?
You can get a better idea of your heap allocation by hooking up JConsole to your JVM.
How is the memory shared between the JVM and the native process?
Sun's JVM's garbage collector is mark-and-sweep, with options to enable concurrent and incremental GC.
Well, more accurately, it's staged, and the above only applies to tenured (long-lived) objects. For young objects, GC is still done with a stop-and-copy collector, which is much better for working with short-lived objects (and all typical Java programs create many short-lived objects).
A copying collector walks over all elements in the heap, copying them to a new heap if they are referenced, and then discards the former heap. Thus 1M of live objects requires up to 2M of real memory: if every object is alive, there will be two copies of everything during garbage collection.
So the JVM requires far more system memory than is available to the code running within the VM, because there is a substantial overhead to management and garbage collection.
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
The /3GB allows user virtual memory address space to be 3GB, but only for executables whose headers are marked with IMAGE_FILE_LARGE_ADDRESS_AWARE. As far as I am aware, Sun's java.exe is not. I don't have a Windows system here, so I can't verify.
You haven't explained your problem well enough, unfortunately. The real question is --- why is the Java process growing so much. Do you have a memory leak? Do you have a real reason to have that much data in the JVM?
Is the C++ library allocating its own memory from the C stack, or is it allocating memory from the Java object space, or is it doing something else entirely?
We've recently migrated our systems from Sun Java 5 to Java6 server VM (specifically, 1.6.0_16 on Linux 32 bit). We've noticed that the garbage collection behaviour has changed in such a way as to trigger our heap-warning monitoring system.
The heap usage graphs indicate a much "spikier" memory usage profile than we saw with Java5, with the VM letting heap usage get very high before running a big GC. It doesn't appear to be a problem with the application system itself (it never actually runs out of memory), but it's giving the monitoring system the occasional spurious "hair on fire" signals whenever the usage spike approaches the threshold.
We could increase the heap max and hope the spike doesn't simply get bigger, but I'd much rather find out if there's a way we can tune the JVM parameters so that we get a smoother profile, even if we lose a bit of performance.
I'm guessing there might be some -XX option we can set to achieve this, but I can't see any such thing in the docs. Anyone know of such an option?
It sounds like you would really like to have something more like a concurrent collection (as opposed to standard big-bang collections):
The concurrent collector is designed for applications that prefer shorter garbage collection pauses and that can afford to share processor resources with the garbage collector while the application is running.
Perhaps even more important, you should ensure that you're using the correct VM with the right options, over and above the specific garbage collection options. For example, I've tripped over the client vs. server VM issue multiple times in my own life.
Have fun reading and playing (Java 6 GC tuning :-)
Can you confirm the same GC scheme/mechanism is employed? Do you calculate higher GC overhead in 1.6 or are pause times greater over any given duration?
Max and min heap free directives may help with some of your heap ergonomics too.
-XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html#generation_sizing.total_heap
Does the Sun JVM slow down when more memory is available and used via -Xmx? (Assumption: The machine has enough physical memory so that virtual memory swapping is not a problem.)
I ask because my production servers are to receive a memory upgrade. I'd like to bump up the -Xmx value to something decadent. The idea is to prevent any heap space exhaustion failures due to my own programming errors that occur from time to time. Rare events, but they could be avoided with my rapidly evolving webapp if I had an obscene -Xmx value, like 2048mb or higher. The application is heavily monitored, so unusual spikes in JVM memory consumption would be noticed and any flaws fixed.
Possible important details:
Java 6 (running in 64-bit mode)
4-core Xeon
RHEL4 64-bit
Spring, Hibernate
High disk and network IO
EDIT: I tried to avoid posting the configuration of my JVM, but clearly that makes the question ridiculously open ended. So, here we go with relevant configuration parameters:
-Xms256m
-Xmx1024m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:MaxGCPauseMillis=1000
-XX:MaxGCMinorPauseMillis=1000
-XX:+PrintGCTimeStamps
-XX:+HeapDumpOnOutOfMemoryError
By adding more memory, it will take longer for the heap to fill up. Consequently, it will reduce the frequency of garbage collections. However, depending on how mortal your objects are, you may find that how long it takes to do any single GC increases.
The primary factor for how long a GC takes is how many live objects there are. Thus, if virtually all of your objects die young and once you get established, none of them escape the young heap, you may not notice much of a change in how long it takes to do a GC. However, whenever you have to cycle the tenured heap, you may find everything halting for an unreasonable amount of time since most of these objects will still be around. Tune the sizes accordingly.
If you just throw more memory at the problem, you will have better throughput in your application, but your responsiveness can go down if you're not on a multi-core system using the CMS garbage collector. This is because fewer GCs will occur, but they will have more work to do. The upside is that you will get more memory freed up with your GCs, so allocation will continue to be very cheap, hence the higher throughput.
You seem to be confusing -Xmx and -Xms, by the way. -Xms just sets the initial heap size, whereas -Xmx is your max heap size.
More memory usually gives you better performance in garbage collected environments, at least as long as this does not lead to virtual memory usage / swapping.
The GC only tracks references, not memory per se. In the end, the VM will allocate the same number of (mostly short-lived, temporary) objects, but the garbage collector needs to be invoked less often - the total amount of garbage collector work will therefore not be more - even less, since this can also help with caching mechanisms which use weak references.
I'm not sure if there is still a server and a client VM for 64 bit (there is for 32 bit), so you may want to investigate that also.
According to my experience, it does not slow down, BUT the JVM tries to cut back to Xms all the time and tries to stay at the lower boundary or close to it. So if you can afford it, bump Xms as well. Sun recommends setting both to the same size. Add something like -XX:NewSize=512m (just a made-up number) to avoid the costly pile-up of old data in the old generation, which leads to longer/heavier GCs along the way. We are running our web app with a 700 MB NewSize because most data is short-lived.
So, bottom line: I do not expect a slowdown, but put more of your memory to work. Set a larger new-size area and set Xms to Xmx to lower the stress on the GC, because it does not need to try to cut back to the Xms limit...
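Putting that advice together, a starting point might look like the following (the sizes are purely illustrative and should be tuned against the real workload):

-Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m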
It typically will not help your performance/throughput if you increase -Xmx.
Theoretically there could be longer "stop the world" phases but in practice with the CMS that's not a real problem.
Of course you should not set -Xmx to some insane value like 300Gbyte unless you really need it :)