What is the best way to tune a server application written in Java that uses a native C++ library?
The environment is a 32-bit Windows machine with 4GB of RAM. The JDK is Sun 1.5.0_12.
The Java process is given 1024MB of memory (-Xmx) at startup but I often see OutOfMemoryErrors due to lack of heap space. If the memory is increased to 1200MB, the OutOfMemoryErrors occur due to lack of swap space. How is the memory shared between the JVM and the native process?
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
I had lots of trouble with that setting (Java on 32-bit systems, MS Windows and others), and the problems were all solved by reserving just under 1GB of RAM for the JVM.
Otherwise, as stated, the actual occupied memory in the system for that process would be over 2GB; at that point I was having 'silent deaths' of the process - no errors, no warnings, just the process terminating very quietly.
I got more stability and performance by running several JVMs (each with under 1GB of RAM) on the same system.
I found some info on JNI memory management here, and here's the JVM JNI section on memory management.
Well, having a 3GB user space instead of a 2GB user space should help, but if you're having problems running out of swap space at 2GB, I think 3GB is just going to make it worse. How big is your pagefile? Is it maxed out?
You can get a better idea of your heap allocation by hooking up JConsole to your JVM.
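For example, assuming a local process id (shown here as <pid>, a placeholder), attaching is as simple as:

jconsole <pid>

On Java 5 the target JVM must be started with -Dcom.sun.management.jmxremote for local monitoring; from Java 6 onward JConsole can connect to local processes directly.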
How is the memory shared between the JVM and the native process?
Sun's JVM's garbage collector is mark-and-sweep, with options to enable concurrent and incremental GC.
Well, more accurately, it's generational, and the above applies only to tenured (long-lived) objects. For young objects, GC is still done with a stop-and-copy collector, which is much better suited to short-lived objects (and all typical Java programs create many short-lived objects).
A copying collector walks over all elements in the heap, copying them to a new heap if they are referenced, and then discards the former heap. Thus 1M of live objects requires up to 2M of real memory: if every object is alive, there will be two copies of everything during garbage collection.
So the JVM requires far more system memory than is available to the code running within the VM, because there is a substantial overhead to management and garbage collection.
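As a small illustration of the gap between what your code sees and what the process occupies, a sketch like the following (the class name is mine, purely illustrative) reports the heap from inside the VM; the OS-level process size will be larger still, due to GC overhead, thread stacks and native allocations:

public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();      // the -Xmx ceiling
        long total = rt.totalMemory();  // heap currently reserved from the OS
        long free = rt.freeMemory();    // unused portion of the reserved heap
        System.out.printf("max=%dMB reserved=%dMB used=%dMB%n",
                max >> 20, total >> 20, (total - free) >> 20);
    }
}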
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
The /3GB allows user virtual memory address space to be 3GB, but only for executables whose headers are marked with IMAGE_FILE_LARGE_ADDRESS_AWARE. As far as I am aware, Sun's java.exe is not. I don't have a Windows system here, so I can't verify.
You haven't explained your problem well enough, unfortunately. The real question is why the Java process is growing so much. Do you have a memory leak? Do you have a real reason to keep that much data in the JVM?
Is the C++ library allocating its own memory from the C stack, or is it allocating memory from the Java object space, or is it doing something else entirely?
Related
When the garbage collector runs and releases memory, does this memory go back to the OS, or is it kept as part of the process? I was under the strong impression that the memory is never actually released back to the OS, but kept as part of a memory area/pool to be reused by the same process.
As a result, the actual memory of a process would never decrease. An article that reminded me was this, and Java's runtime is written in C/C++, so I guess the same thing applies?
Update
My question is about Java. I am mentioning C/C++ since I assume Java's allocation/deallocation is done by the JRE using some form of malloc/free.
The HotSpot JVM does release memory back to the OS, but does so reluctantly, since resizing the heap is expensive and it is assumed that if you needed that much heap once, you'll need it again.
In general, shrinking ability and behavior depend on the chosen garbage collector and on the JVM version, since shrinking capability was often introduced in later versions, long after the GC itself was added. Some collectors may also require additional options to be passed to opt into shrinking, and some will most likely never support it, e.g. EpsilonGC.
So if heap shrinking is desired it should be tested for a particular JVM version and GC configuration.
JDK 8 and earlier
There are no explicit options for prompt memory reclamation in these versions but you can make the GC more aggressive in general by setting -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 which will allow it to spend more CPU time on collecting and constrain the amount of allocated-but-unused heap memory after a GC cycle.
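A hypothetical launch line combining these flags might look like this (app.jar is a placeholder, not from the original answer):

java -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 -jar app.jar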
If you're using a concurrent collector, you can also set -XX:InitiatingHeapOccupancyPercent=N with N set to some low value to let the GC run concurrent collections almost continuously, which will consume even more CPU cycles but shrink the heap sooner. This generally is not a good idea, but on machines with lots of spare CPU cores and little memory it can make sense.
If you're using G1GC, note that it only gained the ability to give back unused chunks in the middle of the heap with jdk8u20; earlier versions could only return chunks at the end of the heap, which put significant limits on how much could be reclaimed.
If you're using a collector with a default pause time goal (e.g. CMS or G1), you can also relax that goal to place fewer constraints on the collector, or you can switch to the parallel collector to prioritize footprint over pause times.
To verify that shrinking occurs, or to diagnose why a GC decides not to shrink, you can use GC logging; -XX:+PrintAdaptiveSizePolicy may also provide insight, e.g. when the JVM tries to use more memory for the young generation to meet some goals.
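As a sketch, a JDK 8-era launch line enabling this kind of logging might look like the following (app.jar and gc.log are placeholders):

java -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy -jar app.jar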
JDK 9
Added the -XX:-ShrinkHeapInSteps option, which can be used to apply the shrinking caused by the options mentioned in the previous section more aggressively. See the relevant OpenJDK bug.
For logging, -XX:+PrintAdaptiveSizePolicy has been replaced with -Xlog:gc+ergo.
JDK 12
Introduced the option to enable prompt memory release for G1GC via the -XX:G1PeriodicGCInterval option (JEP 346), again at the expense of some additional CPU. The JEP also mentions similar features in Shenandoah and the OpenJ9 VM.
JDK 13
Adds similar behavior for ZGC; in this case it is enabled by default. Additionally, -XX:SoftMaxHeapSize can be helpful for some workloads to keep the average heap size below some threshold while still allowing transient spikes.
The JVM does release memory back under some circumstances, but (for performance reasons) this does not happen every time some memory is garbage collected. It also depends on the JVM, the OS, the garbage collector, etc. You can watch the memory consumption of your app with JConsole, VisualVM or another profiler.
Also see this related bug report
If you use the G1 collector and call System.gc() occasionally (I do it once a minute), Java will reliably shrink the heap and give memory back to the OS.
Since Java 12, G1 does this automatically if the application is idle.
I recommend using these options combined with the above suggestion for a very compact resident process size:
-XX:+UseG1GC -XX:MaxHeapFreeRatio=30 -XX:MinHeapFreeRatio=10
I've been using these options daily for months with a big application (a whole Java-based guest OS) that dynamically loads and unloads classes, and the Java process almost always stays between 400 and 800 MB.
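A minimal sketch of the once-a-minute System.gc() call described above (the class and thread names are mine), intended for a JVM started with the G1 flags shown:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {
    public static void main(String[] args) {
        // Daemon scheduler so the GC-hint thread never keeps the app alive.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "periodic-gc");
            t.setDaemon(true);
            return t;
        });
        // Request a full GC once a minute; with G1 this tends to shrink the heap.
        scheduler.scheduleAtFixedRate(System::gc, 1, 1, TimeUnit.MINUTES);
        // ... application work goes here ...
    }
}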
This article here explains how the GC works in Java 7. In a nutshell, there are many different garbage collectors available. Usually the memory is kept for the Java process, and only some GCs release it to the system (upon request, I think). But the memory used by the Java process will not grow indefinitely, as there is an upper limit defined by the -Xmx option (which is usually 256m, but I think it is OS/machine dependent).
ZGC, released in Java 13, can return unused heap memory to the operating system. Please see the link.
I have a question on my mind. Let's assume that I have two parameters passed to JVM:
-Xms256m -Xmx1024m
At the beginning of the program, 256MB is allocated. Next, some objects are created and the JVM process tries to allocate more memory. Let's say that the JVM needs to allocate 800MB. The -Xmx setting allows that, but the memory currently available on the system (say Linux/Windows) is only 600MB. Is it possible that an OutOfMemoryError will be thrown? Or will the swap mechanism play a role?
My second question is related to the quality of GC algorithms. Let's say that I have jdk1.5u7 and jdk1.5u22. Is it possible that in the latter JVM the memory leaks vanish and the OutOfMemoryError does not occur? Can the quality of the GC be better in the later version?
The quality of the GC (barring a buggy GC) does not affect memory leaks, as memory leaks are an artifact of the application -- GC can't collect what isn't actual garbage.
If a JVM needs more memory, it will take it from the system. If the system can swap, it will swap (like any other process). If the system cannot swap, your JVM will fail with a system error rather than an OOM exception, because the system cannot satisfy the request, and at that point it's effectively fatal.
As a rule, you NEVER want an active JVM partially swapped out. A GC event will crush you as the system thrashes, cycling pages through the virtual memory system. It's one thing to have an idle background JVM swapped out as a whole, but if your machine has 1GB of RAM and your main process wants 1.5GB, then you have a major problem.
The JVM likes room to breathe. I've seen JVMs in a GC death spiral when they didn't have enough memory, even though they didn't have memory leaks. They simply didn't have enough working set. Adding another chunk of heap transformed those JVMs from awful behavior to happy sawtooth GC graphs.
Give a JVM the memory it needs, and both you and it will be much happier.
"Memory" and "RAM" aren't the same thing. Memory includes virtual memory (swap), so you can allocate a total of free RAM+ free swap before you get the OutOfMemoryError.
Allocation depends on the OS in use.
If you allocate too much memory, you could end up with portions of the heap paged out to swap, which is slow.
Whether your program runs faster or slower depends on how the VM handles the memory.
I would not specify a heap so big that it occupies all the physical memory, to keep the VM from slowing down due to paging.
Concerning your first question:
Actually, if the machine cannot reserve the 1024MB you asked for as the maximum heap size, it will not even start the JVM.
I know this because I often noticed it when trying to open Eclipse with a large heap size: when the OS could not allocate the larger heap space, the JVM failed to load. You could also try it yourself to confirm. So the rest of the details are irrelevant to you. Of course, if your program uses too much swap (same as in all languages), then performance will be horrible.
Concerning your second question:
the memory leaks vanish
Not possible, as they are bugs that you will have to fix.
and OutOfMemoryError does not occur? Can the quality of GC be better in the latest version?
This could happen if, for example, a different GC algorithm is used and it manages to kick in before you see the exception. But if you have a memory leak, this would probably just mask it, or you would see the error intermittently.
Also, various JVMs have different GCs that you can configure.
Update:
I have to admit (after seeing #Orochi's note) that I observed the max-heap behavior on Windows. I cannot say for sure that this applies to Linux as well, but you could try it yourself.
Update 2:
In answer to the comments from #DennisCheung:
From IBM (my emphasis):
The table shows both the maximum Java heap possible and a recommended limit for the maximum Java heap size setting ... It is important to have more physical memory than is required by all of the processes on the machine combined to prevent paging or swapping. Paging reduces the performance of the system and affects the performance of the Java memory management system.
I wrote an app for the production team to measure their scores. It runs fine for 2-3 weeks, and then the machine that the copies are running on slows down; a restart fixes it.
What are the best practice steps for fixing this?
You need to analyse the Heap and find out what objects are being retained in there that shouldn't be.
One option:
Try reducing the -Xmx max heap size to expedite an OutOfMemoryError, add the -XX:+HeapDumpOnOutOfMemoryError option to the JVM at startup, and then load the generated heap dump into something like Eclipse Memory Analyzer.
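For instance (app.jar and the dump path are placeholders, not from the original answer):

java -Xmx256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar app.jar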
Another option:
Dump the heap from your running process using jmap (probably needs sudo privileges)
jmap -heap:format=b <pid>
and again, load the binary heap dump into jhat or Eclipse Memory Analyzer. (On JDK 6 and later, the equivalent command is jmap -dump:format=b,file=heap.bin <pid>.)
If your app is slowing down but not throwing an OutOfMemoryError it is likely that you don't have a leak but you do need to do some JVM tuning because it's spending too much time doing GC.
You should be monitoring GC collection times (you can log them using -Xloggc:/tmp/gc.out) or you can use jstat to see how often GC takes place and how long it takes.
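For example, this prints a GC utilization summary for the process every second (<pid> is a placeholder, as above):

jstat -gcutil <pid> 1000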
If you have an application with lots of medium-lived objects, is the young generation big enough (-XX:NewRatio=N)? If not, your app will spend too long promoting objects to the old generation only to have to GC them shortly after (GC in the old gen is expensive relative to the new gen, especially when you have fragmented memory).
Also - have you enabled the CMS collector? If you have a multi-core machine I suggest you do (-XX:+UseConcMarkSweepGC).
There are no memory leaks in Java in the traditional sense unless you are using JNI.
Memory leak in Java usually refers to creating referenced objects that you are no longer using. The symptom typically is that the memory usage of the application keeps growing. Do you see the memory usage growing?
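A minimal sketch of that kind of leak (all names are mine, purely illustrative): objects that are no longer needed stay strongly reachable, so the GC can never free them and the heap only grows:

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Grows forever: nothing is ever removed from this list.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024]; // 1MB of work data per request
        CACHE.add(buffer); // still referenced after the request is done
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest(); // eventually: OutOfMemoryError: Java heap space
        }
    }
}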
You would do well to search for the exact same question in Google and follow the links.
The best practice to address it is usually to use a profiler to check your allocations. It may also point at performance bottlenecks not caused by the "memory leaks".
You can check the memory the JVM requires by using an OS utility such as a task manager or top.
You can use a profiler to check the memory of your java code, e.g. Java VisualVM.
Keep in mind that Java uses garbage collection, so the only way of "memory leakage" is by holding references to (a lot of) unused objects. Josh Bloch's Effective Java item 6 (Eliminate obsolete object references) explains these situations and how to prevent them very well.
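A condensed sketch of the idea behind that item (simplified here, with no resizing or bounds checking): a stack whose popped slots keep strong references to objects the caller is finished with, alongside the one-line fix:

public class LeakyStack {
    private Object[] elements = new Object[64];
    private int size;

    public void push(Object e) { elements[size++] = e; }

    public Object popLeaky() {
        return elements[--size]; // slot still references the object: obsolete reference
    }

    public Object popFixed() {
        Object result = elements[--size];
        elements[size] = null; // clear the slot so the GC can reclaim the object
        return result;
    }
}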
You can also use further methods to check for this kind of "memory leakage", e.g. static analysis and pluggable type systems or jvm memory options.
One good way to track down memory issues is to enable garbage collection logging by adding the following options to java at startup: -verbose:gc, -XX:+PrintGCDetails and -XX:+PrintGCTimeStamps. Then you can analyze how the garbage collector behaves, i.e. how often the GC runs, how long it takes to reclaim memory, how much memory is reclaimed, and whether the used memory of your application keeps increasing.
Here's a document explaining the gc logging output:
GC tuning guide for Java 6
I want to find a memory leak in a Java 1.5 application. I use JProfiler for profiling.
I see, using the Windows Task Manager, that the VM size for my application is about 790000KB (increased from approx 300000KB). In the profiler I see that the allocated heap is 266MB (also increasing).
It's probably a rookie question, but what else can occupy so much memory besides the heap, so that the VM size (or private bytes size) goes to approx 700MB?
I should mention that there are approx 1200 threads running, which can occupy quite some memory according to an answer from here, but I think there is still quite a gap up to 700MB. By the way, how can I see how much memory the thread stacks occupy?
Thanks.
The JVM can use a lot of virtual memory which may not use resident memory. On startup it allocates the heap and maps in its shared libraries. Classes which are loaded use PermGen space. An application can use direct memory, which can be as large as the heap maximum. A stack is also allocated for each thread that is created. In each case, until this memory is used it might not be committed to the application, i.e. it does not use physical memory. As the application warms up, more of the virtual memory can become physical memory.
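As a rough worked example for the thread stacks: assuming a typical 32-bit default stack size in the region of 256-512KB (configurable with -Xss), 1200 threads would reserve roughly 1200 × 512KB ≈ 600MB of address space on top of the heap, which alone could account for most of the gap between the 266MB heap and the ~700MB process size.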
If you believe your JVM is not running efficiently, the first thing I would try is Java 6 which has had many fixes and improvements since the last release of Java 5.0.