How do you find a memory leak in Java (using, for example, JHat)? I have tried to load the heap dump up in JHat to take a basic look. However, I do not understand how I am supposed to be able to find the root reference (ref) or whatever it is called. Basically, I can tell that there are several hundred megabytes of hash table entries ([java.util.HashMap$Entry or something like that), but maps are used all over the place... Is there some way to search for large maps, or perhaps find general roots of large object trees?
[Edit]
Ok, I've read the answers so far but let's just say I am a cheap bastard (meaning I am more interested in learning how to use JHat than to pay for JProfiler). Also, JHat is always available since it is part of the JDK. Unless of course there is no way with JHat but brute force, but I can't believe that can be the case.
Also, I do not think I will be able to actually modify the code (adding logging of all map sizes) and run it for long enough for me to notice the leak.
I use the following approach to finding memory leaks in Java. I've used JProfiler with great success, but I believe that any specialized tool with graphing capabilities (diffs are easier to analyze in graphical form) will work.
Start the application and wait until it gets to a "stable" state, when all the initialization is complete and the application is idle.
Run the operation suspected of producing a memory leak several times to allow any cache or DB-related initialization to take place.
Run GC and take memory snapshot.
Run the operation again. Depending on the complexity of the operation and the size of the data being processed, the operation may need to be run anywhere from several to many times.
Run GC and take memory snapshot.
Run a diff for 2 snapshots and analyze it.
Basically, the analysis should start from the greatest positive diff by, say, object type, and find what causes those extra objects to stick in memory.
For web applications that process requests in several threads, the analysis gets more complicated, but the general approach still applies.
I have done quite a number of projects specifically aimed at reducing the memory footprint of applications, and this general approach, with some application-specific tweaks and tricks, has always worked well.
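If you prefer to script the "run GC and take a memory snapshot" steps instead of clicking through a profiler, a minimal sketch along these lines should work on Sun/Oracle HotSpot JVMs via the HotSpotDiagnosticMXBean; the dump file names are just placeholders:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapSnapshot {
    static void dump(String file) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(file, true);   // live=true keeps only reachable objects, like jmap's :live
    }

    public static void main(String[] args) throws Exception {
        dump("before.hprof");        // snapshot 1: after warm-up
        // ... run the suspect operation a number of times here ...
        dump("after.hprof");         // snapshot 2: diff the two dumps in your tool of choice
    }
}

The diff itself is still easiest to read in a graphical tool, as described above.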
Questioner here. I have to say, getting a tool that does not take 5 minutes to answer any click makes it a lot easier to find potential memory leaks.
Since people are suggesting several tools (I only tried VisualVM, since I got that with the JDK, and a JProbe trial), I thought I should suggest a free/open source tool built on the Eclipse platform, the Memory Analyzer (sometimes referred to as the SAP Memory Analyzer), available at http://www.eclipse.org/mat/ .
What is really cool about this tool is that it indexed the heap dump when I first opened it, which allowed it to show data like retained heap without waiting 5 minutes for each object (pretty much all operations were tons faster than the other tools I tried).
When you open the dump, the first screen shows you a pie chart with the biggest objects (counting retained heap), and you can quickly navigate down to the objects that are too big for comfort. It also has a "find likely leak suspects" report, which I reckon can come in handy, but since the navigation was enough for me I did not really get into it.
A tool is a big help.
However, there are times when you can't use a tool: the heap dump is so huge it crashes the tool, you are trying to troubleshoot a machine in some production environment to which you only have shell access, etc.
In that case, it helps to know your way around the hprof dump file.
Look for SITES BEGIN. This shows you what objects are using the most memory. But the objects aren't lumped together solely by type: each entry also includes a "trace" ID. You can then search for that "TRACE nnnn" to see the top few frames of the stack where the object was allocated. Often, once I see where the object is allocated, I find a bug and I'm done. Also, note that you can control how many frames are recorded in the stack with the options to -Xrunhprof.
If you check out the allocation site, and don't see anything wrong, you have to start backward chaining from some of those live objects to root objects, to find the unexpected reference chain. This is where a tool really helps, but you can do the same thing by hand (well, with grep). There is not just one root object (i.e., object not subject to garbage collection). Threads, classes, and stack frames act as root objects, and anything they reference strongly is not collectible.
To do the chaining, look in the HEAP DUMP section for entries with the bad trace id. This will take you to an OBJ or ARR entry, which shows a unique object identifier in hexadecimal. Search for all occurrences of that id to find who's got a strong reference to the object. Follow each of those paths backward as they branch until you figure out where the leak is. See why a tool is so handy?
Static members are a repeat offender for memory leaks. In fact, even without a tool, it'd be worth spending a few minutes looking through your code for static Map members. Can a map grow large? Does anything ever clean up its entries?
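As a hedged illustration of the pattern to look for (the class and field names here are invented), a static map that only ever grows is reachable from a GC root, so nothing put into it is ever collected:

import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {
    // The map is reachable from the class itself (a GC root), so every entry
    // stays live until something explicitly removes it.
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static void remember(String sessionId, byte[] payload) {
        CACHE.put(sessionId, payload);   // called on every request...
    }

    public static void forget(String sessionId) {
        CACHE.remove(sessionId);         // ...but is this ever called?
    }
}

If nothing reliably calls the cleanup path, bound the map, evict old entries, or consider weaker reachability (e.g. WeakHashMap) where those semantics fit.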
Most of the time in enterprise applications, the Java heap given is larger than the ideal maximum size of 12 to 16 GB. I have found it hard to make the NetBeans profiler work directly on these big Java apps.
But usually this is not needed. You can use the jmap utility that comes with the JDK to take a "live" heap dump; that is, jmap will dump the heap after running a GC. Do some operation on the application, wait till the operation is completed, then take another "live" heap dump. Use tools like Eclipse MAT to load the heap dumps, sort the histogram, and see which objects have increased or which are the largest. This will give a clue.
su processuser
/bin/jmap -dump:live,format=b,file=/tmp/2930javaheap.hprof 2930 (pid of the process)
There is only one problem with this approach: huge heap dumps, even with the live option, may be too big to transfer out to a development laptop, and may need a machine with enough memory/RAM to open.
That is where the class histogram comes into the picture. You can dump a live class histogram with the jmap tool. This gives only the class histogram of memory usage; it won't have the information to chain the references. For example, it may put char arrays at the top and the String class somewhere below, and you have to draw the connection yourself.
jdk/jdk1.6.0_38/bin/jmap -histo:live 60030 > /tmp/60030istolive1330.txt
Instead of taking two heap dumps, take two class histograms as described above; then compare the class histograms and see which classes are increasing. See if you can relate the Java classes to your application classes. This will give a pretty good hint. Here is a Python script that can help you compare two jmap histogram dumps: histogramparser.py
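If you would rather stay in Java than use a Python script, here is a rough sketch of the same comparison; the column layout it assumes (num:, #instances, #bytes, class name) matches typical jmap -histo output, but check it against your JDK's version:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

public class HistogramDiff {

    // Parses "   1:    123456   9876543  [C" style lines into class name -> byte count.
    static Map<String, Long> parse(String file) throws Exception {
        Map<String, Long> bytesByClass = new HashMap<String, Long>();
        BufferedReader in = new BufferedReader(new FileReader(file));
        String line;
        while ((line = in.readLine()) != null) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length < 4 || !cols[0].endsWith(":")) continue;   // skip headers, separators, totals
            bytesByClass.put(cols[3], Long.parseLong(cols[2]));
        }
        in.close();
        return bytesByClass;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Long> before = parse(args[0]);
        Map<String, Long> after = parse(args[1]);
        for (Map.Entry<String, Long> e : after.entrySet()) {
            long old = before.containsKey(e.getKey()) ? before.get(e.getKey()) : 0L;
            long growth = e.getValue() - old;
            if (growth > 0) {
                System.out.println(growth + " bytes growth in " + e.getKey());
            }
        }
    }
}

Sort the output by the growth figure to see the biggest growers first.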
Finally, tools like JConsole and VisualVM are essential to see memory growth over time and to see if there is a memory leak. Sometimes your problem may not be a memory leak but high memory usage. For this, enable GC logging, use a more advanced and newer compacting GC like G1GC, and use JDK tools like jstat to watch the GC behaviour live:
jstat -gccause pid <optional time interval>
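To enable the GC logging mentioned above on pre-JDK 9 HotSpot JVMs, flags along these lines are typical (the log path and jar name are placeholders):
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log -jar yourapp.jar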
Other references to Google for: jhat, jmap, Full GC, humongous allocation, G1GC.
There are tools that should help you find your leak, like JProbe, YourKit, AD4J or JRockit Mission Control. The last is the one that I personally know best. Any good tool should let you drill down to a level where you can easily identify what leaks, and where the leaking objects are allocated.
Using Hashtables, HashMaps or similar is one of the few ways that you can actually leak memory in Java at all. If I had to find the leak by hand, I would periodically print the size of my HashMaps, and from there find the one where I add items and forget to delete them.
Well, there's always the low-tech solution of adding logging of the size of your maps when you modify them, then searching the logs for which maps are growing beyond a reasonable size.
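A hedged sketch of that low-tech approach (all names here are invented): register each suspect map once and log the sizes periodically.

import java.util.Map;
import java.util.WeakHashMap;

public class MapSizeLogger {
    // Weak keys, so the logger itself never keeps a map (or its contents) alive.
    private static final Map<Map<?, ?>, String> WATCHED =
            new WeakHashMap<Map<?, ?>, String>();

    public static synchronized void watch(String name, Map<?, ?> map) {
        WATCHED.put(map, name);
    }

    public static synchronized void logSizes() {
        for (Map.Entry<Map<?, ?>, String> e : WATCHED.entrySet()) {
            System.out.println(e.getValue() + " size=" + e.getKey().size());
        }
    }
}

Call watch() where each suspect map is created and logSizes() from a timer or at the end of each request; whichever name keeps growing in the log is the prime suspect.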
NetBeans has a built-in profiler.
You really need to use a memory profiler that tracks allocations. Take a look at JProfiler - their "heap walker" feature is great, and they have integration with all of the major Java IDEs. It's not free, but it isn't that expensive either ($499 for a single license) - you will burn $500 worth of time pretty quickly struggling to find a leak with less sophisticated tools.
You can find out by measuring the memory usage after calling the garbage collector multiple times:
Runtime runtime = Runtime.getRuntime();
long lastSample = System.currentTimeMillis();
while (true) {
    // ... the work you suspect of leaking goes here ...
    long now = System.currentTimeMillis();
    if (now - lastSample >= 4000) {   // sample roughly every 4 seconds
        lastSample = now;
        System.gc();                  // request a GC so the figure reflects live objects
        float usage = (float) (runtime.totalMemory() - runtime.freeMemory()) / 1024 / 1024;
        System.out.println("Used memory: " + usage + "Mb");
    }
}
If the output numbers stay roughly equal, there is no memory leak in your application; but if you see a steady difference between the memory usage numbers (increasing numbers), there is a memory leak in your project. For example:
Used memory: 14.603279Mb
Used memory: 14.737213Mb
Used memory: 14.772224Mb
Used memory: 14.802681Mb
Used memory: 14.840599Mb
Used memory: 14.900841Mb
Used memory: 14.942261Mb
Used memory: 14.976143Mb
Note that sometimes it takes a while to release memory held by things like streams and sockets. You should not judge by the first outputs; you should test over a reasonable amount of time.
Check out this screencast about finding memory leaks with JProfiler.
It's a visual explanation of @Dima Malenko's answer.
Note: Though JProfiler is not freeware, the trial version can deal with the current situation.
As most of us use Eclipse already for writing code, why not use the Memory Analyzer Tool (MAT) in Eclipse? It works great.
The Eclipse MAT is a set of plug-ins for the Eclipse IDE which provides tools to analyze heap dumps from a Java application and to identify memory problems in the application.
This helps the developer to find memory leaks with the following features:
Acquiring a memory snapshot (Heap Dump)
Histogram
Retained Heap
Dominator Tree
Exploring Paths to the GC Roots
Inspector
Common Memory Anti-Patterns
Object Query Language
I have recently dealt with a memory leak in our application. Sharing my experience here.
The garbage collector removes unreferenced objects periodically, but it never collects the objects that are still being referenced. This is where memory leaks can occur.
Here are some options for finding out which objects are still referenced.
Use jvisualvm, which is located in the JDK/bin folder.
Option: watch the heap space.
If you see that the heap space keeps increasing, there is definitely a memory leak.
To find out the cause, you can use the memory sampler under the Sampler tab.
Get a Java heap histogram by using jmap (which is also available in the JDK/bin folder) at different points in the application's lifetime:
jmap -histo <pid> > histo1.txt
Here the objects can be analyzed. If some of the objects are never garbage collected, that is the potential memory leak.
You can read about some of the most common causes of memory leaks in this article: Understanding Memory Leaks in Java
I have a problem where the JVM is not able to perform GC in time and the application freezes. The "solution" for that is to connect to the application using JConsole and suggest to the JVM that it perform garbage collection. I do not have to say that this is very poor behavior for an application. Is there some option for the JVM that suggests it perform GC sooner/more often? Maybe there is some other real solution to this problem?
The problem appears to be not a lack of memory, but that the GC is not able to do a collection in time before new data is sent to the application. That is because the GC appears to start collecting too late. If it is triggered early enough via the System.gc() button in JConsole, the problem does not occur.
The young generation is collected by 'PS Scavenge', which is a parallel collector.
The old generation is collected by 'PS MarkSweep', which is a parallel mark-and-sweep collector.
You should check for memory leaks.
I'm pretty sure you won't get an OutOfMemoryError unless there's no memory to be released and no more memory available.
There is System.gc() that does exactly what you described: It suggests to the JVM that a garbage collection should take place. (There are also command-line arguments for the JVM that can serve as directives for the memory manager.)
However, if you're running out of memory during an allocation, it typically means that the JVM did attempt a garbage collection first and it failed to release the necessary memory. In that case, you probably have memory leaks (in the sense of keeping unnecessary references) and you should get a memory profiler to check that. This is important because if you have memory leaks, then more frequent garbage collections will not solve your problem - except that maybe they will postpone its manifestation, giving you a false sense of security.
From the Java specification:
OutOfMemoryError: The Java Virtual Machine implementation has run out of either virtual or physical memory, and the automatic storage manager was unable to reclaim enough memory to satisfy an object creation request.
You can deploy JavaMelody on your server and add your application to it; it will give you a detailed report of your memory usage and memory leaks. With this you will be able to optimize your system and code correctly.
I guess either your application requires more memory to run efficiently (try tuning your JVM by setting parameters like -Xms512M -Xmx1024M),
or
there is a memory leak which is exhausting the memory.
You should check the memory consumption pattern of your application, e.g. what memory it occupies when it is processing heavily versus remaining idle.
If you observe a constant surge in memory peaks, it could point towards a possible memory leak.
One of the best threads on the memory leak issue is How to find a Java Memory Leak
Another good one is http://www.ibm.com/developerworks/library/j-leaks/
Additionally,
you may receive an OOME if you're loading a lot of classes (let's say, all classes present in your rt.jar). Since loaded classes reside in PermGen rather than heap memory, you may also want to increase your PermGen size using the -XX:MaxPermSize switch.
And, of course, you're free to choose a garbage collector – ParallelGC, ConcMarkSweepGC (CMS) or G1GC (G1).
Please be aware that there are APIs in Java that may cause memory leaks by themselves (without any programmer error), e.g. java.lang.String#substring() (see here).
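As a hedged, self-contained illustration of the substring issue (the class name and sizes are made up; the sharing behaviour applies to JDK 6 and earlier, before substring was changed to copy its characters):

import java.util.HashMap;
import java.util.Map;

public class SubstringRetentionDemo {
    private static final Map<String, Integer> CACHE = new HashMap<String, Integer>();

    public static void main(String[] args) {
        // Simulate a very long line, e.g. one read from a file.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5000000; i++) sb.append('x');
        String hugeLine = sb.toString();

        // On JDK 6 and earlier, this 3-character key shares hugeLine's char[]
        // and therefore keeps the whole multi-megabyte buffer alive.
        String leakyKey = hugeLine.substring(0, 3);

        // Defensive copy: new String(...) trims the backing array to 3 characters.
        String safeKey = new String(hugeLine.substring(0, 3));

        CACHE.put(safeKey, 1);   // only the small copy is retained by the cache
    }
}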
If your application freezes, but gets unfrozen by a forced GC, then your problem is very probably not the memory, but some other resource leak, which is alleviated by running finalizers on dead objects. Properly written code must never rely on finalizers to do the cleanup, so try to find any unclosed resources in your application.
You can start the JVM with more memory:
java -Xms512M -Xmx1024M
This will start the JVM with 512 MB of memory, allowing it to grow to a gigabyte.
You can use System.gc() to suggest to the VM to run the garbage collector. There is no guarantee that it will run immediately.
I doubt if that will help, but it might work. Another thing you could look at is increasing the maximum memory size of the JVM. You can do this by giving the command line argument -Xmx512m. This would give 512 megabytes of heap size instead of the default 128.
You can use JConsole to view the memory usage of your application. This can help to see how the memory usage develops which is useful in detecting memory leaks.
I have a server application that, in rare occasions, can allocate large chunks of memory.
It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context.
The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx.
That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation.
Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release very early the memory I'm going to need.
Long story short: I need a way (a command line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold; I don't care if this slows my application down every once in a while.
All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly).
I'd appreciate your suggestions,
Silvio
P.S. I'm working on a way to avoid large allocations, but it could require a long time and meanwhile my app needs a little stability
UPDATE: analyzing the app with jvisualvm, I can see that the problem is in the old generation
From here (this is a 1.4.2 page, but the same option should exist in all Sun JVMs):
assuming you're using the CMS garbage collector (which I believe the server turns on by default), the option you want is
-XX:CMSInitiatingOccupancyFraction=<percent>
where <percent> is the percentage of memory in use that will trigger a full GC.
Insert standard disclaimers here that messing with GC parameters can give you severe performance problems, varies wildly by machine, etc.
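As a hedged example of how these flags might be combined (the flag names are real HotSpot options; the value 60 and the jar name are only illustrative):
java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly -jar yourserver.jar
The -XX:+UseCMSInitiatingOccupancyOnly flag stops the VM from adjusting the threshold on its own.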
When you allocate large objects that do not fit into the young generation, they are immediately allocated in the tenured generation space. This space is only GC'ed when a full GC is run, which is what you are trying to force.
However I am not sure this would solve your problem. You say "JVM is not able to perform a GC quickly enough". Even if your allocations come in bursts, each allocation will cause the VM to check if it has enough space available to do it. If not - and if the object is too large for the young generation - it will cause a full GC which should "stop the world", thereby preventing new allocations from taking place in the first place. Once the GC is complete, your new object will be allocated.
If shortly after that the second large allocation is requested in your burst, it will do the same thing again. Depending on whether the initial object is still needed, it will either be able to succeed in GC'ing it, thereby making room for the next allocation, or fail if the first instance is still referenced.
You say "I need a way [...] to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold". This by definition can only succeed, if that "good amount of memory" is not referenced by anything in your application anymore.
From what I understand here, you might have a race condition which you might sometimes avoid by interspersing manual GC requests. In general you should never have to worry about these things - from my experience an OutOfMemoryError only occurs if there are in fact too many allocations to be fit into the heap concurrently. In all other situations the "only" problem should be a performance degradation (which might become extreme, depending on the circumstances, but this is a different problem).
I suggest you do further analysis of the exact problem to rule this out. I recommend the VisualVM tool that comes with Java 6. Start it and install the VisualGC plugin. This will allow you to see the different memory generations and their sizes. Also there is a plethora of GC related logging options, depending on which VM you use. Some options have been mentioned in other answers.
The other options for choosing which GC to use and how to tweak thresholds should not matter in your case, because they all depend on enough memory being available to contain all the objects that your application needs at any given time. These options can be helpful if you have performance problems related to heavy GC activity, but I fear they will not lead to a solution in your particular case.
Once you are more confident in what is actually happening, finding a solution will become easier.
Do you know which of the garbage collection pools is growing too large, i.e. eden vs. survivor space? (Try the JVM option -Xloggc:<file>, which logs GC status to a file with timestamps.) When you know this, you should be able to tweak the size of the affected pool with one of the options mentioned here: hotspot options for Java 1.4
I know that page is for the 1.4 JVM; I can't seem to find the same -X options in my current 1.6 install's help output, unless setting those individual pool sizes is a non-standard feature!
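For example, a hedged sketch of what such tweaking might look like (these are real HotSpot flags, but the sizes are only illustrative and the right values depend on what your GC log shows):
java -Xloggc:/tmp/gc.log -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8 -jar yourapp.jar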
The JVM is only supposed to throw an OutOfMemoryError after it has attempted to release memory via garbage collection (according to both the API docs for OutOfMemoryError and the JVM specification). Therefore your attempts to force garbage collection shouldn't make any difference. So there might be something more significant going on here - either a problem with your program not properly clearing references or, less likely, a JVM bug.
There's a very detailed explanation of how GC works here and it lists parameters to control memory available to different memory pools/generations.
Try using the -server option. It will enable the parallel GC, and you will get some performance increase if you use a multi-core processor.
Have you tried playing with G1 gc? It should be available in 1.6.0u14 onwards.
What is the best way to tune a server application written in Java that uses a native C++ library?
The environment is a 32-bit Windows machine with 4GB of RAM. The JDK is Sun 1.5.0_12.
The Java process is given 1024MB of memory (-Xmx) at startup but I often see OutOfMemoryErrors due to lack of heap space. If the memory is increased to 1200MB, the OutOfMemoryErrors occur due to lack of swap space. How is the memory shared between the JVM and the native process?
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
I had lots of trouble with that setting (Java on 32-bit systems, MS Windows and others), and it was all solved by reserving just under 1 GB of RAM for the JVM.
Otherwise, as stated, the actual occupied memory in the system for that process would be over 2 GB; at that point I was having 'silent deaths' of the process: no errors, no warnings, just the process terminating very quietly.
I got more stability and performance running several JVMs (each with under 1 GB of RAM) on the same system.
I found some info on JNI memory management here, and here's the JVM JNI section on memory management.
Well, having 3 GB of user space instead of 2 GB should help, but if you're having problems running out of swap space at 2 GB, I think 3 GB is just going to make it worse. How big is your pagefile? Is it maxed out?
You can get a better idea of your heap allocation by hooking up JConsole to your JVM.
How is the memory shared between the JVM and the native process?
Sun's JVM's garbage collector is mark-and-sweep, with options to enable concurrent and incremental GC.
Well, more accurately, it's staged, and the above only applies to tenured (long-lived) objects. For young objects, GC is still done with a stop-and-copy collector, which is much better for working with short-lived objects (and all typical Java programs create many short-lived objects).
A copying collector walks over all elements in the heap, copying them to a new heap if they are referenced, and then discards the former heap. Thus 1M of live objects requires up to 2M of real memory: if every object is alive, there will be two copies of everything during garbage collection.
So the JVM requires far more system memory than is available to the code running within the VM, because there is a substantial overhead to management and garbage collection.
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
The /3GB allows user virtual memory address space to be 3GB, but only for executables whose headers are marked with IMAGE_FILE_LARGE_ADDRESS_AWARE. As far as I am aware, Sun's java.exe is not. I don't have a Windows system here, so I can't verify.
You haven't explained your problem well enough, unfortunately. The real question is --- why is the Java process growing so much. Do you have a memory leak? Do you have a real reason to have that much data in the JVM?
Is the C++ library allocating its own memory from the C stack, or is it allocating memory from the Java object space, or is it doing something else entirely?