Java System.gc() and Memory Leakage

I have read up and searched through the forums about leaking memory and gradually increasing RAM usage. I tried calling System.gc() every 60 seconds in my program, and it seems to be working, given that my RAM usage drops on every call. Why is it considered a bad idea to use this method? Every post I have read seemed to explain only vaguely why the method does not free up memory, yet my program seems to say otherwise. Some even said the method does nothing at all but suggest that the garbage collector clean things up.
NOTE: I know my leak is not from static members, because I removed them from my entire project and the RAM still increased. I would post my code, but it is rather large, so I doubt anyone is up to reading it.
Thanks for the help.

As you stated, System.gc() is just a suggestion. It's not guaranteed to force a garbage collection, though in practice it frequently does trigger one.
The Java garbage collector runs on its own periodically. If you see your memory increasing over time and never being reclaimed, you have a memory leak. Calling System.gc() won't fix that: if your memory is leaking, eventually there will be nothing left to collect.
In general, you shouldn't need to force GC. As mentioned, the GC runs on its own, and you can tweak its behavior - http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html.
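If you want to confirm the leak rather than mask it, here is a minimal sketch (class name and interval are illustrative) that logs used heap on the same 60-second cadence instead of calling System.gc():

public class HeapLogger {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            // A steadily rising floor across many GC cycles suggests a leak;
            // a sawtooth pattern is normal allocation behavior.
            System.out.println("Used heap: " + usedMb + " MB");
            Thread.sleep(60000); // same 60-second cadence as the System.gc() calls
        }
    }
}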

The original problem comes from a memory leak.
The symptom is:
* because of the memory leak, there is not enough free memory space
* so the JVM tries to GC again and again
* but it still does not recover enough memory, so it GCs again and again
So calling System.gc(), or any kind of GC tuning, is not helpful.
To fix this problem, you have to find where the leakage point is.
The JVM comes with tools that dump the current memory footprint (a heap dump).
You can find the leakage point by analyzing this.
For more information, please refer to this - http://www.oracle.com/technetwork/java/javase/memleaks-137499.html
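If it helps, here is a minimal sketch of taking such a heap dump programmatically via the HotSpot-specific diagnostic MXBean (the output file name is an illustrative assumption):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects (forces a GC first)
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("leak-investigation.hprof"); // open this file in MAT or jvisualvm
    }
}

The jmap tool can produce the same dump externally:
jmap -dump:live,format=b,file=heap.hprof <pid>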

Related

How to find which finalizer is time-consuming

I am working on an application whose purpose is to compute reports as fast as possible.
My application uses a big amount of memory; more than 100 GB.
Since our last release, I have noticed a big performance slowdown. My investigation shows that, during the computation, I get many garbage collections lasting between 40 and 60 seconds!!!
(JMC tells me they are SerialOld, but I don't know exactly what that means) and, of course, while the JVM is garbage collecting, the application is completely frozen.
I am now investigating the origin of these garbage collections... and it is very hard work.
I suspect that, if these garbage collections are so long, it is because they are spending a lot of time in finalize() methods (I know that, among all the libraries we integrate from other teams, some use finalizers).
However, I don't know how to confirm (or refute) this hypothesis: how can I find which finalizer is time-consuming?
I am looking for a good tool, or even a good methodology.
Here is data collected via JVisualVM.
As you can see, I always have many "Pending Finalizers" when I have a
long Old Garbage collection.
What is surprising is that, when I am using JVisualVM, the graph above
scrolls regularly from right to left. When the Old Garbage collection is
triggered, the scrolling stops (up to here it looks normal; this is
stop-the-world). However, when the scrolling suddenly restarts, it does
so not at the end of the Old Garbage collection but at the end of the Pending Finalizers.
This leads me to think that the finalizers were blocking the JVM.
Does anyone have an explanation for this?
Thank you very much
Philippe
My application uses a big amount of memory; more than 100 GB.
JMC tells me they are SerialOld, but I don't know exactly what that means
If you are using the serial collector for a 100 GB heap, then long pauses are to be expected, because the serial collector is single-threaded and one core can only chomp through so much memory per unit of time.
Simply choosing any one of the multi-threaded collectors should yield lower pause times.
However, I don't know how to confirm (or refute) this hypothesis: how can I find which finalizer is time-consuming?
Generally: gather more data. For GC-related things you need to enable GC logging; for time spent in Java code (be it your application or 3rd-party libraries) you need a profiler.
Here is what I would do to investigate your finalizer theory.
Start the JVM using your favorite Java profiler.
Leave it running for long enough to get a full heap.
Start the profiler.
Trigger garbage collection.
Stop the profiler.
Now you can use the profiler information to figure out which (if any) finalize methods are using a large amount of time.
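As a rough cross-check outside the profiler, you can watch the finalizer backlog and time an explicit finalization pass. This is only a crude indicator, since System.runFinalization() is itself just a request; the class name is illustrative:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerProbe {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Objects pending finalization: "
                + memory.getObjectPendingFinalizationCount());

        long start = System.nanoTime();
        System.runFinalization(); // asks the JVM to run pending finalizers
        long elapsedMs = (System.nanoTime() - start) / 1000000;

        // A large pending count that drains slowly points at expensive finalize() methods.
        System.out.println("Finalization pass took ~" + elapsedMs + " ms; pending now: "
                + memory.getObjectPendingFinalizationCount());
    }
}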
However, I suspect that the real problem will be a memory leak, and that your JVM is getting to the point where the heap is filling up with unreclaimable objects. That could explain the frequent "SerialOld" garbage collections.
Alternatively, this could just be a big heap problem. 100 GB is ... big.

What is the effect of System.gc() in J2ME?

I'm developing a mobile application in J2ME, and I'm facing a memory problem: I get an out-of-memory error. Please give me ideas on how to get rid of this kind of error/exception, and on garbage collection and memory management in J2ME.
I also have a doubt about the effect of System.gc() in J2ME.
What is the difference between System.gc() and Runtime.getRuntime().gc() in J2ME/Java?
Thanks & Regards,
Calling System.gc() will not fix an "OutOfMemoryError". An OOME only happens after the system has made a "best effort" attempt to release memory by garbage collecting (and other means) ... and failed to free enough memory to continue.
The way to fix OOME errors is to find out what is using all of the memory and try to do something about it.
Possible problems that can lead to OOMEs include:
Memory leaks; i.e. something in your app is causing lots of objects to remain "reachable" after they are no longer required.
Memory hungry data structures or algorithms.
Not enough memory to run the app with that input data.
Your first step to solving this problem should be to use a profiler to see if there are any significant leaks, and to find out more generally what data structures are using all of the memory.
The Javadoc for System.gc() says:
Runs the garbage collector.
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to reclaim space from all discarded objects.
The call System.gc() is effectively equivalent to the call:
Runtime.getRuntime().gc()
-> http://download.oracle.com/javase/6/docs/api/java/lang/System.html#gc%28%29
System.gc() and Runtime.getRuntime().gc() are equivalent. They suggest a garbage collection, but there is no guarantee that this will actually happen.
So, don't rely on it, and in fact, it is very rare that you want to call this at all.
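A trivial sketch illustrating the equivalence and the lack of any guarantee (the printed numbers may or may not change):

public class GcHint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Free before: " + rt.freeMemory());

        System.gc();               // both calls are the same suggestion;
        Runtime.getRuntime().gc(); // System.gc() simply delegates to Runtime

        // The JVM may or may not have collected anything by this point.
        System.out.println("Free after:  " + rt.freeMemory());
    }
}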

Tune Java GC so that it immediately throws an OOME rather than slowing down indefinitely

I've noticed that sometimes, when memory is nearly exhausted, the GC tries to complete at any performance cost (nearly freezing the program, sometimes for multiple minutes), rather than just throwing an OOME (OutOfMemoryError) immediately.
Is there a way to tune the GC in this respect?
Slowing the program down to nearly zero speed makes it unresponsive. In certain cases it would be better to get the response "I'm dead" rather than no response at all.
Something like what you're after is built into recent JVMs.
If you:
are using Hotspot VM from (at least) Java 6
are using the Parallel or Concurrent garbage collectors
have the option UseGCOverheadLimit enabled (it's on by default with those collectors, so more specifically if you haven't disabled it)
then you will get an OOM before actually running out of memory: if more than 98% of recent time has been spent in GC for recovery of <2% of the heap size, you'll get a preemptive OOM.
Tuning these parameters (the 98% in particular) sounds like it would be useful to you; however, as far as I'm aware there is no way to tune those thresholds.
However, check whether you qualify under the three points above; if you're not currently using those collectors with that flag, switching to them may help your situation.
It's worth reading the HotSpot JVM tuning guide, which can be a big help with this stuff.
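For example, a start-up line under those assumptions might look like this (heap size and main class are placeholders for illustration):
java -XX:+UseParallelGC -XX:+UseGCOverheadLimit -Xmx2g MyApp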
I am not aware of any way to configure the Java garbage collector in the manner you describe.
One way might be for your application to proactively monitor the amount of free memory, e.g. using Runtime.freeMemory(), and declare the "I'm dead" condition if that drops below a certain threshold and can't be rectified with a forced garbage collection cycle.
The idea is to pick the value for the threshold that's large enough for the process to never get into the situation you describe.
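A minimal sketch of such a watchdog (the threshold, the polling interval, and the reaction are illustrative assumptions):

public class MemoryWatchdog implements Runnable {
    private static final long THRESHOLD_BYTES = 64L * 1024 * 1024; // illustrative: 64 MB

    public void run() {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long headroom = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
            if (headroom < THRESHOLD_BYTES) {
                System.gc(); // a suggestion only; re-check afterwards
                headroom = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
                if (headroom < THRESHOLD_BYTES) {
                    System.err.println("I'm dead: less than " + THRESHOLD_BYTES + " bytes headroom");
                    // react here: refuse new work, alert, or shut down cleanly
                }
            }
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        Thread t = new Thread(new MemoryWatchdog());
        t.setDaemon(true);
        t.start();
        // ... rest of the application ...
    }
}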
I strongly advise against this; Java trying to GC rather than immediately throwing an OutOfMemoryError makes far more sense - don't make your application fall over unless every alternative has been exhausted.
If your application is running out of memory, you should increase your max heap size, or look at its performance in terms of memory allocation and see whether it can be optimised.
Some things to look at would be:
Use weak references in places where your objects would not be required if they are not referenced anywhere else.
Don't allocate larger objects than you need (i.e. storing a huge array of 100 objects when you are only going to access three of them through the array's lifecycle, or using a long datatype when you only need to store eight values).
Don't hold on to references to objects for longer than you need!
Edit: I think you misunderstand my point. If you accidentally leave a live reference to an object that no longer needs to be used, it will obviously still not be garbage collected. This has nothing to do with nulling references "just in case": a typical example would be using a large object for a specific purpose, but when it goes out of scope it is not GC'ed because a live reference has accidentally been left elsewhere, somewhere you don't know about, causing a leak. A classic instance of this is a hashtable used for lookups, which can be solved with weak references, as the entry becomes eligible for GC once it is only weakly reachable.
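For that hashtable case, one standard realization is java.util.WeakHashMap, whose keys do not keep their entries alive; a minimal sketch (the cache contents are illustrative):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheExample {
    public static void main(String[] args) {
        // Entries vanish once their key is no longer strongly referenced elsewhere.
        Map<Object, String> cache = new WeakHashMap<Object, String>();

        Object key = new Object();
        cache.put(key, "expensive derived value");
        System.out.println("Before: " + cache.size()); // 1

        key = null;  // drop the only strong reference to the key
        System.gc(); // a hint: the entry becomes eligible to go, not guaranteed

        System.out.println("After:  " + cache.size()); // usually 0 once the key is collected
    }
}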
Regardless, these are just general ideas off the top of my head on how to improve performance with memory allocation. The point I am trying to make is that asking how to throw an OutOfMemoryError sooner, rather than letting the Java GC try its best to free up space on the heap, is not a great idea IMO. Optimize your application instead.
Well, it turns out there has been a solution since Java 8u92:
-XX:+ExitOnOutOfMemoryError
When you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out of memory errors.
-XX:+CrashOnOutOfMemoryError
If this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled).
A good idea is to combine one of the above options with the good old -XX:+HeapDumpOnOutOfMemoryError
I tested these options, they actually work as expected!
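For instance, combining them (heap size and jar name are placeholders for illustration):
java -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -Xmx4g -jar server.jar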
Links
* See the feature description
* See the list of changes in that Java release

How to configure the JVM to wait instead of throwing an OutOfMemoryError

How can I make the JVM wait for the garbage collector instead of throwing an OutOfMemoryError? Is there some setting for the JVM, or other options (like code practices)?
I don't want to increase the JVM memory settings or tune the GC - I only want it to wait for GC with no OutOfMemoryError, because I know there are no memory leaks, just garbage preventing new allocation.
I'm afraid your question doesn't make a lot of sense.
An OutOfMemoryError is normally thrown after the GC has run and has been unsuccessful in reclaiming enough memory for you to proceed. Waiting for the GC to run (again) is unlikely to help. And if it doesn't help, the result is that your application will just freeze.
Besides, there isn't a way to do it.
You can probably tune the threshold for when the JVM will give up and throw OOM, but this is what the JVM does by design when it detects that garbage collection is not accomplishing anything. Note that the JVM will not throw an OOM because of bad timing or just because you've created a lot of objects. It will detect that it has repeatedly run GC and GC hasn't freed up any significant amount of memory.
Some possibilities:
You are using a lot of memory on a permanent basis. This isn't necessarily a memory leak, maybe you just load some huge data and don't realize how big it is in memory.
You have a memory leak, or perhaps "memory used in unexpected ways". Java offers lots of easy places to lose memory. I've been killed by ThreadLocal caches in a JSON library, and by failing to call new String(string) when appropriate.
Temporary data is drifting into PermGen because it doesn't act all that temporary.
You don't have any big problems, but you're pushing the envelope for the amount of memory you have allocated and you haven't tuned properly. Turn on the concurrent mark sweep garbage collector, turn on GC logging (see the example command below), and see whether the behavior matches your expectations of what the app is doing.
Lastly, run a profiler to see what you're using memory on. The first iteration of any program almost always has huge low hanging fruit to clean up.
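For the GC-logging suggestion above, a plausible pre-Java-9 HotSpot command line might look like this (the log file name and main class are placeholders):
java -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log MyApp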

Force full garbage collection when memory occupation goes beyond a certain threshold

I have a server application that, on rare occasions, can allocate large chunks of memory.
It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context.
The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx.
That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation.
Anyway, I'd prefer not to have to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold above which full GCs would be executed automatically, in order to release the memory I'm going to need very early.
Long story short: I need a way (a command-line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold. I don't care if this slows my application down every once in a while.
All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly).
I'd appreciate your suggestions,
Silvio
P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs a little stability.
UPDATE: analyzing the app with jvisualvm, I can see that the problem is in the old generation
From here (this is a 1.4.2 page, but the same option should exist in all Sun JVMs):
assuming you're using the CMS garbage collector (which I believe the server turns on by default), the option you want is
-XX:CMSInitiatingOccupancyFraction=<percent>
where <percent> is the percentage of memory in use that will trigger a full GC.
Insert standard disclaimers here that messing with GC parameters can give you severe performance problems, varies wildly by machine, etc.
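For example (the 70% threshold is an illustrative assumption to tune for your workload; note that without -XX:+UseCMSInitiatingOccupancyOnly the JVM may still adjust the trigger point on its own):
java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly MyApp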
When you allocate large objects that do not fit into the young generation, they are immediately allocated in the tenured generation space. This space is only GC'ed when a full-GC is run which you try to force.
However I am not sure this would solve your problem. You say "JVM is not able to perform a GC quickly enough". Even if your allocations come in bursts, each allocation will cause the VM to check if it has enough space available to do it. If not - and if the object is too large for the young generation - it will cause a full GC which should "stop the world", thereby preventing new allocations from taking place in the first place. Once the GC is complete, your new object will be allocated.
If shortly after that the second large allocation is requested in your burst, it will do the same thing again. Depending on whether the initial object is still needed, it will either be able to succeed in GC'ing it, thereby making room for the next allocation, or fail if the first instance is still referenced.
You say "I need a way [...] to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold". This by definition can only succeed, if that "good amount of memory" is not referenced by anything in your application anymore.
From what I understand here, you might have a race condition which you might sometimes avoid by interspersing manual GC requests. In general you should never have to worry about these things - from my experience an OutOfMemoryError only occurs if there are in fact too many allocations to be fit into the heap concurrently. In all other situations the "only" problem should be a performance degradation (which might become extreme, depending on the circumstances, but this is a different problem).
I suggest you do further analysis of the exact problem to rule this out. I recommend the VisualVM tool that comes with Java 6. Start it and install the VisualGC plugin. This will allow you to see the different memory generations and their sizes. Also there is a plethora of GC related logging options, depending on which VM you use. Some options have been mentioned in other answers.
The other options for choosing which GC to use and how to tweak thresholds should not matter in your case, because they all depend on enough memory being available to contain all the objects that your application needs at any given time. These options can be helpful if you have performance problems related to heavy GC activity, but I fear they will not lead to a solution in your particular case.
Once you are more confident in what is actually happening, finding a solution will become easier.
Do you know which of the garbage collection pools is growing too large, i.e. eden vs. survivor space? (Try the JVM option -Xloggc:<file> to log GC status to a file with timestamps.) When you know this, you should be able to tweak the size of the affected pool with one of the options mentioned here: hotspot options for Java 1.4
I know that page is for the 1.4 JVM; I can't seem to find the same -X options in my current 1.6 install's help output, unless setting those individual pool sizes is a non-standard feature!
The JVM is only supposed to throw an OutOfMemoryError after it has attempted to release memory via garbage collection (according to both the API docs for OutOfMemoryError and the JVM specification). Therefore your attempts to force garbage collection shouldn't make any difference. So there might be something more significant going on here - either a problem with your program not properly clearing references or, less likely, a JVM bug.
There's a very detailed explanation of how GC works here and it lists parameters to control memory available to different memory pools/generations.
Try the -server option. It will enable the parallel GC, and you will see some performance increase if you use a multi-core processor.
Have you tried playing with the G1 GC? It should be available from 1.6.0u14 onwards.
