We have a Java API that is a wrapper around a C API.
As such, we end up with several Java classes that are wrappers around C++ classes.
These classes implement the finalize method in order to free the memory that has been allocated for them.
Generally, this works fine. However, in high-load scenarios we get out of memory exceptions.
Memory dumps indicate that virtually all the memory (around 6 GB in this case) is filled with the finalizer queue and the objects waiting to be finalized.
For comparison, the C API on its own never goes over around 150 MB of memory usage.
Under low load, the Java implementation can run indefinitely. So this doesn't seem to be a memory leak as such. It just seems to be that under high load, new objects that require finalizing are generated faster than finalizers get executed.
Obviously, the 'correct' fix is to reduce the number of objects being created. However, that's a significant undertaking and will take a while. In the meantime, is there a mechanism that might help alleviate this issue? For example, by giving the GC more resources.
Java was designed around the idea that finalizers could be used as the primary cleanup mechanism for objects that go out of scope. Such an approach may have been almost workable when the total number of objects was small enough that the overhead of an "always scan everything" garbage collector would have been acceptable, but there are relatively few cases where finalization would be an appropriate cleanup measure in a system with a generational garbage collector (which nearly all JVM implementations are going to have, because it offers a huge speed boost compared to always scanning everything).
Using AutoCloseable along with a try-with-resources construct is a vastly superior approach whenever it's workable. There is no guarantee that finalize methods will get called with any degree of timeliness, and there are many situations where patterns of interrelated objects may prevent them from getting called at all. While finalize can be useful for some purposes, such as identifying objects which got improperly abandoned while holding resources, there are relatively few purposes for which it would be the proper tool.
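As a minimal sketch of that pattern (the class name and handle are hypothetical, and the native calls are simulated so the example runs on its own):

final class NativeThing implements AutoCloseable {
    private long handle;

    NativeThing() {
        handle = 42L; // in reality: a JNI call that allocates the native object
    }

    void use() {
        System.out.println("using handle " + handle);
    }

    @Override
    public void close() {
        if (handle != 0) {
            System.out.println("freeing handle " + handle); // in reality: the native free call
            handle = 0;
        }
    }

    public static void main(String[] args) {
        // close() runs as soon as the block exits, deterministically, with no GC involvement
        try (NativeThing t = new NativeThing()) {
            t.use();
        }
    }
}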
If you do need to use finalizers, you should understand an important principle: contrary to popular belief, finalizers do not trigger when an object is actually garbage collected -- they fire when an object would have been garbage collected but for the existence of a finalizer somewhere [including, but not limited to, the object's own finalizer]. No object can actually be garbage collected while any reference to it exists in any local variable, in any other object to which any reference exists, or in any object with a finalizer that hasn't run to completion. Further, to avoid having to examine all objects on every garbage-collection cycle, objects which have been alive for a while will be given a "free pass" on most GC cycles. Thus, if an object with a finalizer is alive for a while before it is abandoned, it may take quite a while for its finalizer to run, and it will keep objects to which it holds references around long enough that they're likely to also earn a "free pass".
I would thus suggest that, to the extent possible, even when it's necessary to use finalizers, you should limit their use to privately-held objects which in turn avoid holding strong references to anything which isn't explicitly needed for their cleanup task.
Phantom references are an alternative to finalizers available in Java.
Phantom references give you better control over the resource reclamation process:
You can combine explicit resource disposal (e.g. the try-with-resources construct) with GC-based disposal.
You can employ multiple threads for postmortem housekeeping.
Using phantom references is complicated, though. In this article you can find a minimal example of phantom-reference-based resource housekeeping.
In modern Java there is also the Cleaner class, which is likewise based on phantom references but provides the infrastructure (reference queue, worker threads, etc.) for ease of use.
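As a minimal sketch of the Cleaner-based pattern (Java 9+); the class name is made up, and the native free call is simulated here:

import java.lang.ref.Cleaner;

public final class CleanedResource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup state must NOT hold a reference back to the CleanedResource,
    // or the object would never become phantom reachable.
    private static final class State implements Runnable {
        private long handle;
        State(long handle) { this.handle = handle; }
        @Override public void run() {
            if (handle != 0) {
                System.out.println("freeing handle " + handle); // in reality: the native free call
                handle = 0;
            }
        }
    }

    private final Cleaner.Cleanable cleanable;

    public CleanedResource(long handle) {
        this.cleanable = CLEANER.register(this, new State(handle));
    }

    // Explicit, deterministic disposal for well-behaved callers (try-with-resources) ...
    @Override public void close() { cleanable.clean(); }
    // ... while the Cleaner acts as a GC-driven safety net for everyone else.
}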
Alright, so I know that you should always close your streams and other native resources, but I am currently not sure why.
I see one reason why, and that is the limited number of resources available: you want to release them as soon as you are done with them.
But let's assume that your application doesn't use that many resources; then there shouldn't be a need to close the resource, right?
Especially since you have the finalize() block that should close all native resources when the GC gets to it.
The assumption that “you have the finalize() block that should close all native resources when the GC gets to it” is wrong in the first place. There is no guarantee that every object representing a native resource has such a finalize() method.
Second, a resource isn’t necessarily a native resource.
When you have, e.g., a BufferedOutputStream, an ObjectOutputStream, or a ZipOutputStream wrapping a FileOutputStream, the latter likely has a finalize() method to free the underlying native resource (that’s implementation dependent), but that does not write any data still pending in the wrapper stream, which is needed for the output to be correct. Closing the wrapper stream is mandatory to ensure that the written output is complete.
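A small self-contained illustration (the file name is arbitrary):

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseTheWrapper {
    public static void main(String[] args) throws IOException {
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("out.bin"))) {
            out.write(new byte[] { 1, 2, 3 }); // may sit in the wrapper's buffer for now
        } // closing the wrapper flushes the pending bytes, then closes the FileOutputStream
    }
}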
Naively adding a finalize() method to these wrapper stream classes to close them would not work. When the stream object gets collected, it implies that there is no reference to it; in other words, there is no directed application→wrapper stream→native stream graph anymore. Since object graphs can be cyclic, there is no guarantee that an expensive search for an order among dead objects would succeed, and that’s why the JVM doesn’t even try.
Or, as the specification puts it:
The Java programming language imposes no ordering on finalize method calls. Finalizers may be called in any order, or even concurrently.
Therefore, a finalize() method of a wrapper would not be guaranteed to be called before the finalize() method of the underlying native stream; thus, the underlying native stream might be finalized and closed before the wrapper stream, making it impossible to write the pending data.
And the rabbit hole goes even deeper. Object instances are maintained by the JVM as needed by the code, but the code can get optimized to use encapsulated data directly. If the wrapper stream class had a finalize() method, you might find out that the wrapper instance can be freed even earlier than expected, as discussed in finalize() called on strongly reachable object in Java 8.
Or, in short, explicit closing is the only way to ensure that it happens exactly at the right point of time.
Simple: you always strive to do the right thing.
Building an application on assumptions such as "it doesn't use many resources" is the wrong approach. Instead, you focus on getting your logic right.
You see, the thing with real-world applications is: when they are helpful, they will be used. And as soon as you have users, you will be dealing with additional requirements. That results in you enhancing and maintaining your code. And as a result of that, any "compromise" that you made earlier on ("it's just a small thing, so who cares") has the potential to make such activities much harder than they ought to be.
In other words: you should strive to build applications that are "correct". You don't write sloppy code because it "doesn't matter". You often cannot predict whether your code will become "important" at some point - and then your sins will come back to bite you.
And no, finalize isn't the answer to anything (see here). You care about all resources that your code is using, and you carefully make sure that resources have a well-defined life cycle.
Firstly, finalize is never guaranteed to be called by the GC, so you cannot rely on it.
Secondly, in a real world application the amount of resources needed by your application changes, hence you shouldn't rely on assumptions and you should free up as many resources as you can to provide the best possible performance.
In my opinion the key words are performance, availability, scalability.
As a Java developer, you have little control over when, or even if, finalizers are invoked. If your resources are in limited supply (database connections, for example), or create additional threads that have to be serviced, or hold locks, or use substantial memory, then you need to exercise control over when they are allocated and freed. It isn't always obvious what the implications are for keeping a resource allocated, and it's always safer to minimize the scope and duration of your resource usage, so far as the logic of the application allows.
Let's say I'm writing an API in java that refers to some native C libraries, that requires destructors to be called explicitly. If the destructors are not called, I run out of native memory.
Is there a way to spare users of my API from having to call the destructors explicitly, by having the garbage collector call the destructors somehow? (Perhaps based on some estimate I make of the size of the used native memory?)
I know Java doesn't have its garbage collector as part of the Java API, but perhaps there is some way to get this implemented?
One alternative if you have control over creation of your objects is to reference them with a WeakReference using the constructor that takes a ReferenceQueue. When they get out of scope, the Reference will be queued and you can have your own thread polling the queue and call some clean up function.
Why?
Well, it is slightly more efficient than adding finalizers to your classes (because finalizers force the GC to do some special handling of the objects that carry them).
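A rough sketch of that pattern (the class names are made up, and the native cleanup call is simulated by a print statement):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class NativeReaper {
    private static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();

    // The reference object itself carries the cleanup data, because the referent
    // has already been cleared by the time the reference is enqueued.
    private static final class HandleRef extends WeakReference<Object> {
        final long handle;
        HandleRef(Object owner, long handle) {
            super(owner, QUEUE);
            this.handle = handle;
        }
    }

    // The refs must stay strongly reachable themselves, or they would be
    // collected before ever reaching the queue.
    private static final Set<HandleRef> LIVE = ConcurrentHashMap.newKeySet();

    static void track(Object owner, long handle) {
        LIVE.add(new HandleRef(owner, handle));
    }

    static {
        Thread reaper = new Thread(() -> {
            while (true) {
                try {
                    HandleRef ref = (HandleRef) QUEUE.remove(); // blocks until one is enqueued
                    LIVE.remove(ref);
                    System.out.println("freeing native handle " + ref.handle); // the real free() here
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "native-reaper");
        reaper.setDaemon(true);
        reaper.start();
    }
}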
Edit: The following two links (variations of the same article) describe it:
http://java.sun.com/developer/technicalArticles/javase/finalization/
http://www.devx.com/Java/Article/30192
Peter Lawrey has a very good point when he says:
Even so, waiting for the GC to cleanup can be inefficient and you may want to expose a means of explicitly cleaning up the resource if its required.
Whenever you can assume your users to be on Java 7, take a look at java.lang.AutoCloseable, as it will help them do that automatically when using the new try-with-resources.
In addition to using finalize(), you may need to trigger a GC yourself if you run out of resources but a GC hasn't run.
ByteBuffer.allocateDirect() has this issue. It needs the GC to clean up its ByteBuffers; however, you can reach your maximum direct memory before a GC is triggered, so the code has to detect this and trigger a System.gc() explicitly.
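A hedged sketch of that detect-and-retry idea (the sleep interval is arbitrary, and a single retry is a simplification):

import java.nio.ByteBuffer;

public class DirectAlloc {
    static ByteBuffer allocateDirectWithRetry(int capacity) {
        try {
            return ByteBuffer.allocateDirect(capacity);
        } catch (OutOfMemoryError e) {
            System.gc(); // direct buffers are only freed after their owners are collected
            try {
                Thread.sleep(100); // crude: give the GC and reference processing time to run
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            return ByteBuffer.allocateDirect(capacity); // a second failure propagates
        }
    }
}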
Even so, waiting for the GC to cleanup can be inefficient and you may want to expose a means of explicitly cleaning up the resource if its required.
The garbage collector will call finalize() on a Java object when that object is about to be GC'd, and inside the finalize() you could call the destructor. Just make a new Java object for every destructor that needs to be called, and keep a reference to that Java object until you want the destructor to be called.
In practice, finalize() will be called sooner or later (even though, technically, Java makes no guarantee that any particular object will ever be GC'd). The only exception is if the object is still around when the process is shutting down: then it may indeed never get GC'd.
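A minimal sketch of that guard-object idea; destroy() simulates the native destructor call (note that finalize() has been deprecated since Java 9):

final class DestructorGuard {
    private final long handle;

    DestructorGuard(long handle) {
        this.handle = handle;
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            destroy(handle); // runs when the guard becomes unreachable, eventually
        } finally {
            super.finalize();
        }
    }

    private static void destroy(long handle) {
        System.out.println("destroying native object " + handle); // the real destructor call here
    }
}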
After answering a question about how to force-free objects in Java (the guy was clearing a 1.5GB HashMap) with System.gc(), I was told it's bad practice to call System.gc() manually, but the comments were not entirely convincing. In addition, no one seemed to dare to upvote, nor downvote my answer.
I was told there that it's bad practice, but then I was also told that garbage collector runs don't systematically stop the world anymore, and that it could also effectively be used by the JVM only as a hint, so I'm kind of at a loss.
I do understand that the JVM usually knows better than you when it needs to reclaim memory. I also understand that worrying about a few kilobytes of data is silly. I also understand that even megabytes of data isn't what it was a few years back. But still, 1.5 gigabytes? And you know there's like 1.5 GB of data hanging around in memory; it's not like it's a shot in the dark. Is System.gc() systematically bad, or is there some point at which it becomes okay?
So the question is actually double:
Why is or isn't it bad practice to call System.gc()? Is it really merely a hint to the JVM under certain implementations, or is it always a full collection cycle? Are there really garbage collector implementations that can do their work without stopping the world? Please shed some light on the various assertions people have made in the comments to my answer.
Where's the threshold? Is it never a good idea to call System.gc(), or are there times when it's acceptable? If so, what are those times?
The reason everyone always says to avoid System.gc() is that it is a pretty good indicator of fundamentally broken code. Any code that depends on it for correctness is certainly broken; any code that relies on it for performance is most likely broken.
You don't know what sort of garbage collector you are running under. There are certainly some that do not "stop the world" as you assert, but some JVMs aren't that smart or for various reasons (perhaps they are on a phone?) don't do it. You don't know what it's going to do.
Also, it's not guaranteed to do anything. The JVM may just entirely ignore your request.
The combination of "you don't know what it will do", "you don't know if it will even help", and "you shouldn't need to call it anyway" is why people are so forceful in saying that generally you shouldn't call it. I think it's a case of "if you need to ask whether you should be using this, you shouldn't".
EDIT to address a few concerns from the other thread:
After reading the thread you linked, there's a few more things I'd like to point out.
First, someone suggested that calling gc() may return memory to the system. That's certainly not necessarily true - the Java heap itself grows independently of Java allocations.
As in, the JVM will hold memory (many tens of megabytes) and grow the heap as necessary. It doesn't necessarily return that memory to the system even when you free Java objects; it is perfectly free to hold on to the allocated memory to use for future Java allocations.
To show that it's possible for System.gc() to do nothing, see JDK bug 6668279, and in particular the -XX:DisableExplicitGC VM option:
By default calls to System.gc() are enabled (-XX:-DisableExplicitGC). Use -XX:+DisableExplicitGC to disable calls to System.gc(). Note that the JVM still performs garbage collection when necessary.
It has already been explained that calling System.gc() may do nothing, and that any code that "needs" the garbage collector to run is broken.
However, the pragmatic reason that it is bad practice to call System.gc() is that it is inefficient. And in the worst case, it is horribly inefficient! Let me explain.
A typical GC algorithm identifies garbage by traversing all non-garbage objects in the heap, and inferring that any object not visited must be garbage. From this, we can model the total work of a garbage collection as consisting of one part that is proportional to the amount of live data, and another part that is proportional to the amount of garbage; i.e. work = (live * W1 + garbage * W2).
Now suppose that you do the following in a single-threaded application.
System.gc();
System.gc();
The first call will (we predict) do (live * W1 + garbage * W2) work, and get rid of the outstanding garbage.
The second call will do (live * W1 + 0 * W2) work and reclaim nothing. In other words we have done (live * W1) work and achieved absolutely nothing.
We can model the efficiency of the collector as the amount of work needed to collect a unit of garbage; i.e. efficiency = (live * W1 + garbage * W2) / garbage. So to make the GC as efficient as possible, we need to maximize the value of garbage when we run the GC; i.e. wait until the heap is full. (And also, make the heap as big as possible. But that is a separate topic.)
If the application does not interfere (by calling System.gc()), the GC will wait until the heap is full before running, resulting in efficient collection of garbage1. But if the application forces the GC to run, the chances are that the heap won't be full, and the result will be that garbage is collected inefficiently. And the more often the application forces GC, the more inefficient the GC becomes.
Note: the above explanation glosses over the fact that a typical modern GC partitions the heap into "spaces", the GC may dynamically expand the heap, the application's working set of non-garbage objects may vary, and so on. Even so, the same basic principle applies across the board to all true garbage collectors2. It is inefficient to force the GC to run.
1 - This is how the "throughput" collector works. Concurrent collectors such as CMS and G1 use different criteria to decide when to start the garbage collector.
2 - I'm also excluding memory managers that use reference counting exclusively, but no current Java implementation uses that approach ... for good reason.
Lots of people seem to be telling you not to do this. I disagree. If, after a large loading process like loading a level, you believe that:
You have a lot of objects that are unreachable and may not have been GC'ed, and
You think the user could put up with a small slowdown at this point,
then there is no harm in calling System.gc(). I look at it like the C/C++ inline keyword: it's just a hint to the GC that you, the developer, have decided that time/performance is not as important as it usually is and that some of it could be used for reclaiming memory.
Advice to not rely on it doing anything is correct. Don't rely on it working, but giving the hint that now is an acceptable time to collect is perfectly fine. I'd rather waste time at a point in the code where it doesn't matter (loading screen) than when the user is actively interacting with the program (like during a level of a game.)
There is one time when I will force collection: when attempting to find out whether a particular object leaks (either native code or a large, complex callback interaction. Oh, and any UI component that so much as glances at Matlab.) This should never be used in production code.
People have been doing a good job explaining why NOT to use it, so I will tell you a couple of situations where you should use it:
(The following comments apply to HotSpot running on Linux with the CMS collector, where I feel confident saying that System.gc() does in fact always invoke a full garbage collection.)
1. After the initial work of starting up your application, you may be in a terrible state of memory usage. Half your tenured generation could be full of garbage, meaning that you are that much closer to your first CMS cycle. In applications where that matters, it is not a bad idea to call System.gc() to "reset" your heap to the starting state of live data.
2. Along the same lines as #1, if you monitor your heap usage closely, you want to have an accurate reading of what your baseline memory usage is. If the first 2 minutes of your application's uptime is all initialization, your data is going to be messed up unless you force (ahem... "suggest") the full GC up front.
3. You may have an application that is designed to never promote anything to the tenured generation while it is running. But maybe you need to initialize some data up front that is not so huge as to automatically get moved to the tenured generation. Unless you call System.gc() after everything is set up, your data could sit in the new generation until the time comes for it to get promoted. All of a sudden your super-duper low-latency, low-GC application gets hit with a HUGE (relatively speaking, of course) latency penalty for promoting those objects during normal operations.
4. It is sometimes useful to have a System.gc() call available in a production application for verifying the existence of a memory leak. If you know that the set of live data at time X should exist in a certain ratio to the set of live data at time Y, then it could be useful to call System.gc() at time X and at time Y and compare memory usage, as in the sketch below.
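A rough sketch of that #4 comparison (System.gc() is only a suggestion, so treat the numbers as approximate):

public final class LeakProbe {
    // Sample the (approximate) live set after suggesting a collection.
    public static long liveBytes() {
        System.gc();
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}

// Usage: call LeakProbe.liveBytes() at time X and again at time Y;
// if the ratio drifts far from what you expect, suspect a leak.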
This is a very bothersome question, and I feel contributes to many being opposed to Java despite how useful of a language it is.
The fact that you can't trust System.gc() to do anything is incredibly daunting and can easily invoke a "fear, uncertainty, doubt" feeling about the language.
In many cases, it is nice to deal with memory spikes that you cause on purpose before an important event occurs, which would cause users to think your program is badly designed/unresponsive.
Having the ability to control garbage collection would be a great educational tool, in turn improving people's understanding of how garbage collection works and how to make programs exploit its default behavior as well as its controlled behavior.
Let me review the arguments of this thread.
It is inefficient:
Often, the program may not be doing anything, and you know it's not doing anything because of the way it was designed. For instance, it might be doing some kind of long wait with a large wait message box, and at the end it may as well add a call to collect garbage, because the time to run it will take a really small fraction of the time of the long wait but will keep the GC from acting up in the middle of a more important operation.
It is always a bad practice and indicates broken code.
I disagree; it doesn't matter what garbage collector you have. Its job is to track garbage and clean it up.
By calling the GC during times when usage is less critical, you reduce the odds of it running when your life relies on the specific code being run, only for the GC to decide to collect garbage instead.
Sure, it might not behave the way you want or expect, but when you do want to call it, you know nothing is happening, and the user is willing to tolerate slowness/downtime. If System.gc() works, great! If it doesn't, at least you tried. There's simply no downside, unless the garbage collector has inherent side effects that do something horribly unexpected compared to how a garbage collector is supposed to behave when invoked manually, and this by itself causes distrust.
It is not a common use case:
It is a use case that cannot be achieved reliably, but could be if the system were designed that way. It's like making a traffic light where some or all of the buttons don't do anything: it makes you question why the button is there to begin with. JavaScript doesn't have a garbage-collection function, so we don't scrutinize it as much for this.
The spec says that System.gc() is a hint that GC should run and the VM is free to ignore it.
What is a "hint"? What is "ignore"? A computer cannot simply take hints or ignore something; there are strict behavior paths it takes, which may be dynamic, guided by the intent of the system. A proper answer would include what the garbage collector is actually doing, at the implementation level, that causes it not to perform collection when you request it. Is the feature simply a no-op? Are there some kind of conditions that must be met? What are these conditions?
As it stands, Java's GC often seems like a monster that you just don't trust. You don't know when it's going to come or go, you don't know what it's going to do, or how it's going to do it. I can imagine some experts having a better idea of how their garbage collection works on a per-instruction basis, but the vast majority simply hope it "just works", and having to trust an opaque-seeming algorithm to do work for you is frustrating.
There is a big gap between reading about something or being taught something, and actually seeing the implementation of it, the differences across systems, and being able to play with it without having to look at the source code. This creates confidence and feeling of mastery/understanding/control.
To summarize, there is an inherent problem with answers of the form "this feature might not do anything, and I won't go into the details of how to tell when it does do something and when it doesn't, or why it won't or will", often implying that it is simply against the philosophy to try to use it, even if the intent behind it is reasonable.
It might be okay for Java's GC to behave the way it does, or it might not. But to understand it, it is difficult to know which direction to go in to get a comprehensive overview of what you can and cannot trust the GC to do, so it's too easy to simply distrust the language. The purpose of a language is to have behavior that is controlled to the extent you are capable of tolerating (it's easy for a programmer, especially a novice, to fall into an existential crisis over certain system/language behaviors); if you can't tolerate it, you just won't use the language until you have to. And the more things you can't control, with no known reason why you can't control them, the more inherently harmful it is.
Sometimes (not often!) you do truly know more about past, current and future memory usage than the runtime does. This does not happen very often, and I would claim never in a web application while normal pages are being served.
Many years ago I worked on a report generator that:
Had a single thread
Read the “report request” from a queue
Loaded the data needed for the report from the database
Generated the report and emailed it out.
Repeated forever, sleeping when there were no outstanding requests.
It did not reuse any data between reports and did not do any caching.
Firstly, as it was not real-time and the users expected to wait for a report, a delay while the GC ran was not an issue, but we needed to produce reports at a rate faster than they were requested.
Looking at the above outline of the process, it is clear that:
We know there would be very few live objects just after a report had been emailed out, as the next request had not started being processed yet.
It is well known that the cost of running a garbage collection cycle depends on the number of live objects; the amount of garbage has little effect on the cost of a GC run.
That when the queue is empty there is nothing better to do than run the GC.
Therefore, clearly it was well worthwhile doing a GC run whenever the request queue was empty; there was no downside to this.
It may be worth doing a GC run after each report is emailed, as we know this is a good time for a GC run. However, if the computer had enough RAM, better results would be obtained by delaying the GC run.
This behaviour was configured on a per-installation basis; for some customers, enabling a forced GC after each report greatly sped up the production of reports. (I expect this was due to low memory on their server and its running lots of other processes, so a well-timed forced GC reduced paging.)
We never detected an installation that did not benefit from a forced GC run every time the work queue was empty.
But, let's be clear, the above is not a common case.
These days I would be more inclined to run each report in a separate process, leaving the operating system to clean up memory rather than the garbage collector, and have the custom queue manager service use multiple worker processes on large servers.
GC efficiency relies on a number of heuristics. For instance, a common heuristic is that write accesses to objects usually occur on objects which were created not long ago. Another is that many objects are very short-lived (some objects will be used for a long time, but many will be discarded a few microseconds after their creation).
Calling System.gc() is like kicking the GC. It means: "all those carefully tuned parameters, those smart organizations, all the effort you just put into allocating and managing the objects such that things go smoothly, well, just drop the whole lot, and start from scratch". It may improve performance, but most of the time it just degrades performance.
To use System.gc() reliably(*) you need to know how the GC operates in all its fine details. Such details tend to change quite a bit if you use a JVM from another vendor, or the next version from the same vendor, or the same JVM but with slightly different command-line options. So it is rarely a good idea, unless you want to address a specific issue in which you control all those parameters. Hence the notion of "bad practice": that's not forbidden, the method exists, but it rarely pays off.
(*) I am talking about efficiency here. System.gc() will never break a correct Java program. Nor will it conjure extra memory that the JVM could not have obtained otherwise: before throwing an OutOfMemoryError, the JVM does the job of System.gc(), even if as a last resort.
Maybe I write crappy code, but I've come to realize that clicking the trash-can icon on eclipse and netbeans IDEs is a 'good practice'.
Some of what I am about to write is simply a summarization of what has already been written in other answers, and some is new.
The question "Why is it bad practice to call System.gc()?" does not compute. It assumes that it is bad practice, while it is not. It greatly depends on what you are trying to accomplish.
The vast majority of programmers out there have no need for System.gc(), and it will never do anything useful to them in the vast majority of use cases. So, for the majority, calling it is bad practice because it will not do whatever it is that they think it will do, it will only add overhead.
However, there are a few rare cases where invoking System.gc() is actually beneficial:
When you are absolutely sure that you have some CPU time to spare now, and you want to improve the throughput of code that will run later. For example, a web server that discovers that there are no pending web requests at the moment can initiate garbage collection now, so as to reduce the chances that garbage collection will be needed during the processing of a barrage of web requests later on. (Of course this can hurt if a web request arrives during collection, but the web server could be smart about it and abandon collection if a request comes in.) Desktop GUIs are another example: on the idle event (or, more broadly, after a period of inactivity,) you can give the JVM a hint that if it has any garbage collection to do, now is better than later.
When you want to detect memory leaks. This is often done in combination with a debug-mode-only finalizer, or with the java.lang.ref.Cleaner class from Java 9 onwards. The idea is that by forcing garbage collection now, and thus discovering memory leaks now as opposed to some random point in time in the future, you can detect the memory leaks as soon as possible after they have happened, and therefore be in a better position to tell precisely which piece of code has leaked memory and why. (Incidentally, this is also one of, or perhaps the only, legitimate use cases for finalizers or the Cleaner. The practice of using finalization for recycling of unmanaged resources is flawed, despite being very widespread and even officially recommended, because it is non-deterministic. For more on this topic, read this: https://blog.michael.gr/2021/01/object-lifetime-awareness.html)
When you are measuring the performance of code (benchmarking), in order to reduce/minimize the chances of garbage collection occurring during the benchmark, or in order to guarantee that whatever overhead is suffered due to garbage collection during the benchmark is due to garbage generated by the code under benchmark, and not by unrelated code. A good benchmark always starts with as thorough a garbage collection as possible.
When you are measuring the memory consumption of code, in order to determine how much garbage is generated by a piece of code. The idea is to perform a full garbage collection so as to start in a clean state, run the code under measurement, obtain the heap size, then do another full garbage collection, obtain the heap size again, and take the difference. (Incidentally, the ability to temporarily suppress garbage collection while running the code under measurement would be useful here; alas, the JVM does not support that. This is deplorable.)
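A rough sketch of that measurement (full collections are not guaranteed, so real measurements repeat this and discard unstable readings):

public final class GarbageMeter {
    private static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static long measureGarbageBytes(Runnable codeUnderMeasurement) {
        System.gc();              // start from an (approximately) clean state
        codeUnderMeasurement.run();
        long before = usedHeap(); // heap with the code's leftovers still in it
        System.gc();              // collect what the code left behind
        long after = usedHeap();
        return before - after;    // approximately the garbage the code generated
    }
}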
Note that of the above use cases, only one is in a production scenario; the rest are in testing / diagnostics scenarios.
This means that System.gc() can be quite useful under some circumstances, which in turn means that it being "only a hint" is problematic.
(For as long as the JVM is not offering some deterministic and guaranteed means of controlling garbage collection, the JVM is broken in this respect.)
Here is how you can turn System.gc() into a bit less of a hint:
// requires: import java.lang.ref.WeakReference;
private static void runGarbageCollection()
{
    // Loop until a weakly referenced sentinel object has actually been collected,
    // which implies that at least one collection cycle has run.
    for( WeakReference<Object> ref = new WeakReference<>( new Object() ); ; )
    {
        System.gc(); // optional
        Runtime.getRuntime().runFinalization(); // optional
        if( ref.get() == null )
            break;
        Thread.yield();
    }
}
This still does not guarantee that you will get a full GC, but it gets a lot closer. Specifically, it will give you some amount of garbage collection even if the -XX:DisableExplicitGC VM option has been used. (So, it truly uses System.gc() as a hint; it does not rely on it.)
Yes, calling System.gc() doesn't guarantee that it will run; it's a request to the JVM that may be ignored. From the docs:
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects
It's almost always a bad idea to call it because the automatic memory management usually knows better than you when to gc. It will do so when its internal pool of free memory is low, or if the OS requests some memory be handed back.
It might be acceptable to call System.gc() if you know that it helps. By that I mean you've thoroughly tested and measured the behaviour of both scenarios on the deployment platform, and you can show it helps. Be aware though that the gc isn't easily predictable - it may help on one run and hurt on another.
First, there is a difference between spec and reality. The spec says that System.gc() is a hint that GC should run and the VM is free to ignore it. The reality is, unless explicitly configured otherwise (e.g. with -XX:+DisableExplicitGC), the VM will not ignore a call to System.gc().
Calling GC comes with a non-trivial overhead, and if you do this at some random point in time it's likely you'll see no reward for your effort. On the other hand, a naturally triggered collection is very likely to recoup the costs of the call. If you have information that indicates that a GC should be run, then you can make the call to System.gc() and you should see benefits. However, it's my experience that this happens only in a few edge cases, as it's very unlikely that you'll have enough information to understand if and when System.gc() should be called.
One example listed here is hitting the garbage-can icon in your IDE. If you're off to a meeting, why not hit it? The overhead isn't going to affect you, and the heap might be cleaned up by the time you get back. Do this in a production system, though, and frequent calls to collect will bring it to a grinding halt! Even occasional calls, such as those made by RMI, can be disruptive to performance.
In my experience, using System.gc() is effectively a platform-specific form of optimization (where "platform" is the combination of hardware architecture, OS, JVM version and possibly more runtime parameters such as available RAM), because its behaviour, while roughly predictable on a specific platform, can (and will) vary considerably between platforms.
Yes, there are situations where System.gc() will improve (perceived) performance. One example is if delays are tolerable in some parts of your app, but not in others (the game example cited above, where you want GC to happen at the start of a level, not during the level).
However, whether it will help or hurt (or do nothing) is highly dependent on the platform (as defined above).
So I think it is valid as a last-resort, platform-specific optimization (i.e. if other performance optimizations are not enough). But you should never call it just because you believe it might help (without specific benchmarks), because chances are it will not.
Since objects are dynamically allocated by using the new operator, you might be wondering how such objects are destroyed and their memory released for later reallocation. In some languages, such as C++, dynamically allocated objects must be manually released by use of a delete operator. Java takes a different approach; it handles deallocation for you automatically.

The technique that accomplishes this is called garbage collection. It works like this: when no references to an object exist, that object is assumed to be no longer needed, and the memory occupied by the object can be reclaimed. There is no explicit need to destroy objects as in C++.

Garbage collection only occurs sporadically (if at all) during the execution of your program. It will not occur simply because one or more objects exist that are no longer used. Furthermore, different Java run-time implementations will take varying approaches to garbage collection, but for the most part, you should not have to think about it while writing your programs.