We have a Java API that is a wrapper around a C API.
As such, we end up with several Java classes that are wrappers around C++ classes.
These classes implement the finalize method in order to free the memory that has been allocated for them.
Generally, this works fine. However, in high-load scenarios we get out of memory exceptions.
Memory dumps indicate that virtually all the memory (around 6 GB in this case) is filled with the finalizer queue and the objects waiting to be finalized.
For comparison, the C API on its own never goes over around 150 MB of memory usage.
Under low load, the Java implementation can run indefinitely. So this doesn't seem to be a memory leak as such. It just seems to be that under high load, new objects that require finalizing are generated faster than finalizers get executed.
Obviously, the 'correct' fix is to reduce the number of objects being created. However, that's a significant undertaking and will take a while. In the meantime, is there a mechanism that might help alleviate this issue? For example, by giving the GC more resources.
Java was designed around the idea that finalizers could be used as the primary cleanup mechanism for objects that go out of scope. Such an approach may have been almost workable when the total number of objects was small enough that the overhead of an "always scan everything" garbage collector would have been acceptable, but there are relatively few cases where finalization would be appropriate cleanup measure in a system with a generational garbage collector (which nearly all JVM implementations are going to have, because it offers a huge speed boost compared to always scanning everything).
Using Closeable along with a try-with-resources construct is a vastly superior approach whenever it's workable. There is no guarantee that finalize methods will get called with any degree of timeliness, and there are many situations where patterns of interrelated objects may prevent them from getting called at all. While finalize can be useful for some purposes, such as identifying objects which got improperly abandoned while holding resources, there are relatively few purposes for which it would be the proper tool.
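As a rough sketch of that approach (the NativeImage class and its native methods are hypothetical, standing in for any wrapper around a native resource), the resource is released deterministically when the try block exits, with no reliance on the garbage collector:

// Minimal sketch, assuming a hypothetical wrapper around a native handle.
public class NativeImage implements AutoCloseable {
    private long handle; // hypothetical native handle; 0 means "already freed"

    public NativeImage(String path) { this.handle = open(path); }

    @Override
    public void close() { // deterministic release, invoked by try-with-resources
        if (handle != 0) {
            free(handle);
            handle = 0;
        }
    }

    public static void main(String[] args) {
        // close() is guaranteed to run when the block exits, even on exceptions.
        try (NativeImage img = new NativeImage("photo.raw")) {
            // ... use img ...
        }
    }

    private static native long open(String path); // hypothetical JNI bindings
    private static native void free(long handle);
}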
If you do need to use finalizers, you should understand an important principle: contrary to popular belief, finalizers do not trigger when an object is actually garbage collected; they fire when an object would have been garbage collected but for the existence of a finalizer somewhere [including, but not limited to, the object's own finalizer]. No object can actually be garbage collected while any reference to it exists in any local variable, in any other object to which any reference exists, or any object with a finalizer that hasn't run to completion. Further, to avoid having to examine all objects on every garbage-collection cycle, objects which have been alive for a while will be given a "free pass" on most GC cycles. Thus, if an object with a finalizer is alive for a while before it is abandoned, it may take quite a while for its finalizer to run, and it will keep objects to which it holds references around long enough that they're likely to also earn a "free pass".
I would thus suggest that to the extent possible, even when it's necessary to use finalizers, you should limit their use to privately-held objects which in turn avoid holding strong references to anything which isn't explicitly needed for their cleanup task.
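A rough sketch of that pattern (the class names and native call are invented for illustration): the only object with a finalizer is a small, privately held holder whose sole field is the raw handle, so the finalizer keeps nothing else alive:

public class Connection {
    // Only this tiny holder has a finalizer, and it references nothing but the handle.
    private static final class HandleHolder {
        private final long handle; // hypothetical native handle
        HandleHolder(long handle) { this.handle = handle; }

        @Override
        protected void finalize() throws Throwable {
            try {
                freeNative(handle); // hypothetical JNI call
            } finally {
                super.finalize();
            }
        }
    }

    private final HandleHolder holder; // the wrapper itself has no finalizer

    public Connection(long handle) { this.holder = new HandleHolder(handle); }

    private static native void freeNative(long handle);
}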
Phantom references are an alternative to finalizers available in Java.
Phantom references allow you to better control the resource reclamation process.
You can combine explicit resource disposal (e.g. the try-with-resources construct) with GC-based disposal.
You can employ multiple threads for postmortem housekeeping.
Using phantom references is complicated, though. In this article you can find a minimal example of phantom-reference-based resource housekeeping.
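As a very rough sketch of the idea (not the article's code; the registry, the handle type, and the native call are invented for illustration), phantom references registered with a ReferenceQueue let one or more housekeeping threads free native resources once the owning Java objects become unreachable:

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class NativeResourceRegistry {
    private static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();
    // Keeping the references as map keys also keeps them strongly reachable until processed.
    private static final Map<Reference<?>, Long> HANDLES = new ConcurrentHashMap<>();

    // Associate an owner object with the native handle that must be freed after it dies.
    static void register(Object owner, long nativeHandle) {
        HANDLES.put(new PhantomReference<>(owner, QUEUE), nativeHandle);
    }

    // Run from one or more housekeeping threads.
    static void drainQueue() throws InterruptedException {
        while (true) {
            Reference<?> ref = QUEUE.remove(); // blocks until an owner has been collected
            Long handle = HANDLES.remove(ref);
            if (handle != null) {
                freeNative(handle); // hypothetical JNI call
            }
        }
    }

    private static native void freeNative(long handle);
}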
In modern Java there is also the Cleaner class, which is based on phantom references too, but it provides infrastructure (reference queue, worker thread, etc.) for ease of use.
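A minimal Cleaner sketch, following the shape of the pattern shown in its Javadoc (the wrapper class and native call here are hypothetical); the important detail is that the cleanup state must not hold a reference to the wrapper itself, or the wrapper would never become phantom-reachable:

import java.lang.ref.Cleaner;

public class NativeBuffer implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // Holds only what is needed for cleanup; never a reference back to NativeBuffer.
    private static final class State implements Runnable {
        private final long handle; // hypothetical native handle
        State(long handle) { this.handle = handle; }
        @Override public void run() { freeNative(handle); } // hypothetical JNI call
    }

    private final Cleaner.Cleanable cleanable;

    public NativeBuffer(long handle) {
        this.cleanable = CLEANER.register(this, new State(handle));
    }

    @Override
    public void close() {
        cleanable.clean(); // explicit, deterministic release; State runs at most once
    }

    private static native void freeNative(long handle);
}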
(This question is different from Why would you ever implement finalize()? This question is about deprecation from the Java platform, and the other question is about whether one should use this mechanism in applications.)
Why is the finalize() method deprecated in Java 9?
Yes, it could be used in the wrong way (for example, to save an object from garbage collection [only once, though], or to try to close some native resources within it [which is better than not closing them at all]), just as many other methods could be used wrongly.
So is finalize() really so dangerous or absolutely useless that it's necessary to kick it out of Java?
Although the question was asking about the Object.finalize method, the subject really is about the finalization mechanism as a whole. This mechanism includes not only the surface API Object.finalize, but it also includes specifications of the programming language about the life cycle of objects, and practical impact on garbage collector implementations in JVMs.
Much has been written about why finalization is difficult to use from the application's point of view. See the questions Why would you ever implement finalize()? and Should Java 9 Cleaner be preferred to finalization? and their answers. See also Effective Java, 3rd edition by Joshua Bloch, Item 8.
Briefly, some points about the problems associated with using finalizers are:
they are notoriously difficult to program correctly
in particular, they can be run unexpectedly when an object becomes unreachable unexpectedly (but correctly); for example, see my answer to this question
finalization can easily break subclass/superclass relationships
there is no ordering among finalizers
a given object's finalize method is invoked at most once by the JVM, even if that object is "resurrected"
there are no guarantees about timeliness of finalization or even that it will run at all
there is no explicit registration or deregistration mechanism
The above are difficulties with the use of finalization. Anyone who is considering using finalization should reconsider, given the above list of issues. But are these issues sufficient to deprecate finalization in the Java platform? There are several additional reasons explained in the sections below.
Finalization Potentially Makes Systems Fragile
Even if you write an object that uses finalization correctly, it can cause problems when your object is integrated into a larger system. Even if you don't use finalization at all, being integrated into a larger system, some parts of which use finalization, can result in problems. The general issue is that worker threads that create garbage need to be in balance with the garbage collector. If the garbage collector falls behind, at least some collectors can "stop the world" and do a full collection to catch up. Finalization complicates this interaction. Even if the garbage collector is keeping up with application threads, finalization can introduce bottlenecks and slow down the system, or it can cause delays in freeing resources that result in exhaustion of those resources. This is a systems problem. Even if the actual code that uses finalization is correct, problems can still occur in correctly programmed systems.
(Edit 2021-09-16: this question describes a problem where a system works fine under low load but fails under high load, likely because the relative rate of allocation outstrips the rate of finalization under high load.)
Finalization Contributes to Security Issues
The SEI CERT Oracle Coding Standard for Java has a rule MET12-J: Do not use finalizers. (Note, this is a site about secure coding.) In particular, it says
Improper use of finalizers can result in resurrection of garbage-collection-ready objects and result in denial-of-service vulnerabilities.
Oracle's Secure Coding Guidelines for Java SE is more explicit about potential security issues that can arise using finalization. In this case it is not a problem with code that uses finalization. Instead, finalization can be used by an attacker to attack sensitive code that hasn't properly defended itself. In particular, Guideline 7-3 / OBJECT-3 states in part,
Partially initialized instances of a non-final class can be accessed via a finalizer attack. The attacker overrides the protected finalize method in a subclass and attempts to create a new instance of that subclass. This attempt fails ... but the attacker simply ignores any exception and waits for the virtual machine to perform finalization on the partially initialized object. When that occurs the malicious finalize method implementation is invoked, giving the attacker access to this, a reference to the object being finalized. Although the object is only partially initialized, the attacker can still invoke methods on it....
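A rough sketch of the finalizer attack described in that guideline (the class names are invented for illustration): construction of the sensitive class fails, but the attacker's overriding finalize method still receives this and can resurrect and use the partially initialized instance:

class Sensitive {
    Sensitive(int permission) {
        if (permission < 0)
            throw new SecurityException("not allowed"); // validation fails...
        // ... sensitive initialization ...
    }
    void doPrivilegedThing() { /* ... */ }
}

class Attacker extends Sensitive {
    static Sensitive captured; // resurrected reference

    Attacker() { super(-1); } // ...so construction throws

    @Override
    protected void finalize() {
        captured = this;       // but finalize still gets 'this'
        doPrivilegedThing();   // partially initialized, yet usable
    }
}

// The attacker does: try { new Attacker(); } catch (SecurityException ignored) {}
// and then simply waits for the JVM to finalize the partially initialized object.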
Thus, the presence of the finalization mechanism in the platform imposes a burden on programmers who are trying to write high assurance code.
Finalization Adds Complexity to Specifications
The Java Platform is defined by several specifications, including specifications for the language, the virtual machine, and the class library APIs. Impact of finalization is spread thinly across all of these, but it repeatedly makes its presence felt. For example, finalization has a very subtle interaction with object creation (which is already complicated enough). Finalization has also appeared in Java's public APIs, meaning that evolution of those APIs has (up to now) been required to remain compatible with previously specified behaviors. Evolving these specifications is made more costly by the presence of finalization.
Finalization Adds Complexity to Implementations
This is mainly about garbage collectors. There are several garbage collection implementations, and all are required to pay the cost of implementing finalization. The implementations are quite good at minimizing the runtime overhead if finalization isn't used. However, the implementation still needs to be there, and it needs to be correct and well tested. This is an ongoing development and maintenance burden.
Summary
We've seen elsewhere that it's not recommended for programmers to use finalization. However, if something is not useful, it doesn't necessarily follow that it should be deprecated. The points above illustrate the fact that even if finalization isn't used, the mere presence of the mechanism in the platform imposes ongoing specification, development, and maintenance costs. Given the lack of usefulness of the mechanism and the costs it imposes, it makes sense to deprecate it. Eventually, getting rid of finalization will benefit everyone.
As of this writing (2019-06-04) there is no concrete plan to remove finalization from Java. However, it is certainly the intent to do so. We've deprecated the Object.finalize method, but have not marked it for removal. This is a formal recommendation that programmers stop using this mechanism. It's been known informally that finalization shouldn't be used, but of course it's necessary to take a formal step. In addition, certain finalize methods in library classes (for example, ZipFile.finalize) have been deprecated "for removal" which means that the finalization behavior of these classes may be removed from a future release. Eventually, we hope to disable finalization in the JVM (perhaps first optionally, and then later by default), and at some point in the future actually remove the finalization implementation from garbage collectors.
(Edit 2021-11-03: JEP 421 has just been posted, which proposes to deprecate finalization for removal. At this writing it's in the "candidate" state but I expect it will move forward. The deprecations added by this JEP are a formal notification that finalization will be removed at some point in a subsequent Java release. Perhaps not surprisingly, there's a fair overlap between the material in this answer and in the JEP, though the JEP is more precise and describes a moderate evolution in our thinking on the topic.)
(Edit 2022-04-04: JEP 421 Deprecate Finalization for Removal has been integrated and delivered in JDK 18.)
Alright, so I know that you should always close your streams and other native resources, but I am currently not sure why.
I see one reason for why and that is because of the limited amount of resources available and you want to release them as soon as you are done with them.
But let's assume that your application doesn't use that many resources; then there shouldn't be a need to close the resource, right?
Especially since you have the finalize() block that should close all native resources when the GC gets to it.
The assumption that “you have the finalize() block that should close all native resources when the GC gets to it” is wrong in the first place. There is no guarantee that every object representing a native resource has such a finalize() method.
Second, a resource isn’t necessarily a native resource.
When you have, e.g., a BufferedOutputStream, an ObjectOutputStream, or a ZipOutputStream wrapping a FileOutputStream, the latter likely has a finalize() method to free the underlying native resource (that’s implementation dependent), but that doesn’t write any pending data of the wrapper stream that is needed for the written data to be correct. Closing the wrapper stream is mandatory to ensure that the written output is complete.
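For illustration (a small sketch using only standard java.io classes), closing the outermost stream flushes its pending, buffered bytes down into the FileOutputStream before the underlying file is closed:

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class WriteExample {
    public static void main(String[] args) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream("data.bin")))) {
            out.writeObject("some payload");
        } // out.close() flushes the wrappers, then closes the FileOutputStream
    }
}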
Naively adding a finalize() method to these wrapper stream classes to close them would not work. When the stream object gets collected, it implies that there is no reference to it; in other words, there is no directed application→wrapper stream→native stream graph anymore. Since object graphs could be cyclic, there is no guarantee that an expensive search for an order among dead objects would succeed, and that’s why the JVM doesn’t even try.
Or, as the specification puts it:
The Java programming language imposes no ordering on finalize method calls. Finalizers may be called in any order, or even concurrently.
Therefore, a finalize() method of a wrapper would not be guaranteed to be called before the finalize() method of the underlying native stream; thus, the underlying native stream might have been finalized and closed before the wrapper stream, making it impossible to write the pending data.
And the rabbit hole goes even deeper. Object instances are maintained by the JVM as needed by the code, but the code can get optimized to use encapsulated data directly. If the wrapper stream class had a finalize() method, you might find out that the wrapper instance can be freed even earlier than expected, as discussed in finalize() called on strongly reachable object in Java 8.
Or, in short, explicit closing is the only way to ensure that it happens exactly at the right point of time.
Simple: you always strive to do the right thing.
Building an application on assumptions such as "it doesn't use many resources" is a wrong approach. Instead: you focus on getting your logic right.
You see, the thing with real-world applications is: when they are helpful, they will be used. And as soon as you have users, you will be dealing with additional requirements. That results in you enhancing and maintaining your code. And as a result of that, any "compromise" that you made earlier on ("it's just a small thing, so who cares") has the potential to make such activities much harder than they ought to be.
In other words: you should strive to build applications that are "correct". You don't write sloppy code because it "doesn't matter". Because you very often cannot predict whether your code will become "important" at some point - and then your sins will come back to bite you.
And no, finalize isn't the answer to anything (see here). You care about all resources that your code is using, and you carefully make sure that resources have a well defined life cycle.
Firstly, finalize is never guaranteed to be called by the GC, so you cannot rely on it.
Secondly, in a real world application the amount of resources needed by your application changes, hence you shouldn't rely on assumptions and you should free up as many resources as you can to provide the best possible performance.
In my opinion the key words are performance, availability, scalability.
As a Java developer, you have little control over when, or even if, finalizers are invoked. If your resources are in limited supply (database connections, for example), or create additional threads that have to be serviced, or hold locks, or use substantial memory, then you need to exercise control over when they are allocated and freed. It isn't always obvious what the implications are for keeping a resource allocated, and it's always safer to minimize the scope and duration of your resource usage, so far as the logic of the application allows.
Why do we call the System.gc() method to garbage-collect unused objects, if garbage collection is done automatically in Java by a daemon thread running in the background at regular intervals?
For testing purposes it can be useful to know when a GC has been performed. However this is rare and calling System.gc() when it isn't needed is such a common mistake that you can turn it off with -XX:+DisableExplicitGC.
We're not, because we're generally not smarter than the GC. Technically, calling it doesn't guarantee it'll run, although in many implementations it does.
Also, it's not necessarily done at regular intervals, but on an as-needed basis. Different implementations will start GC based on different criteria.
System.gc() just gives the garbage collector a hint that this might be a good time for garbage collection, e.g. because a lot of objects aren't used anymore and processing has finished.
It doesn't force the garbage collector to do anything - it just gives a hint.
Only use this once you need to optimize your application (after profiling).
I don't think I have ever seen it in production code so far.
While it shouldn't be necessary, I had code that thrashed much, much less when I did an explicit System.gc() after a very large Map was emptied, just to be refilled again, in a loop. I think without a hint the collector tried minor GCs until it was clear that wouldn't do. Yes, I put it in production. I also put in a comment, just so my colleagues would test rather than blindly remove it.
After answering a question about how to force-free objects in Java (the guy was clearing a 1.5GB HashMap) with System.gc(), I was told it's bad practice to call System.gc() manually, but the comments were not entirely convincing. In addition, no one seemed to dare to upvote, nor downvote my answer.
I was told there that it's bad practice, but then I was also told that garbage collector runs don't systematically stop the world anymore, and that it could also effectively be used by the JVM only as a hint, so I'm kind of at loss.
I do understand that the JVM usually knows better than you when it needs to reclaim memory. I also understand that worrying about a few kilobytes of data is silly. I also understand that even megabytes of data isn't what it was a few years back. But still, 1.5 gigabytes? And you know there's like 1.5 GB of data hanging around in memory; it's not like it's a shot in the dark. Is System.gc() systematically bad, or is there some point at which it becomes okay?
So the question is actually double:
Why is or isn't it bad practice to call System.gc()? Is it really merely a hint to the JVM under certain implementations, or is it always a full collection cycle? Are there really garbage collector implementations that can do their work without stopping the world? Please shed some light over the various assertions people have made in the comments to my answer.
Where's the threshold? Is it never a good idea to call System.gc(), or are there times when it's acceptable? If so, what are those times?
The reason everyone always says to avoid System.gc() is that it is a pretty good indicator of fundamentally broken code. Any code that depends on it for correctness is certainly broken; any that rely on it for performance are most likely broken.
You don't know what sort of garbage collector you are running under. There are certainly some that do not "stop the world" as you assert, but some JVMs aren't that smart or for various reasons (perhaps they are on a phone?) don't do it. You don't know what it's going to do.
Also, it's not guaranteed to do anything. The JVM may just entirely ignore your request.
The combination of "you don't know what it will do," "you don't know if it will even help," and "you shouldn't need to call it anyway" are why people are so forceful in saying that generally you shouldn't call it. I think it's a case of "if you need to ask whether you should be using this, you shouldn't"
EDIT to address a few concerns from the other thread:
After reading the thread you linked, there's a few more things I'd like to point out.
First, someone suggested that calling gc() may return memory to the system. That's certainly not necessarily true - the Java heap itself grows independently of Java allocations.
As in, the JVM will hold memory (many tens of megabytes) and grow the heap as necessary. It doesn't necessarily return that memory to the system even when you free Java objects; it is perfectly free to hold on to the allocated memory to use for future Java allocations.
To show that it's possible that System.gc() does nothing, view
JDK bug 6668279
and in particular that there's a -XX:DisableExplicitGC VM option:
By default calls to System.gc() are enabled (-XX:-DisableExplicitGC). Use -XX:+DisableExplicitGC to disable calls to System.gc(). Note that the JVM still performs garbage collection when necessary.
It has already been explained that calling System.gc() may do nothing, and that any code that "needs" the garbage collector to run is broken.
However, the pragmatic reason that it is bad practice to call System.gc() is that it is inefficient. And in the worst case, it is horribly inefficient! Let me explain.
A typical GC algorithm identifies garbage by traversing all non-garbage objects in the heap, and inferring that any object not visited must be garbage. From this, we can model the total work of a garbage collection consists of one part that is proportional to the amount of live data, and another part that is proportional to the amount of garbage; i.e. work = (live * W1 + garbage * W2).
Now suppose that you do the following in a single-threaded application.
System.gc(); System.gc();
The first call will (we predict) do (live * W1 + garbage * W2) work, and get rid of the outstanding garbage.
The second call will do (live * W1 + 0 * W2) work and reclaim nothing. In other words, we have done (live * W1) work and achieved absolutely nothing.
We can model the efficiency of the collector as the amount of work needed to collect a unit of garbage; i.e. efficiency = (live * W1 + garbage * W2) / garbage. So to make the GC as efficient as possible, we need to maximize the value of garbage when we run the GC; i.e. wait until the heap is full. (And also, make the heap as big as possible. But that is a separate topic.)
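To put rough, invented numbers on that model: suppose the live set is 100 MB and W1 = W2 = 1 unit of work per MB. Forcing a collection when only 10 MB of garbage has accumulated costs 110 units to reclaim 10 MB, i.e. 11 units per MB reclaimed; waiting until 400 MB of garbage has accumulated costs 500 units to reclaim 400 MB, i.e. 1.25 units per MB reclaimed.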
If the application does not interfere (by calling System.gc()), the GC will wait until the heap is full before running, resulting in efficient collection of garbage1. But if the application forces the GC to run, the chances are that the heap won't be full, and the result will be that garbage is collected inefficiently. And the more often the application forces GC, the more inefficient the GC becomes.
Note: the above explanation glosses over the fact that a typical modern GC partitions the heap into "spaces", the GC may dynamically expand the heap, the application's working set of non-garbage objects may vary and so on. Even so, the same basic principle applies across the board to all true garbage collectors2. It is inefficient to force the GC to run.
1 - This is how the "throughput" collector works. Concurrent collectors such as CMS and G1 use different criteria to decide when to start the garbage collector.
2 - I'm also excluding memory managers that use reference counting exclusively, but no current Java implementation uses that approach ... for good reason.
Lots of people seem to be telling you not to do this. I disagree. If, after a large loading process like loading a level, you believe that:
You have a lot of objects that are unreachable and may not have been gc'ed, and
You think the user could put up with a small slowdown at this point
then there is no harm in calling System.gc(). I look at it like the C/C++ inline keyword: it's just a hint to the GC that you, the developer, have decided that time/performance is not as important as it usually is and that some of it could be used reclaiming memory.
Advice to not rely on it doing anything is correct. Don't rely on it working, but giving the hint that now is an acceptable time to collect is perfectly fine. I'd rather waste time at a point in the code where it doesn't matter (loading screen) than when the user is actively interacting with the program (like during a level of a game.)
There is one time when I will force collection: when attempting to find out whether a particular object leaks (either native code or a large, complex callback interaction. Oh, and any UI component that so much as glances at Matlab.) This should never be used in production code.
People have been doing a good job explaining why NOT to use it, so I will tell you a couple of situations where you should use it:
(The following comments apply to Hotspot running on Linux with the CMS collector, where I feel confident saying that System.gc() does in fact always invoke a full garbage collection).
1. After the initial work of starting up your application, you may be in a terrible state of memory usage. Half your tenured generation could be full of garbage, meaning that you are that much closer to your first CMS collection. In applications where that matters, it is not a bad idea to call System.gc() to "reset" your heap to the starting state of live data.
2. Along the same lines as #1, if you monitor your heap usage closely, you want to have an accurate reading of what your baseline memory usage is. If the first 2 minutes of your application's uptime is all initialization, your data is going to be messed up unless you force (ahem... "suggest") the full gc up front.
3. You may have an application that is designed to never promote anything to the tenured generation while it is running. But maybe you need to initialize some data up-front that is not-so-huge as to automatically get moved to the tenured generation. Unless you call System.gc() after everything is set up, your data could sit in the new generation until the time comes for it to get promoted. All of a sudden your super-duper low-latency, low-GC application gets hit with a HUGE (relatively speaking, of course) latency penalty for promoting those objects during normal operations.
4. It is sometimes useful to have a System.gc call available in a production application for verifying the existence of a memory leak. If you know that the set of live data at time X should exist in a certain ratio to the set of live data at time Y, then it could be useful to call System.gc() at time X and time Y and compare memory usage; a rough sketch of that kind of measurement follows below.
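A rough sketch of the kind of comparison meant in #4, using only the standard Runtime API (the sampling points and output here are illustrative, not a prescribed procedure):

public final class HeapSampler {
    // Suggest a full collection, then sample the currently used heap.
    public static long usedHeapAfterGc() {
        System.gc(); // still only a hint; it may be ignored
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long atX = usedHeapAfterGc(); // "time X"
        // ... let the application do its work until "time Y" ...
        long atY = usedHeapAfterGc();
        System.out.printf("used at X: %d bytes, used at Y: %d bytes%n", atX, atY);
    }
}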
This is a very bothersome question, and I feel contributes to many being opposed to Java despite how useful of a language it is.
The fact that you can't trust "System.gc" to do anything is incredibly daunting and can easily create a "fear, uncertainty, doubt" feeling about the language.
In many cases, it is nice to deal with memory spikes that you cause on purpose before an important event occurs, which would cause users to think your program is badly designed/unresponsive.
Having the ability to control garbage collection would be a great educational tool, in turn improving people's understanding of how garbage collection works and how to make programs exploit its default behavior as well as controlled behavior.
Let me review the arguments of this thread.
It is inefficient:
Often, the program may not be doing anything, and you know it's not doing anything because of the way it was designed. For instance, it might be doing some kind of long wait with a large wait message box, and at the end you may as well add a call to collect garbage, because the time to run it will take a really small fraction of the time of the long wait but will avoid the GC acting up in the middle of a more important operation.
It is always a bad practice and indicates broken code.
I disagree; it doesn't matter what garbage collector you have. Its job is to track garbage and clean it.
By calling the GC during times when usage is less critical, you reduce the odds of it deciding to collect garbage right when your life relies on specific code being run.
Sure, it might not behave the way you want or expect, but when you do want to call it, you know nothing is happening, and the user is willing to tolerate slowness/downtime. If the System.gc() works, great! If it doesn't, at least you tried. There's simply no downside, unless the garbage collector has inherent side effects that do something horribly unexpected compared to how a garbage collector is supposed to behave if invoked manually, and this by itself causes distrust.
It is not a common use case:
It is a use case that cannot be achieved reliably, but could be if the system was designed that way. It's like making a traffic light where some or all of the buttons don't do anything: it makes you question why the button is there to begin with. JavaScript doesn't have a garbage collection function, so we don't scrutinize it as much for this.
The spec says that System.gc() is a hint that GC should run and the VM is free to ignore it.
What is a "hint"? What is "ignore"? A computer cannot simply take hints or ignore something; it follows strict behavior paths, which may be dynamic and are guided by the intent of the system. A proper answer would include what the garbage collector is actually doing, at the implementation level, that causes it to not perform collection when you request it. Is the feature simply a nop? Are there some kind of conditions that must be met? What are these conditions?
As it stands, Java's GC often seems like a monster that you just don't trust. You don't know when it's going to come or go, you don't know what it's going to do, or how it's going to do it. I can imagine some experts having a better idea of how their garbage collection works on a per-instruction basis, but the vast majority simply hope it "just works", and having to trust an opaque-seeming algorithm to do work for you is frustrating.
There is a big gap between reading about something or being taught something, and actually seeing the implementation of it, the differences across systems, and being able to play with it without having to look at the source code. This creates confidence and a feeling of mastery/understanding/control.
To summarize, there is an inherent problem with answers of the form "this feature might not do anything, and I won't go into details of how to tell when it does do something and when it doesn't, and why it won't or will", often implying that it is simply against the philosophy to try to do it, even if the intent behind it is reasonable.
It might be okay for Java's GC to behave the way it does, or it might not. But to understand it, it is difficult to truly follow in which direction to go to get a comprehensive overview of what you can trust the GC to do and not to do. So it's too easy to simply distrust the language, because the purpose of a language is to have controlled behavior, up to a philosophical extent (it's easy for a programmer, especially a novice, to fall into an existential crisis over certain system/language behaviors), that you are capable of tolerating (and if you can't, you just won't use the language until you have to), and more things you can't control, with no known reason why you can't control them, is inherently harmful.
Sometimes (not often!) you do truly know more about past, current and future memory usage than the runtime does. This does not happen very often, and I would claim never in a web application while normal pages are being served.
Many years ago I worked on a report generator that:
Had a single thread
Read the “report request” from a queue
Loaded the data needed for the report from the database
Generated the report and emailed it out.
Repeated forever, sleeping when there were no outstanding requests.
It did not reuse any data between reports and did not do any caching.
Firstly, as it was not real time and the users expected to wait for a report, a delay while the GC ran was not an issue, but we needed to produce reports at a rate that was faster than they were requested.
Looking at the above outline of the process, it is clear that:
We know there would be very few live objects just after a report had been emailed out, as the next request had not started being processed yet.
It is well known that the cost of running a garbage collection cycle depends on the number of live objects; the amount of garbage has little effect on the cost of a GC run.
That when the queue is empty there is nothing better to do than run the GC.
Therefore clearly it was well worthwhile doing a GC run whenever the request queue was empty; there was no downside to this.
It may be worth doing a GC run after each report is emailed, as we know this is a good time for a GC run. However, if the computer had enough RAM, better results would be obtained by delaying the GC run.
This behaviour was configured on a per-installation basis; for some customers, enabling a forced GC after each report greatly sped up the production of reports. (I expect this was due to low memory on their server and it running lots of other processes, so a well-timed forced GC reduced paging.)
We never detected an installation that did not benefit from a forced GC run every time the work queue was empty.
But, let's be clear, the above is not a common case.
These days I would be more inclined to run each report in a separate process, leaving the operating system to clean up memory rather than the garbage collector, and have the custom queue manager service use multiple worker processes on large servers.
GC efficiency relies on a number of heuristics. For instance, a common heuristic is that write accesses to objects usually occur on objects which were created not long ago. Another is that many objects are very short-lived (some objects will be used for a long time, but many will be discarded a few microseconds after their creation).
Calling System.gc() is like kicking the GC. It means: "all those carefully tuned parameters, those smart organizations, all the effort you just put into allocating and managing the objects such that things go smoothly, well, just drop the whole lot, and start from scratch". It may improve performance, but most of the time it just degrades performance.
To use System.gc() reliably(*) you need to know how the GC operates in all its fine details. Such details tend to change quite a bit if you use a JVM from another vendor, or the next version from the same vendor, or the same JVM but with slightly different command-line options. So it is rarely a good idea, unless you want to address a specific issue in which you control all those parameters. Hence the notion of "bad practice": that's not forbidden, the method exists, but it rarely pays off.
(*) I am talking about efficiency here. System.gc() will never break a correct Java program. Nor will it conjure extra memory that the JVM could not have obtained otherwise: before throwing an OutOfMemoryError, the JVM does the job of System.gc(), even if as a last resort.
Maybe I write crappy code, but I've come to realize that clicking the trash-can icon in the Eclipse and NetBeans IDEs is a 'good practice'.
Some of what I am about to write is simply a summarization of what has already been written in other answers, and some is new.
The question "Why is it bad practice to call System.gc()?" does not compute. It assumes that it is bad practice, while it is not. It greatly depends on what you are trying to accomplish.
The vast majority of programmers out there have no need for System.gc(), and it will never do anything useful to them in the vast majority of use cases. So, for the majority, calling it is bad practice because it will not do whatever it is that they think it will do, it will only add overhead.
However, there are a few rare cases where invoking System.gc() is actually beneficial:
When you are absolutely sure that you have some CPU time to spare now, and you want to improve the throughput of code that will run later. For example, a web server that discovers that there are no pending web requests at the moment can initiate garbage collection now, so as to reduce the chances that garbage collection will be needed during the processing of a barrage of web requests later on. (Of course this can hurt if a web request arrives during collection, but the web server could be smart about it and abandon collection if a request comes in.) Desktop GUIs are another example: on the idle event (or, more broadly, after a period of inactivity,) you can give the JVM a hint that if it has any garbage collection to do, now is better than later.
When you want to detect memory leaks. This is often done in combination with a debug-mode-only finalizer, or with the java.lang.ref.Cleaner class from Java 9 onwards. The idea is that by forcing garbage collection now, and thus discovering memory leaks now as opposed to some random point in time in the future, you can detect the memory leaks as soon as possible after they have happened, and therefore be in a better position to tell precisely which piece of code has leaked memory and why. (Incidentally, this is also one of, or perhaps the only, legitimate use cases for finalizers or the Cleaner. The practice of using finalization for recycling of unmanaged resources is flawed, despite being very widespread and even officially recommended, because it is non-deterministic. For more on this topic, read this: https://blog.michael.gr/2021/01/object-lifetime-awareness.html)
When you are measuring the performance of code, (benchmarking,) in order to reduce/minimize the chances of garbage collection occurring during the benchmark, or in order to guarantee that whatever overhead is suffered due to garbage collection during the benchmark is due to garbage generated by the code under benchmark, and not by unrelated code. A good benchmark always starts with an as thorough as possible garbage collection.
When you are measuring the memory consumption of code, in order to determine how much garbage is generated by a piece of code. The idea is to perform a full garbage collection so as to start in a clean state, run the code under measurement, obtain the heap size, then do another full garbage collection, obtain the heap size again, and take the difference. (Incidentally, the ability to temporarily suppress garbage collection while running the code under measurement would be useful here, alas, the JVM does not support that. This is deplorable.)
Note that of the above use cases, only one is in a production scenario; the rest are in testing / diagnostics scenarios.
This means that System.gc() can be quite useful under some circumstances, which in turn means that it being "only a hint" is problematic.
(For as long as the JVM is not offering some deterministic and guaranteed means of controlling garbage collection, the JVM is broken in this respect.)
Here is how you can turn System.gc() into a bit less of a hint:
// requires: import java.lang.ref.WeakReference;
private static void runGarbageCollection()
{
    // Hold only a weak reference to a sentinel object; once that reference has
    // been cleared, at least one garbage collection cycle must have taken place.
    for( WeakReference<Object> ref = new WeakReference<>( new Object() ); ; )
    {
        System.gc(); //optional
        Runtime.getRuntime().runFinalization(); //optional
        if( ref.get() == null )
            break;
        Thread.yield();
    }
}
This still does not guarantee that you will get a full GC, but it gets a lot closer. Specifically, it will give you some amount of garbage collection even if the -XX:+DisableExplicitGC VM option has been used. (So, it truly uses System.gc() as a hint; it does not rely on it.)
Yes, calling System.gc() doesn't guarantee that it will run, it's a request to the JVM that may be ignored. From the docs:
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects
It's almost always a bad idea to call it because the automatic memory management usually knows better than you when to gc. It will do so when its internal pool of free memory is low, or if the OS requests some memory be handed back.
It might be acceptable to call System.gc() if you know that it helps. By that I mean you've thoroughly tested and measured the behaviour of both scenarios on the deployment platform, and you can show it helps. Be aware though that the gc isn't easily predictable - it may help on one run and hurt on another.
First, there is a difference between spec and reality. The spec says that System.gc() is a hint that GC should run and the VM is free to ignore it. The reality is, the VM will never ignore a call to System.gc().
Calling GC comes with a non-trivial overhead to the call, and if you do this at some random point in time it's likely you'll see no reward for your efforts. On the other hand, a naturally triggered collection is very likely to recoup the costs of the call. If you have information that indicates that a GC should be run, then you can make the call to System.gc() and you should see benefits. However, it's my experience that this happens only in a few edge cases, as it's very unlikely that you'll have enough information to understand if and when System.gc() should be called.
One example listed here is hitting the garbage can in your IDE. If you're off to a meeting, why not hit it? The overhead isn't going to affect you, and the heap might be cleaned up for when you get back. Do this in a production system and frequent calls to collect will bring it to a grinding halt! Even occasional calls such as those made by RMI can be disruptive to performance.
In my experience, using System.gc() is effectively a platform-specific form of optimization (where "platform" is the combination of hardware architecture, OS, JVM version and possibly more runtime parameters such as the RAM available), because its behaviour, while roughly predictable on a specific platform, can (and will) vary considerably between platforms.
Yes, there are situations where System.gc() will improve (perceived) performance. One example is if delays are tolerable in some parts of your app, but not in others (the game example cited above, where you want GC to happen at the start of a level, not during the level).
However, whether it will help or hurt (or do nothing) is highly dependent on the platform (as defined above).
So I think it is valid as a last-resort platform-specific optimization (i.e. if other performance optimizations are not enough). But you should never call it just because you believe it might help (without specific benchmarks), because chances are it will not.
Since objects are dynamically allocated by using the new operator, you might be wondering how such objects are destroyed and their memory released for later reallocation.
In some languages, such as C++, dynamically allocated objects must be manually released by use of a delete operator. Java takes a different approach; it handles deallocation for you automatically.
The technique that accomplishes this is called garbage collection. It works like this: when no references to an object exist, that object is assumed to be no longer needed, and the memory occupied by the object can be reclaimed. There is no explicit need to destroy objects as in C++.
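A tiny illustration of that rule (the array sizes here are arbitrary): once the only reference is reassigned, the original object is unreachable and becomes eligible for collection at some unspecified later time:

public class GcEligibility {
    public static void main(String[] args) {
        int[] data = new int[1_000_000]; // reachable through 'data'
        data = new int[10];              // the first array now has no references
        // The 1,000,000-element array is eligible for garbage collection,
        // but there is no guarantee about when (or whether) it is reclaimed.
        System.out.println(data.length);
    }
}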
Garbage collection only occurs sporadically (if at all) during the execution of your program. It will not occur simply because one or more objects exist that are no longer used. Furthermore, different Java run-time implementations will take varying approaches to garbage collection, but for the most part, you should not have to think about it while writing your programs.