Following is the scenario I need to solve. I am stuck between two solutions.
I need to maintain a cache of data fetched from database to be shown on a Swing GUI.
Whenever my JVM memory usage exceeds 70% of the allocated memory, I need to warn the user about excessive usage. Once usage exceeds 80%, I have to halt all database querying, clean up the existing cache fetched as part of the user's operations, and notify the user. During the cleanup process I will manually delete some data based on some rules and instruct the JVM to run a GC. Whenever a GC occurs, if the cleanup brings memory back down to 60% of the allocated memory, I need to restart all the database handling and give control back to the user.
For checking JVM memory statistics I found the following two solutions. I cannot decide which is the best way, and why.
Runtime.freeMemory() - A thread created to run every 10 seconds and check the free memory; if usage exceeds the limits mentioned, popups will notify the user, and methods will be called to halt the operations and free up memory.
MemoryPoolMXBean.getUsage() - Java 5 introduced JMX to get a snapshot of the memory at runtime. With JMX I cannot use the threshold notification, since it only notifies when memory reaches/exceeds the given threshold; it won't notify me when memory drops back down. The only way to use it is to poll MemoryMXBean and check the memory statistics over a period.
In the case of polling, it seems to me that both implementations end up being the same.
Please suggest the advantages of each method, and whether there are any alternatives or corrections to the methods in use.
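For reference, a minimal polling sketch of the second option using MemoryMXBean (the thresholds and the 10-second period come from the scenario above; the handler bodies are placeholders, and the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.Timer;
import java.util.TimerTask;

public class HeapPoller {
    /** Fraction of the max heap currently used, from the MemoryMXBean snapshot. */
    static double usedFraction(MemoryUsage heap) {
        return (double) heap.getUsed() / heap.getMax();
    }

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        Timer timer = new Timer("heap-poller", true); // daemon timer
        timer.schedule(new TimerTask() {
            @Override public void run() {
                double used = usedFraction(memory.getHeapMemoryUsage());
                if (used > 0.80) {
                    // halt DB queries, clean up the cache, request GC, notify user
                } else if (used > 0.70) {
                    // warn the user about excessive usage
                }
            }
        }, 0, 10_000); // poll every 10 seconds
        // in a real Swing app, the event dispatch thread keeps the JVM alive
    }
}
```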
Just a side note: Runtime.freeMemory() doesn't state the amount of memory that's left for allocation; it's just the amount of memory that's free within the currently allocated memory (which is initially smaller than the maximum memory the VM is configured to use, but grows over time).
When starting a VM, the max memory (Runtime.maxMemory()) just defines the upper limit of memory that the VM may allocate (configurable using the -Xmx VM option).
The total memory (Runtime.totalMemory()) is the initial size of the memory allocated for the VM process (configurable using the -Xms VM option), and will dynamically grow every time you allocate more than the currently free portion of it (Runtime.freeMemory()), until it reaches the max memory.
The metric you're interested in is the memory available for further allocation:
long usableFreeMemory = Runtime.getRuntime().maxMemory()
        - Runtime.getRuntime().totalMemory()
        + Runtime.getRuntime().freeMemory();
or:
double usedPercent = (double) (Runtime.getRuntime().totalMemory()
        - Runtime.getRuntime().freeMemory()) / Runtime.getRuntime().maxMemory();
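A minimal sketch wiring this metric to the 70/80/60% rules from the question (the class, method names, and threshold constants are illustrative; the decision logic is kept as a pure function so it is easy to test):

```java
public class MemoryThresholds {
    static final double WARN = 0.70, HALT = 0.80, RESUME = 0.60;

    /** Fraction of the maximum heap currently in use. */
    static double usedFraction() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory();
    }

    /** Maps a used fraction and the current state to an action. */
    static String action(double usedFraction, boolean halted) {
        if (halted) return usedFraction <= RESUME ? "RESUME" : "STAY_HALTED";
        if (usedFraction >= HALT) return "HALT";
        if (usedFraction >= WARN) return "WARN";
        return "OK";
    }

    public static void main(String[] args) {
        // poll this from a background thread in a real application
        System.out.println(action(usedFraction(), false));
    }
}
```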
The usual way to handle this sort of thing is to use WeakReferences and SoftReferences. You need to use both - the weak reference means you are not holding multiple copies of things, and the soft references mean that the GC will hang onto things until it starts running out of memory.
If you need to do additional cleanup, then you can add references to queues, and override the queue notification methods to trigger the cleanup. It's all good fun, but you do need to understand what these classes do.
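As a sketch of the pattern described (a soft-value cache with a ReferenceQueue driving cleanup; the class is illustrative, not a production cache):

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

/** Cache whose values the GC may reclaim under memory pressure. */
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();
    private final Map<Reference<V>, K> keysByRef = new HashMap<>();

    public void put(K key, V value) {
        drainQueue();
        SoftReference<V> ref = new SoftReference<>(value, queue);
        map.put(key, ref);
        keysByRef.put(ref, key);
    }

    public V get(K key) {
        drainQueue();
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    /** Removes entries whose values the GC has already cleared. */
    private void drainQueue() {
        Reference<? extends V> ref;
        while ((ref = queue.poll()) != null) {
            K key = keysByRef.remove(ref);
            if (key != null) map.remove(key);
            // additional cleanup (closing resources, etc.) would go here
        }
    }

    public int size() { drainQueue(); return map.size(); }
}
```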
It is entirely normal for a JVM to go up to 100% memory usage and then back down to, say, 10% after a GC, and to do this every few seconds.
You shouldn't need to try managing the memory in this way.
You cannot say how much memory is being retained until a full GC has been run.
I suggest you work out what you are really trying to achieve and look at the problem another way.
The requirements you mention clearly contradict how garbage collection works in a JVM.
Because of this behaviour of the JVM, it will be very hard to warn your users in a correct way.
Altogether stopping all database manipulation, cleaning things up, and starting again really is not the way to go.
Let the JVM do what it is supposed to do: handle all memory-related matters for you.
Modern generations of the JVM are very good at it, and with some fine-tuning of the GC parameters you will get much cleaner memory handling than by forcing things yourself.
Articles like http://www.kodewerk.com/advice_on_jvm_heap_tuning_dont_touch_that_dial.htm mention the pros and cons and offer a nice explanation of what the VM does for you
I've only used the first method for a similar task and it was OK.
One thing you should note, for both methods, is to implement some kind of debouncing - i.e. once you recognize you've hit 70% of memory, wait for a minute (or any other time you find appropriate) - GC can run at that time and clean up lots of memory.
If you implement a Runtime.freeMemory() graph in your system you'll see how the memory is constantly going up and down, up and down.
VisualVM is a bit nicer than JConsole because it gives you a nice visual Garbage Collector view.
Look into JConsole. It graphs the information you need so it is a matter of adapting this to your needs (given that you run on a Sun Java 6).
This also allows you to detach the monitoring process from the application you want to observe.
Very late after the original post, I know, but I thought I'd post an example of how I've done it. Hopefully it'll be of some use to someone (I stress, it's a proof-of-principle example, nothing else... not particularly elegant either :) )
Just stick these two functions in a class, and it should work.
EDIT: Oh, and you'll need:
import java.util.ArrayList;
import java.util.List;
public static int MEM() {
    // available memory in MB: max - (total - free); divide before casting to avoid int overflow on large heaps
    return (int) ((Runtime.getRuntime().maxMemory()
            - Runtime.getRuntime().totalMemory()
            + Runtime.getRuntime().freeMemory()) / 1024 / 1024);
}

public static void main(String[] args) throws InterruptedException {
    List<Double> list = new ArrayList<>();
    // get available memory before filling list
    int initMem = MEM();
    int lowMemWarning = (int) (initMem * 0.2);
    int highMem = (int) (initMem * 0.8);
    int iteration = 0;
    while (true) {
        // use up some memory
        list.add(Math.random());
        // report
        if (++iteration % 10000 == 0) {
            System.out.printf("Available Memory: %dMb \tListSize: %d%n", MEM(), list.size());
            // if low on memory, clear list and await garbage collection before continuing
            if (MEM() < lowMemWarning) {
                System.out.printf("Warning! Low memory (%dMb remaining). Clearing list and cleaning up.%n", MEM());
                // clear list
                list = new ArrayList<>(); // obviously, here is a good place to put your warning logic
                // ensure garbage collection occurs before continuing to re-add to the list,
                // to avoid immediately entering this block again
                while (MEM() < highMem) {
                    System.out.printf("Awaiting gc...(%dMb remaining)%n", MEM());
                    // give it a nudge
                    Runtime.getRuntime().gc();
                    Thread.sleep(250);
                }
                System.out.printf("gc successful! Continuing to fill list (%dMb remaining). List size: %d%n", MEM(), list.size());
                Thread.sleep(3000); // just to view output
            }
        }
    }
}
EDIT: This approach still relies on sensible setting of memory in the jvm using -Xmx, however.
EDIT2: It seems that the gc request line really does help things along, at least on my jvm. ymmv.
I'm currently attempting to write a tensor-processing/deep learning library in Java similar to PyTorch or Tensorflow.
Tensors reference MemoryHandles, which hold the native memory needed for the tensor data.
During training, tensor instances are created rapidly, but nevertheless the JVM heap itself stays at about 100Mb-200Mb, and thus the garbage collector is never prompted to collect.
This results in the memory footprint of the application exploding and consuming upwards of 16GB of RAM, due to how much native memory is needed to store the tensor data.
The memory handles themselves are allocated via a central MemoryManager, which creates PhantomReferences to the handed-out handles; after the object is garbage collected, the associated native memory is correctly freed.
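For readers unfamiliar with the pattern, a minimal sketch of such a manager (here using java.lang.ref.Cleaner, Java 9+, which wraps the PhantomReference plumbing; all names, the fake address, and the byte accounting are illustrative):

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicLong;

/** Tracks native allocations and frees them once the owning handle is unreachable. */
public class MemoryManager {
    private static final Cleaner CLEANER = Cleaner.create();
    private static final AtomicLong nativeBytes = new AtomicLong();

    /** Stand-in for a handle that owns a block of native memory. */
    public static class MemoryHandle {
        final long address; // would come from malloc/JNI in real code
        final Cleaner.Cleanable cleanable;

        MemoryHandle(long address, long size) {
            this.address = address;
            // The cleanup action must not capture `this`, or the handle never becomes unreachable.
            this.cleanable = CLEANER.register(this, new NativeFree(address, size));
        }
    }

    /** Cleanup action; runs after GC, or when clean() is called explicitly. */
    private static class NativeFree implements Runnable {
        final long address, size;
        NativeFree(long address, long size) { this.address = address; this.size = size; }
        public void run() {
            // real code would call free(address) via JNI here
            nativeBytes.addAndGet(-size);
        }
    }

    public static MemoryHandle allocate(long size) {
        nativeBytes.addAndGet(size);
        return new MemoryHandle(42L /* fake address */, size);
    }

    public static long outstandingBytes() { return nativeBytes.get(); }
}
```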
What makes this problem hard
Why is the GC not smart enough to instantly clean these tensors?
Operations such as .matmul(), .plus() etc. are not immediately executed, but rather recorded into a Graph, where nodes represent either variables or operations. This graph is necessary for backpropagation and thus creating it is not optional.
This creates a rather complicated reference structure that is hard to unravel for a GC.
Attempted solutions
I have attempted various less than ideal ways to fix this problem:
Insanely small JVM heap size
-Xmx100M
By forcing the Garbage collector to work with insanely low heap sizes, the garbage collector keeps the native memory footprint bearable.
This introduces very little slowdown to the training loop in the cases I have evaluated and would be bearable, if finding the ideal number of MB to make the GC do what you want weren't so painful. Also, if the memory usage of your application isn't more or less constant, this approach also bursts into flames.
Periodic full gc
Running a full gc for every X Mb of natively allocated memory.
This introduces abysmal slow down to the training loop in the cases I have evaluated.
This is the only "in-application" fix that I can think of, meaning, that the user is not forced to use weird jvm args when running their program.
While -XX:+UseZGC and -XX:+ExplicitGCInvokesConcurrent show some improvement, the situation remains rather bad.
Both these solutions do in fact keep the memory footprint of the application at bay, which goes to show that IF the GC catches all the un-referenced MemoryHandles, everything is freed correctly.
Thus my question:
When Jvm applications experience high allocation rates, the GC usually kicks in hard.
Now the problem here is that we effectively have high allocation rates, but that is not at all reflected in the JVM heap. If you put yourself into the shoes of the garbage collector, the last place you would suspect you should focus your efforts is freeing a Java object consisting solely of an 8-byte long.
If however it was possible to hint the GC to try harder to free objects of the MemoryHandle type, I suspect these problems would largely disappear. So my question would be: Is this possible?
I wouldn't mind writing hacky native code, if necessary.
Another idea would be to use some JVM argument to make the full GC less aggressive, more in line with the slight slowdown that I experienced with -Xmx100m.
If this is in fact not possible, are there alternative solutions to solving this problem?
Surely I can't be the first person to attempt to write a Java library with large native resources.
I think that I have now figured out a solution that works as well as it can.
The problem
If you face a similar issue, you probably have code that fits some of these criteria:
A high allocation rate of small objects, which hold large native resources
Objects referencing each other in complicated ways that are hard for the GC to untangle
No place in the code where you can safely determine that the resources are no longer in use
Requirements for a potential solution
Your requirements probably are:
Don't bottleneck the loop that allocates the native handles
Nearly instantaneous cleanup after the native handle becomes unreferenced
The tradeoff
It turns out you cannot accomplish both these requirements at once.
You unfortunately have to choose between one or the other.
If you don't want to bottleneck the loop that allocates these native handles at a high rate, you need to trade RAM to do that.
If you want instantaneous cleanup after the native handle becomes unreferenced, you have to sacrifice the execution speed of the code that allocates the native handles.
The (hacky) solution
Create a mechanism such that you can asynchronously request a full GC to be performed.
private final AtomicBoolean shouldRunGC = new AtomicBoolean(false);

private final Thread gcThread = new Thread(() -> {
    while (true) {
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        if (shouldRunGC.getAndSet(false)) {
            System.gc();
        }
    }
}, "GC-Invoker-Thread");

{
    gcThread.setDaemon(true);
    gcThread.start();
}
Ideally, you have a region of code that is loosely associated with cleanup of these handle objects. It doesn't have to mean that these objects can be safely disposed of at this point in time; it just has to mean that the object is >probably< safe to delete. This callsite merely serves as a statistical metric to determine the best interval in which to trigger the garbage collection.
You should also know the size of your native resource, or alternatively have an estimate of how bad it would be to keep a given object around.
Alternatively, you could also place this at the point of allocation of your native handles, but note that the statistical metric you collect there will be less effective.
This is an example of such a method in my tensor processing library Sci-Core:
/**
 * Drops the history of how this tensor was computed.
 * This is useful e.g. when the tensor was changed by the optimizer,
 * and thus backpropagation back into the last training step would be nonsensical.
 * Thus, we no longer need to keep a record of how the tensor was computed.
 * Executes all operations to compute the value of the specified tensor contained in the graph, if it is not already computed.
 * @param tensor the tensor to drop the computation history for
 */
public void dropHistory(ITensor tensor) {
// for all nodes now dropped from the graph
...
nBytesDeletedSinceLastAsyncGC += value.getNumBytes();
nBytesDeletedSinceLastOnSameThreadGC += value.getNumBytes();
...
if (nBytesDeletedSinceLastAsyncGC > 100_000_000) { // 100 Mb
shouldRunGC.set(true);
nBytesDeletedSinceLastAsyncGC = 0;
}
if (nBytesDeletedSinceLastOnSameThreadGC > 2_000_000_000) { // 2 GB
System.gc();
nBytesDeletedSinceLastOnSameThreadGC = 0;
}
}
To fight against bottlenecking your allocation loop, you can use the following JVM arguments:
-XX:+UseZGC -XX:+ExplicitGCInvokesConcurrent -XX:MaxGCPauseMillis=1
Why would this work?
Triggering regular garbage collection seems to make the garbage collector interested in cleaning the very small handle objects (among basically every other object that you create in your application). You still don't have "prioritization" for your handles; they just happen to also be garbage collected. If your application, in addition to the native handle objects, also allocates a significant amount of other small objects, the effectiveness of this technique will be significantly reduced.
Note however, that triggering the Garbage collector is expensive and thus the maximum value for nBytesDeletedSinceLastAsyncGC and nBytesDeletedSinceLastOnSameThreadGC must be carefully chosen.
Running the garbage collector asynchronously is less expensive, as it will not bottleneck your allocation loop very much, but it is also less effective than calling the garbage collector on the same thread the objects are allocated on. So, doing both at carefully chosen intervals can probably get you a good compromise between the execution speed of your allocation loop and the memory footprint.
Hello!
I'm a beginner Java and Android developer and I've been having trouble lately dealing with my app's memory management. I will break this text into sections, in order to make it clearer and readable.
A brief description of my app
It's a game that consists of several stages (levels). Each stage has a starting point for the player and an exit, which leads the player to the next stage. Each stage has its own set of obstacles. Currently, when the player reaches the final stage (I've only created 4 so far) he/she automatically goes back to the first stage (level 1).
An abstract class called GameObject (extends android.view.View) defines the base structure and behaviour for the player and all the other objects (obstacles, etc.) present in the game. All the objects (that are, essentially, views) are drawn in a custom view created by me (extends FrameLayout). The game logic and the game loop are handled by a side thread (gameThread). The stages are created by retrieving metadata from XML files.
The problem
Besides all the possible memory leaks on my code (all of which I've been working hard to find and solve), there is a strange phenomenon related to the garbage collector happening. Instead of describing it with words and risk getting you confused, I will use images. As Confucius said, "An image is worth a thousand words". Well, in this case, I've just saved you from reading 150,000 words, since my GIF below has 150 frames.
Description: the first image represents my app's memory usage when the "stage 1" is first loaded. The second image (GIF) firstly represents my app's memory usage timeline when the "stage 1" is loaded for the second time (this happens, as described earlier, when the player beat the last stage) and is followed by four garbage collections forcefully initiated by me.
As you might have noticed, there is a huge difference (almost 50MB) in memory usage between the two situations. When "Stage 1" is first loaded, when the game starts, the app is using 85MB of memory. When the same stage is loaded for the second time, a little bit later, the memory usage is already at 130MB! That's probably due to some bad coding on my part, and I'm not here because of this. Have you noticed how, after I forcefully performed 2 (actually 4, but only the first 2 mattered) garbage collections, the memory usage went back to its "normal state" (the same memory usage as when the stage was first loaded)? That's the weird phenomenon I was talking about.
The question
If the garbage collector is supposed to remove from memory objects that are no longer being referenced (or, at least, have only weak references), why is the "trash memory" that you saw above being removed only when I forcefully call the GC, and not on the GC's normal executions? I mean, if the garbage collection manually initiated by me could remove this "trash", then the normal GC executions would be able to remove it as well. Why isn't that happening?
I've even tried calling System.gc() when the stages are being switched, but, even though the garbage collection happens, this "trash" memory isn't removed like when I manually perform the GC. Am I missing something important about how the garbage collector works, or about how Android implements it?
Final considerations
I've spent days searching, studying and making modifications on my code but I could not find out why this is happening. StackOverflow is my last resort. Thank you!
NOTE: I was going to post some possibly relevant part of my app's source code, but since the question is already too long I will stop here. If you feel the need to check some of the code, just let me know and I will edit this question.
What I have already read:
How to force garbage collection in Java?
Garbage collector in Android
Java Garbage Collection Basics by Oracle
Android Memory Overview
Memory Leak Patterns in Android
Avoiding Memory Leaks in Android
Manage your app's memory
What you need to know about Android app memory leaks
View the Java heap and memory allocations with Memory Profiler
LeakCanary (memory leak detection library for Android and Java)
Android Memory Leak and Garbage Collection
Generic Android Garbage Collection
How to clear dynamically created view from memory?
How References Work in Android and Java
Java Garbage Collector - Not running normally at regular intervals
Garbage Collection in android (Done manually)
... and more I couldn't find again.
Garbage collection is complicated, and different platforms implement it differently. Indeed, different versions of the same platform implement garbage collection differently. (And more ... )
A typical modern collector is based on the observation that most objects die young; i.e. they become unreachable soon after they are created. The heap is then divided into two or more "spaces"; e.g. a "young" space and an "old" space.
The "young" space is where new objects are created, and it is collected frequently. The "young" space tends to be smaller, and a "young" collection happens quickly.
The "old" space is where long-lived objects end up, and it is collected infrequently. An "old" space collection tends to be more expensive. (For various reasons.)
Objects that survive a number of GC cycles in the "young" space get "tenured"; i.e. they are moved to the "old" space.
Occasionally we may find that we need to collect the new and old spaces at the same time. This is called a full collection. A full GC is the most expensive, and typically "stops the world" for a relatively long time.
(There are all sorts of other clever and complex things ... which I won't go into.)
Your question is why doesn't the space usage drop significantly until you call System.gc().
The answer is basically that this is the efficient way to do things.
The real goal of collection is not to free as much memory all of the time. Rather, the goal is to ensure that there is enough free memory when it is needed, and to do this either with minimum CPU overheads or a minimum of GC pauses.
So in normal operation, the GC will behave as above: do frequent "young" space collections and less frequent "old" space collections, and the collections will run "as required".
But when you call System.gc() the JVM will typically try to get back as much memory as possible. That means it does a "full gc".
Now, I think you said it takes a couple of System.gc() calls to make a real difference; that could be related to the use of finalize methods or Reference objects or similar. It turns out that finalizable objects and References are processed after the main GC has finished, by a background thread. Only after that are the objects actually in a state where they can be collected and deleted, so another GC is needed to finally get rid of them.
Finally, there is the issue of the overall heap size. Most VMs request memory from the host operating system when the heap is too small, but are reluctant to give it back. The Oracle collectors note the free space ratio at the end of successive "full" collections. They only reduce the overall size of the heap if the free space ratio is "too high" after a number of GC cycles. There are a number of reasons that the Oracle GCs take this approach:
Typical modern GCs work most efficiently when the ratio of garbage to non-garbage objects is high. So keeping the heap large aids efficiency.
There is a good chance that the application's memory requirement will grow again. But the GC needs to run to detect that.
A JVM repeatedly giving memory back to the OS and re-requesting it is potentially disruptive for the OS virtual memory algorithms.
It is problematic if the OS is short of memory resources; e.g. JVM: "I don't need this memory. Have it back", OS: "Thanks", JVM: "Oh ... I need it again!", OS: "Nope", JVM: "OOME".
Assuming that the Android collector works the same way, that is another explanation for why you had to run System.gc() multiple times to get the heap size to shrink.
And before you start adding System.gc() calls to your code, read Why is it bad practice to call System.gc()?.
I had the same problem in my app. I see you have understood the GC; try watching a video on why the GC is needed. Try adding this code to your Application class, under the override of onCreate (the code is in Kotlin).
Here is the whole class:
open class _appName_() : Application() {
    private var appKilled = false

    override fun onCreate() {
        super.onCreate()
        thread {
            while (!appKilled) {
                Thread.sleep(6000)
                System.runFinalization()
                Runtime.getRuntime().gc()
                System.gc()
            }
        }
    }

    override fun onTerminate() {
        super.onTerminate()
        appKilled = true
    }
}
This bit of code makes the GC get called every 6 seconds.
I have troubles with Java memory consumption.
I'd like to say to Java something like this: "you have 8GB of memory, please use it, and only it. Only if you really can't put all your resources in this memory pool, then fail with OOM".
I know, there are default parameters like -Xmx - they limit only the heap. There are also plenty of other parameters, I know. The problems with these parameters are:
They aren't relevant. I don't want to limit the heap size to 6GB (and trust that native memory won't take more than 2GB). I do want to limit all the memory (heap, native, whatever). And do that effectively, not just saying "-Xmx1GB" - to be safe.
There are too many different parameters related to memory, and I don't know how to configure all of them to achieve the goal.
So, I don't want to go there and care about heap, perm and whatever types of memory. My high-level expectation is: since there is only 8GB, and some static memory is needed - take the static memory from the 8GB, and carefully split the remaining memory between other dynamic memory entities.
Also, ulimit and similar things don't work. I don't want to kill the Java process once it consumes more memory than expected. I want Java to do its best not to reach the limit in the first place, and only if it really, really can't, kill the process.
And I'm OK to define even 100 java parameters, why not. :) But then I need assistance with the full list of needed parameters (for, say, Java 8).
Have you tried -XX:MetaspaceSize?
Is this what you need?
Please, read this article: http://karunsubramanian.com/websphere/one-important-change-in-memory-management-in-java-8/
Keep in mind that this is only valid for Java 8.
AFAIK, there is no java command line parameter or set of parameters that will do that.
Your best bet (IMO) is to set the max heap size and the max metaspace size and hope that other things are going to be pretty static / predictable for your application. (It won't cover the size of the JVM binary and it probably won't cover native libraries, memory mapped files, stacks and so on.)
In a comment you said:
So I'm forced to have a significant amount of memory unused to be safe.
I think you are worrying about the wrong thing here. Assuming that you are not constrained by address space or swap space limitations, memory that is never used doesn't matter.
If a page of your address space is not used, the OS will (in the long term) swap it out, and give the physical RAM page to something else.
Pages in the heap won't be in that situation in a typical Java application. (Address space pages will cycle between in-use and free as the GC moves objects within and between "spaces".)
However, the flip-side is that a GC needs the total heap size to be significantly larger than the sum of the live objects. If too much of the heap is occupied with reachable objects, the interval between garbage collection runs decreases, and your GC ergonomics suffer. In the worst case, a JVM can grind to a halt as the time spent in the GC tends to 100%. Ugly. The GC overhead limit mechanism prevents this, but that just means that your JVM gets an OOME sooner.
So, in the normal heap case, a better way to think about it is that you need to keep a portion of memory "unused" so that the GC can operate efficiently.
This is related to my question Java Excel POI stops after multiple execution by quartz.
My program stops unexpectedly after a few iterations. I tried profiling and found that I was consuming a lot of heap memory per iteration (and there's a memory leak somewhere... haven't found the bugger yet). So, as a temporary solution, I tried inserting System.gc(); at the end of each complete execution of the program (kindly read the linked question for a brief description of the program). I was not expecting much, maybe a bit more heap space available after each iteration. But it appears that the program uses less heap memory when I insert System.gc();.
The top graph shows the program running with System.gc(); while the bottom graph is the one without. As you can see, the top graph shows that I'm only using less than 100mb after 4 iterations of the program, while the bottom graph shows over 100mb in usage for the same number of iterations. Can anyone clarify how and why System.gc(); causes this effect on my heap? Are there any disadvantages if I were to use this in my program? Or am I completely hopeless at programming and should take up photography instead?
Note that I inserted the GC at the end of each program iteration, so I assume that heap usage must be the same as without the GC inserted until execution reaches the System.gc(); command.
Thanks!
Can anyone clarify how and why System.gc(); causes this effect in my heap?
System.gc is a request for the Garbage Collector to run. Note that I have used request and not trigger in my statement. The GC, based upon the heap state, might or might not carry on with collection.
If there are any disadvantages if I were to use this in my program?
From experience, the GC works best when left alone. In your example you shouldn't worry about or use System.gc, because the GC will run when it is best for it to run, and manually requesting it might reduce performance. Even though it is only a small difference, you can observe that "time spent on gc" is better in the second graph than in the first.
As per memory, both graphs are OK. It seems your max heap is a bit high; hence the GC did not run in the second graph. If it had really been required, it would have run.
As per the Java specs, calling gc() does not guarantee that it will run; you only hint to the JVM that you need it to run, so the result is unreliable (you should avoid calling gc() no matter what). But in your case, since the heap is reaching critical limits incrementally, that's perhaps why your hints are being executed.
The GC usually runs based on specific algorithms to prevent the heap from being exhausted; when it fails to reclaim the much-needed space and there is no more heap for your app to survive on, you'll face an OutOfMemoryError.
While the GC is running, your application will experience some pauses as a result of its activities, so you won't really want it to run more often!
Your best bet is to solve the leak and practice better memory management for a healthy runtime experience.
Using System.gc() shouldn't impact the heap size allocated to the JVM. The heap size depends only on the startup arguments we provide to the JVM. I recommend running the same program 3-4 times and taking average values, with and without System.gc().
Coming back to the problem of finding the memory leak: I recommend using JProfiler or other tools that will tell you the exact memory footprint and the different objects in the heap.
Last but not least: you are a reasonable programmer. No need to go for a photo shoot :)
Let's say I have a Java application which does roughly the following:
Initialize (takes a long time because this is complicated)
Do some stuff quickly
Wait idly for a long time (your favorite mechanism here)
Go to step 2.
Is there a way to encourage or force the JVM to flush its memory out to disk during long periods of idleness? (e.g. at the end of step 2, make some function call that effectively says "HEY JVM! I'm going to be going to sleep for a while.")
I don't mind using a big chunk of virtual memory, but physical memory is at a premium on the machine I'm using because there are many background processes.
The operating system should handle this, I'd think.
Otherwise, you could manually store your application to disk or database post-initialization, and do a quicker initialization from that data, maybe?
Instead of having your program sit idle and use up resources, why not schedule it with cron? Or better yet, since you're using Java, schedule it with Quartz? Do your best to cache elements of your lengthy initialization procedure so you don't have to pay a big penalty each time the scheduled task runs.
The very first thing you must make sure of is that your objects are garbage collectable. But that's just the first step.
Secondly, the memory used by the JVM may not be returned to the OS at all.
For instance, let's say you have 100mb of Java objects; your VM size will be approximately 100mb. After garbage collection you may reduce the heap usage to 10mb, but the VM will stay at around 100mb. This strategy is used so the VM has memory available for new objects.
To have the application returning "physical" memory to the system you have to check if your VM supports such a thing.
There are additional VM options that may allow your app to return more memory to the OS:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
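The intended semantics of those two ratios can be sketched numerically (the 40/70 thresholds match the flags above; the heap sizes are made-up example numbers):

```java
/** Illustrates the Min/MaxHeapFreeRatio bounds with made-up numbers. */
public class HeapFreeRatio {
    /** Percentage of the heap that is free after a GC. */
    static int freePercent(long heapMb, long freeMb) {
        return (int) (100 * freeMb / heapMb);
    }

    /** Shrink when free% > MaxHeapFreeRatio, grow when free% < MinHeapFreeRatio. */
    static String resize(int freePercent, int minFreeRatio, int maxFreeRatio) {
        if (freePercent > maxFreeRatio) return "SHRINK";
        if (freePercent < minFreeRatio) return "GROW";
        return "KEEP";
    }

    public static void main(String[] args) {
        // e.g. a 1000mb heap with 750mb free after GC is 75% free,
        // which exceeds MaxHeapFreeRatio=70, so the VM may shrink the heap
        System.out.println(resize(freePercent(1000, 750), 40, 70));
    }
}
```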
In my own interpretation, with those options the VM will shrink the heap if more than 70% of it is free after a GC. But quite frankly I don't know whether the shrunk heap is actually returned to the OS, or only shrinks inside the VM.
For a complete description of how HotSpot memory management works, see:
Description of HotSpot GCs: Memory Management in the Java HotSpot Virtual Machine White Paper: https://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf
And please, please. Give it a try and measure and let us know back here if that effectively reduces the memory consumption.
It's a bit of a hack to say the very least, but assuming you are on Win32 and if you are prepared to give up portability - write a small DLL that calls SetProcessWorkingSetSize and call into it using JNI. This allows you to suggest to the OS what the WS size should be. You can even specify -1, in which case the OS will attempt to page out as much as possible.
Assuming this is something like a server that's waiting for a request, could you do this?
Make two classes, Server and Worker.
Server only listens and launches Worker when required.
If Worker has never been initialised, initialise it.
After Worker has finished doing whatever it needed to do, serialize it, write it to disk, and set the Worker object to null.
Wait for a request.
When a request is received, read the serialized Worker object from disk and load it into memory.
Perform Worker tasks, when done, serialize, write out and set Worker object to null.
Rinse and repeat.
This means that the memory-intensive Worker object gets unloaded from memory (when the GC next runs; you can encourage the GC to run by calling System.gc() after setting the Worker object to null), but since you saved its state, you have the ability to reload it from disk and let it do its work without going through initialization again. If it needs to run every "x" hours, you can put a java.util.Timer in the Server class instead of listening on a socket.
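A minimal sketch of the serialize/reload cycle described above (the class names and the tasksDone field are illustrative stand-ins for expensive initialized state):

```java
import java.io.*;

/** Stand-in for the memory-intensive, expensively initialized worker. */
class Worker implements Serializable {
    private static final long serialVersionUID = 1L;
    int tasksDone;
    void doTask() { tasksDone++; }
}

public class Server {
    static void save(Worker w, File f) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(w);
        }
    }

    static Worker load(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (Worker) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("worker", ".ser");
        Worker w = new Worker();   // expensive initialization happens once
        w.doTask();
        save(w, f);
        w = null;                  // now eligible for GC while the server idles
        Worker restored = load(f); // reload when the next request arrives
        System.out.println(restored.tasksDone); // prints 1
        f.delete();
    }
}
```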
EDIT: There is also a JVM option -Xmx which sets the maximum size of the JVM's heap. This is probably not helpful in this case, but just thought I'd throw it in there.
Isn't this what page files are for? If your JVM is idle for any length of time and doesn't access its memory pages, they'll very likely get paged out and thus won't be using much actual RAM.
One thing you could do though... Most daemon programs have a startup phase (where they parse files, create data structures, etc.) and a running phase where they use the objects created at startup. If the JVM is allowed to, it will start the second phase without doing a garbage collection, potentially causing the size of the process to grow and then stay that big for the lifetime of the process (since GC never/infrequently reduces the actual size of the process).
If you make sure that all memory allocated at each distinct phase of the programs life is GCable before the next phase starts then you can use the -Xmx setting to force down the maximum size of the process and cause your program to constantly GC between phases. I've done that before with some success.