Let's say I have a Java application which does roughly the following:
1. Initialize (takes a long time because this is complicated)
2. Do some stuff quickly
3. Wait idly for a long time (your favorite mechanism here)
4. Go to step 2.
Is there a way to encourage or force the JVM to flush its memory out to disk during long periods of idleness? (e.g. at the end of step 2, make some function call that effectively says "HEY JVM! I'm going to be going to sleep for a while.")
I don't mind using a big chunk of virtual memory, but physical memory is at a premium on the machine I'm using because there are many background processes.
The operating system should handle this, I'd think.
Otherwise, you could manually store your application to disk or database post-initialization, and do a quicker initialization from that data, maybe?
Instead of having your program sit idle and use up resources, why not schedule it with cron? Or better yet, since you're using Java, schedule it with Quartz? Do your best to cache elements of your lengthy initialization procedure so you don't have to pay a big penalty each time the scheduled task runs.
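If you go the Quartz route, here's a rough sketch of what the scheduling side could look like (assuming Quartz 2.x on the classpath; the job identity and the hourly schedule are placeholders, and the cached initialization result would live wherever suits your design):

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class ScheduledWork implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // "Do some stuff quickly" goes here; reuse the expensive initialization
        // result from a cache (static field, scheduler context, file, ...) instead
        // of rebuilding it on every run.
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(ScheduledWork.class)
                .withIdentity("periodicWork")
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .startNow()
                .withSchedule(SimpleScheduleBuilder.repeatHourlyForever())
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}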
The very first thing you must make sure of is that your objects are garbage collectable. But that's just the first step.
Secondly, the memory used by the JVM may not be returned to the OS at all.
For instance, let's say you have 100 MB of Java objects; your VM size will be approximately 100 MB. After garbage collection you may reduce the heap usage to 10 MB, but the VM will stay at around 100 MB. This strategy is used so the VM has memory readily available for new objects.
To have the application return "physical" memory to the system, you have to check whether your VM supports such a thing.
There are additional VM options that may allow your app to return more memory to the OS:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
My own interpretation is that with those options the heap will shrink after a GC once more than 70% of it is free. But quite frankly I don't know whether the shrunk heap is actually returned to the OS or only shrinks inside the VM.
For a complete description of how HotSpot memory management works, see:
Description of HotSpot GCs: Memory Management in the Java HotSpot Virtual Machine White Paper: https://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf
And please, please: give it a try, measure it, and let us know back here whether it effectively reduces the memory consumption.
It's a bit of a hack, to say the very least, but assuming you are on Win32 and are prepared to give up portability: write a small DLL that calls SetProcessWorkingSetSize and call into it using JNI. This allows you to suggest to the OS what the working set size should be. You can even specify -1, in which case the OS will attempt to page out as much as possible.
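If writing a DLL feels like too much ceremony, a rough alternative is to map the call with JNA instead of JNI. This sketch assumes the jna and jna-platform jars are available; the interface mapping is hand-rolled for illustration, not part of JNA's bundled Kernel32 bindings:

import com.sun.jna.Native;
import com.sun.jna.platform.win32.BaseTSD.SIZE_T;
import com.sun.jna.platform.win32.WinNT.HANDLE;
import com.sun.jna.win32.StdCallLibrary;

public class WorkingSetTrimmer {

    // Hand-rolled mapping of the two Win32 calls we need.
    interface Kernel32Ext extends StdCallLibrary {
        Kernel32Ext INSTANCE = Native.load("kernel32", Kernel32Ext.class);

        HANDLE GetCurrentProcess();

        boolean SetProcessWorkingSetSize(HANDLE process, SIZE_T min, SIZE_T max);
    }

    /** Hint to Windows that it may page out as much of this process as possible. */
    public static void trimWorkingSet() {
        // (SIZE_T) -1 for both bounds asks the OS to trim the working set aggressively.
        SIZE_T minusOne = new SIZE_T(-1);
        Kernel32Ext.INSTANCE.SetProcessWorkingSetSize(
                Kernel32Ext.INSTANCE.GetCurrentProcess(), minusOne, minusOne);
    }
}

You would call WorkingSetTrimmer.trimWorkingSet() at the end of step 2, right before going idle.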
Assuming this is something like a server that's waiting for a request, could you do this?
Make two classes, Server and Worker.
Server only listens and launches Worker when required.
If Worker has never been initialised, initialise it.
After Worker has finished doing whatever it needed to do, serialize it, write it to disk, and set the Worker object to null.
Wait for a request.
When a request is received, read the serialized Worker object from disk and load it into memory.
Perform Worker tasks, when done, serialize, write out and set Worker object to null.
Rinse and repeat.
This means that the memory-intensive Worker object gets unloaded from memory (when the GC next runs; you can encourage the GC to run by calling System.gc() after setting the Worker object to null), but since you saved its state, you have the ability to reload it from disk and let it do its work without going through initialization again. If it needs to run every "x" hours, you can put a java.util.Timer in the Server class instead of listening on a socket.
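A minimal sketch of that save/restore cycle, just to make the idea concrete (the Worker contents and the snapshot file name are illustrative placeholders, not part of the original design):

import java.io.*;

// Hypothetical Worker holding the expensive-to-build state.
class Worker implements Serializable {
    private static final long serialVersionUID = 1L;
    // ... expensive-to-build state goes here ...
    void doWork() { /* ... */ }
}

class Server {
    private static final File SNAPSHOT = new File("worker.ser");

    static void saveWorker(Worker worker) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(SNAPSHOT))) {
            out.writeObject(worker);
        }
    }

    static Worker loadWorker() throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(SNAPSHOT))) {
            return (Worker) in.readObject();
        }
    }
}

After saveWorker(worker), drop the reference (worker = null) so the GC can reclaim the memory; loadWorker() brings it back when the next request arrives.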
EDIT: There is also a JVM option -Xmx which sets the maximum size of the JVM's heap. This is probably not helpful in this case, but just thought I'd throw it in there.
Isn't this what page files are for? If your JVM is idle for any length of time and doesn't access its memory pages, they'll very likely get paged out and thus won't be using much actual RAM.
One thing you could do, though... Most daemon programs have a startup phase (where they parse files, create data structures, etc.) and a running phase where they use the objects created at startup. If the JVM is allowed to, it will start the second phase without doing a garbage collection, potentially causing the size of the process to grow and then stay that big for the lifetime of the process (since GC never or only infrequently reduces the actual size of the process).
If you make sure that all memory allocated in each distinct phase of the program's life is GCable before the next phase starts, then you can use the -Xmx setting to force down the maximum size of the process and cause your program to constantly GC between phases. I've done that before with some success.
Related
I was able to call ObjectHeap.iterateObjectsOfKlass (with the help of SA) to obtain all objects belonging to a certain class. The result is exactly what I expected, but the performance is not.
It took me more than 800 seconds to get my result, during which the target VM was suspended. The target VM heap is about 2 GB. I know iterateObjectsOfKlass will call iterateExact.
My question is: do these methods iterate/traverse the entire heap just to obtain the objects of one class? I am disappointed, since my expectation was that for a single class the result should return within 10 seconds.
HotSpot Serviceability Agent is a really powerful technology, but indeed very slow. I have explained how it works in this answer.
The JVM has no means to quickly find all instances of a specific class. So, yes, it has to scan the entire heap. Moreover, in order to read the memory of a foreign process, SA uses the ptrace system call for every single word of data. That's why it is so slow.
You have several options to scan the heap faster:
Create a coredump of the foreign process and then run the SA tool against the coredump. This is much faster than reading the memory of a suspended process. See the related question.
Inject a JVMTI agent into the running process using the Dynamic Attach mechanism. The agent can scan the heap of the local JVM using the IterateOverInstancesOfClass function. This will be dramatically faster compared to SA, because it will just be reading from within the same process without any syscalls or the like. I believe it will take just a few seconds for a 2 GB heap.
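For reference, a rough sketch of the Java side of that dynamic-attach step (the agent library path, its name and the options string are hypothetical; the agent itself would be native code calling IterateOverInstancesOfClass):

import com.sun.tools.attach.VirtualMachine;

public class AttachAgent {
    public static void main(String[] args) throws Exception {
        String pid = args[0];                          // target JVM process id
        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            // heapagent.so and the "class=..." option are placeholders for your own agent.
            vm.loadAgentPath("/path/to/heapagent.so", "class=com.example.Foo");
        } finally {
            vm.detach();
        }
    }
}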
No, really, that's what I'm trying to do. The server is holding onto 1600 users - it's a back-end long-running process, not a web server - but sometimes the users generate more activity than usual, so it needs to cut its load down, specifically when it runs out of "resources," which pretty much means heap memory. This is a big design question - how do I design this?
This would likely involve preventing OOM rather than recovering from it. Ideally, something like
if(nearlyOutOfMemory()) throw new MyRecoverableOOMException();
might happen.
But I don't really know what that nearlyOutOfMemory() function might look like.
Split the server into shards, each holding fewer users but residing on different physical machines.
If you have lots of caches, try to use soft references, which get cleared out when the VM runs out of heap.
In any case, profile, profile, profile first to see where CPU time is consumed and memory is allocated and held onto.
I have actually asked a similar question about handling OOM, and it turns out that there are not too many options to recover from it. Basically you can:
1) Invoke an external shell script (-XX:OnOutOfMemoryError="cmd args;cmd args") which would trigger some action. The problem is that if the OOM happened in a thread that doesn't have a decent recovery strategy, you're doomed.
2) Define a threshold for the old generation which technically isn't OOM but a few steps ahead of it, say 80%, and act if the threshold has been reached. More details here.
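A hedged sketch of the second approach using the standard MemoryPoolMXBean API (the 80% figure is just the example from above, and the println stands in for whatever action you would actually take):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class OldGenThreshold {
    public static void main(String[] args) {
        // Arm a usage threshold at 80% of each heap pool's maximum.
        // Pool names vary by collector, so we match on type and threshold support.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.8));
                    System.out.println("Armed 80% threshold on pool: " + pool.getName());
                }
            }
        }
        // A monitoring thread (or a JMX notification listener registered on the
        // MemoryMXBean) can then check pool.isUsageThresholdExceeded() and react.
    }
}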
You could use Runtime.getRuntime() and the following methods:
freeMemory()
totalMemory()
maxMemory()
But I agree with the other posters: using SoftReference, WeakReference or a WeakHashMap will probably save you the trouble of manually recovering from that condition.
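For completeness, a minimal sketch of what a nearlyOutOfMemory() check built on those Runtime methods could look like (the 90% threshold is an arbitrary assumption; drop it into whichever class does the checking):

// Rough sketch: treat the heap as "nearly full" when used memory exceeds
// 90% of the configured maximum.
static boolean nearlyOutOfMemory() {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();
    return used > rt.maxMemory() * 0.9;
}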
A throttling, resource-regulating servlet filter may be of use too; I came across the DoSFilter from Jetty/Eclipse, for example.
I'm writing a Java/Swing application with ~30 classes. My problem is that when I run my program it uses more than 150 MB of memory. Is that normal, given that the application has 4 threads, parses some XML files, loads some icon files, and draws some JFreeChart charts?
If not, what can I do to minimize the amount of memory used by the application? Does setting some variables to null help? Does loading the XML files once and using them for the whole application life cycle help, or do I have to load them every time I need them? Are there any other tips that could help me?
PS: I'm developing on a machine with 8 GB of memory, in case that affects the memory used by my program.
EDIT: It appears that the program doesn't actually occupy all of the 150 MB; I got that value from the top command on Linux. By running this code in my application, as vilmantas advised:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
I found that it occupies much less than that (~40 MB), so I decided to run it with the "-Xmx40M" argument, and that reduced the memory usage reported by top by more than 40%.
The remaining question is: what is occupying the rest of the memory, given that the JVM (as far as I know) has its own process? And how can I make this tuning automatic? Because choosing an inappropriate value gets you an exception, as I did when running with the "-Xmx30M" argument:
Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space
It is. This is Java; usually your VM/GC will do the job for you. Worry about memory usage when and if it becomes a problem.
If you want, there are several tools that can help you analyze what is going on. How to monitor Java memory usage?
Setting variables to null can help prevent memory leaks when the referring variable's life cycle is longer than that of the referenced instance. In other words, variables that live through the whole application life cycle had better not hold references to temporary objects that are only used for a short time.
Loading the XMLs only once helps if you are fine with reading their information only once. Meaning, if an XML file is changed outside your application and you need to pick up the update, you'll have to reload it (and if the outdated XML data is no longer needed, get rid of it).
You could use a Java memory heap analyzer like http://www.eclipse.org/mat/ to identify the parts of your application that use up most of the memory. You can then either optimize your data structures or decide to release parts of the data by setting all references to them to null.
Unintended references to data that is no longer needed are also referred to as "memory leaks". Setting those references to null allows the garbage collector to remove the data from the Java heap.
Along that line, you might find WeakReferences helpful.
Where do you observe those 150 MB? Is that how much your JVM process occupies (e.g. visible in the top command on Linux/Unix), or is it really the memory used (and necessary) by your application?
Try writing the following 4 values when your application runs:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
If the value of "used" is much lower than 150 MB, you may add a Java start parameter, e.g. "-Xmx30M", to limit the heap size of your application to 30 MB. Note that the JVM process will still occupy a bit more than 30 MB in that case.
The memory usage by JVM is somewhat tricky.
The following is the scenario I need to solve, and I am stuck between two solutions.
I need to maintain a cache of data fetched from the database to be shown in a Swing GUI.
Whenever my JVM memory usage exceeds 70% of the allocated memory, I need to warn the user about the excessive usage. Once it exceeds 80%, I have to halt all database querying, clean up the existing cache fetched as part of the user's operations, and notify the user. During the cleanup process I will manually delete some data based on certain rules and ask the JVM for a GC. Whenever a GC occurs and memory usage drops back to 60% of the allocated memory, I need to resume all database handling and give control back to the user.
For checking JVM memory statistics I found the following two options and cannot decide which is the better way and why.
Runtime.freeMemory() - a thread created to run every 10 seconds checks the free memory, and if memory exceeds the limits mentioned, the necessary popups inform the user and methods are called to halt the operations and free up memory.
MemoryPoolMXBean.getUsage() - Java 5 introduced JMX to get a snapshot of the memory at runtime. With JMX I cannot use the threshold notification alone, since it will only notify when memory reaches/exceeds the given threshold. The only alternative is to poll MemoryMXBean and check the memory statistics periodically.
In the case of polling, it seems to me that both implementations end up being much the same.
Please suggest the advantages of each method and whether there are any other alternatives or corrections to these approaches.
Just a side note: Runtime.freeMemory() doesn't report the amount of memory that's left for allocation overall; it's just the amount of memory that's free within the currently allocated heap (which starts out smaller than the maximum memory the VM is configured to use and grows over time).
When starting a VM, the max memory (Runtime.maxMemory()) just defines the upper limit of memory that the VM may allocate (configurable using the -Xmx VM option).
The total memory (Runtime.totalMemory()) is the initial size of the memory allocated for the VM process (configurable using the -Xms VM option), and will dynamically grow every time you allocate more than the currently free portion of it (Runtime.freeMemory()), until it reaches the max memory.
The metric you're interested in is the memory available for further allocation:
long usableFreeMemory = Runtime.getRuntime().maxMemory()
        - Runtime.getRuntime().totalMemory()
        + Runtime.getRuntime().freeMemory();
or:
double usedPercent = (double) (Runtime.getRuntime().totalMemory()
        - Runtime.getRuntime().freeMemory()) / Runtime.getRuntime().maxMemory();
The usual way to handle this sort of thing is to use WeakReferences and SoftReferences. You need to use both - the weak reference means you are not holding multiple copies of things, and the soft references mean that the GC will hang onto things until it starts running out of memory.
If you need to do additional cleanup, then you can register the references with a ReferenceQueue and poll it to trigger the cleanup. It's all good fun, but you do need to understand what these classes do.
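A minimal sketch of that pattern: a soft-reference cache whose entries remember their keys so cleared values can be pruned by polling the queue (all names here are illustrative):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoftCache<K, V> {

    // SoftReference subclass that remembers its key so the map entry can be
    // removed after the GC clears the value.
    private static final class Entry<K, V> extends SoftReference<V> {
        final K key;
        Entry(K key, V value, ReferenceQueue<V> queue) {
            super(value, queue);
            this.key = key;
        }
    }

    private final Map<K, Entry<K, V>> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    public void put(K key, V value) {
        drainQueue();
        map.put(key, new Entry<>(key, value, queue));
    }

    public V get(K key) {
        drainQueue();
        Entry<K, V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Remove map entries whose values the GC has already cleared; any extra
    // cleanup (closing resources, logging, ...) would hook in here.
    @SuppressWarnings("unchecked")
    private void drainQueue() {
        Entry<K, V> cleared;
        while ((cleared = (Entry<K, V>) queue.poll()) != null) {
            map.remove(cleared.key, cleared);
        }
    }
}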
It is entirely normal for a JVM to go up to 100% memory usage and then back to, say, 10% after a GC, and to do this every few seconds.
You shouldn't need to try managing the memory in this way.
You cannot say how much memory is being retained until a full GC has been run.
I suggest you work out what you are really trying to achieve and look at the problem another way.
The requirements you mention are in clear contradiction with how garbage collection works in a JVM.
Because of this behaviour of the JVM it will be very hard to warn your users in a correct way.
Altogether stopping all database manipulation, cleaning stuff up and starting again really is not the way to go.
Let the JVM do what it is supposed to do: handle all memory management for you.
Modern generations of the JVM are very good at it, and with some fine-tuning of the GC parameters you will get much cleaner memory handling than by forcing things yourself.
Articles like http://www.kodewerk.com/advice_on_jvm_heap_tuning_dont_touch_that_dial.htm mention the pros and cons and offer a nice explanation of what the VM does for you.
I've only used the first method for similar task and it was OK.
One thing you should note, for both methods, is to implement some kind of debouncing - i.e. once you recognize you've hit 70% of memory, wait for a minute (or any other time you find appropriate), since the GC may run in that time and free up lots of memory.
If you implement a Runtime.freeMemory() graph in your system you'll see how the memory is constantly going up and down, up and down.
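A rough sketch of such a poller using MemoryMXBean (the 10-second interval and the 70%/80% thresholds come from the question; the printlns stand in for the real warning and cleanup logic, and a production version would add the debouncing described above):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapWatcher {
    public static void main(String[] args) {
        final MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                long max = heap.getMax();   // can be -1 if undefined; assumes -Xmx is set
                double usedFraction = (double) heap.getUsed() / max;
                if (usedFraction > 0.8) {
                    System.out.println("Over 80%: halt queries and clean up the cache");
                } else if (usedFraction > 0.7) {
                    System.out.println("Over 70%: warn the user");
                }
            }
        }, 0, 10, TimeUnit.SECONDS);
    }
}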
VisualVM is a bit nicer than JConsole because it gives you a nice visual Garbage Collector view.
Look into JConsole. It graphs the information you need, so it is a matter of adapting it to your needs (given that you run on Sun Java 6).
This also allows you to detach the monitoring process from what you want to look at.
Very late after the original post, I know, but I thought I'd post an example of how I've done it. Hopefully it'll be of some use to someone (I stress, it's a proof-of-principle example, nothing else... not particularly elegant either :) )
Just stick these two functions in a class, and it should work.
EDIT: Oh, and don't forget the imports:
import java.util.ArrayList;
import java.util.List;

//memory still available for allocation, in MB (divide before casting to avoid overflow on large heaps)
public static int MEM(){
    return (int) ((Runtime.getRuntime().maxMemory()
            - Runtime.getRuntime().totalMemory()
            + Runtime.getRuntime().freeMemory()) / 1024 / 1024);
}
public static void main(String[] args) throws InterruptedException
{
    List<Double> list = new ArrayList<>();

    //get available memory before filling list
    int initMem = MEM();
    int lowMemWarning = (int) (initMem * 0.2);
    int highMem = (int) (initMem * 0.8);

    int iteration = 0;
    while (true)
    {
        //use up some memory
        list.add(Math.random());

        //report
        if (++iteration % 10000 == 0)
        {
            System.out.printf("Available Memory: %dMb \tListSize: %d\n", MEM(), list.size());

            //if low on memory, clear list and await garbage collection before continuing
            if (MEM() < lowMemWarning)
            {
                System.out.printf("Warning! Low memory (%dMb remaining). Clearing list and cleaning up.\n", MEM());

                //clear list
                list = new ArrayList<>(); //obviously, here is a good place to put your warning logic

                //ensure garbage collection occurs before continuing to re-add to list,
                //to avoid immediately entering this block again
                while (MEM() < highMem)
                {
                    System.out.printf("Awaiting gc...(%dMb remaining)\n", MEM());
                    //give it a nudge
                    Runtime.getRuntime().gc();
                    Thread.sleep(250);
                }
                System.out.printf("gc successful! Continuing to fill list (%dMb remaining). List size: %d\n", MEM(), list.size());
                Thread.sleep(3000); //just to view output
            }
        }
    }
}
EDIT: This approach still relies on setting a sensible heap size for the JVM with -Xmx, however.
EDIT2: It seems that the gc request line really does help things along, at least on my JVM. YMMV.
Does Java 6 consume more memory than you expect for largish applications?
I have an application I have been developing for years which has, until now, taken about 30-40 MB in my particular test configuration; now, with Java 6u10 and 6u11, it is taking several hundred MB while active. It bounces around a lot, anywhere between 50 MB and 200 MB, and when it idles, it does a GC and drops the memory right down. In addition it generates millions of page faults. All of this is observed via Windows Task Manager.
So, I ran it up under my profiler (jProfiler) and using jVisualVM, and both of them indicate the usual moderate heap and perm-gen usages of around 30M combined, even when fully active doing my load-test cycle.
So I am mystified! And it's not just requesting more memory from the Windows virtual memory pool - this is showing up as 200 MB of "Mem Usage".
CLARIFICATION: I want to be perfectly clear on this - observed over an 18-hour period with Java VisualVM, the class heap and perm gen heap have been perfectly stable. The allocated volatile heap (eden and tenured) sits unmoved at 16 MB (which it reaches in the first few minutes), and the use of this memory fluctuates in a perfect pattern of growing evenly from 8 MB to 16 MB, at which point GC kicks in and drops it back to 8 MB. Over this 18-hour period, the system was under constant maximum load since I was running a stress test. This behavior is perfectly and consistently reproducible, seen over numerous runs. The only anomaly is that while this is going on, the memory taken from Windows, observed via Task Manager, fluctuates all over the place from 64 MB up to 900+ MB.
UPDATE 2008-12-18: I have run the program with -Xms16M -Xmx16M without any apparent adverse effect - performance is fine, and total run time is about the same. But memory use in a short run still peaked at about 180 MB.
Update 2009-01-21: It seems the answer may be in the number of threads - see my answer below.
EDIT: And I mean millions of page faults literally - in the region of 30M+.
EDIT: I have a 4G machine, so the 200M is not significant in that regard.
In response to a discussion in the comments to Ran's answer, here's a test case that proves that the JVM will release memory back to the OS under certain circumstances:
public class FreeTest
{
    public static void main(String[] args) throws Exception
    {
        byte[][] blob = new byte[60][1024 * 1024];
        for (int i = 0; i < blob.length; i++)
        {
            Thread.sleep(500);
            System.out.println("freeing block " + i);
            blob[i] = null;
            System.gc();
        }
    }
}
I see the JVM process' size decrease when the count reaches around 40, on both Java 1.4 and Java 6 JVMs (from Sun).
You can even tune the exact behaviour with the -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio options -- some of the options on that page may also help with answering the original question.
I don't know about the page faults, but about the huge memory allocated for Java:
1) Sun's JVM deallocates memory only after a specific ratio between internal memory needs and allocated memory drops beneath a (tunable) value. The JVM starts with the amount specified in -Xms and can be extended up to the amount specified in -Xmx. I'm not sure what the defaults are. Whenever the JVM needs more memory (new objects / primitives / arrays) it allocates an entire chunk from the OS. However, when the need subsides (a momentary need, see 2 as well) it doesn't deallocate the memory back to the OS immediately, but keeps it to itself until that ratio has been reached. I was once told that JRockit behaves better, but I can't verify it.
2) Sun's JVM runs a full GC based on several triggers. One of them is the amount of available memory - when it falls too low, the JVM tries to perform a full GC to free some more. So, when more memory is allocated from the OS (momentary need), the chance of a full GC is lowered. This means that while you may see 30 MB of "live" objects, there might be a lot more "dead" objects (not reachable), just waiting for a GC to happen. I know YourKit has a great view called "dead objects" where you can see these left-overs.
3) In "-server" mode, Sun's JVM runs GC in parallel mode (as opposed to the older serial "stop the world" GC). This means that while there may be garbage to collect, it might not be collected immediately because other threads are taking all available CPU time. It will be collected before running out of memory (well, kinda, see http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html), but if more memory can be allocated from the OS, that might happen before the GC runs.
Combined, a large initial memory configuration and short bursts creating a lot of short-lived objects might create a scenario as described.
edit: changed "never deallcoates" to "only after ratio reached".
Excessive thread creation explains your problem perfectly:
Each Thread gets its own stack, which is separate from heap memory and therefore not registered by profilers
The default thread stack size is quite large, IIRC 256KB (at least it was for Java 1.3)
Thread stack memory is probably not reused, so if you create and destroy lots of threads, you'll get lots of page faults
If you ever really need to have hundreds of threads around, the thread stack size can be configured via the -Xss command line parameter.
Garbage collection is a rather arcane science. As the state of the art develops, un-tuned behaviour will change in response.
Java 6 has different default GC behaviour and different "ergonomics" to earlier JVM versions. If you tell it that it can use more memory (either explicitly on the command line, or implicitly by failing to specify anything more explicit), it will use more memory if it believes that this is likely to improve performance.
In this case, Java 6 appears to believe that reserving the extra space which the heap could grow into will give it better performance - presumably because it believes that this will cause more objects to die in Eden space, and limit the number of objects promoted to the tenured generation space. And from the specifications of your hardware, the JVM doesn't think that this extra reserved heap space will cause any problems. Note that many (though not all) of the assumptions the JVM makes in reaching its conclusion are based on "typical" applications, rather than your specific application. It also makes assumptions based on your hardware and OS profile.
If the JVM has made the wrong assumptions, you can influence its behaviour through the command line, though it is easy to get things wrong...
Information about performance changes in java 6 can be found here.
There is a discussion about memory management and performance implications in the Memory Management White Paper.
Over the last few weeks I had cause to investigate and correct a problem with a thread pooling object (a pre-Java 6 multi-threaded execution pool), where it was launching far more threads than required. In the jobs in question there could be up to 200 unnecessary threads. And the threads were continually dying and being replaced by new ones.
Having corrected that problem, I thought to run a test again, and now it seems the memory consumption is stable (though 20 or so MB higher than with older JVMs).
So my conclusion is that the spikes in memory were related to the number of threads running (several hundred). Unfortunately I don't have time to experiment.
If someone would like to experiment and answer this with their conclusions, I will accept that answer; otherwise I will accept this one (after the 2 day waiting period).
Also, the page fault rate is way down (by a factor of 10).
Also, the fixes to the thread pool corrected some contention issues.
Lots of memory allocated outside Java's heap after upgrading to Java 6u10? It can only be one thing:
Java6 u10 Release Notes: "New Direct3D Accelerated Rendering Pipeline (...) Enabled by Default"
Sun enabled Direct3D acceleration by default in Java 6u10. This option creates lots of (temporary?) native memory buffers, which are allocated outside the Java heap. Add the following VM argument to disable it again:
-Dsun.java2d.d3d=false
Note that this will NOT disable 2D hardware acceleration, just some features that can make use of 3D hardware acceleration. You will see that your Java heap usage will increase by up to 7MB, but that's a good trade-off because you'll save ~100MB(+) of this temporary volatile memory.
I did a fair amount of testing with two Swing desktop applications, on two platforms:
a high-end Intel i7 with an nVidia GTX 260 graphics card,
a 3-year-old laptop with Intel graphics.
On both hardware platforms the option made practically zero subjective difference. (Tests included: scrolling tables, zooming graphical flowsheets, charts, etc.) On the few tests where something was subtly different, disabling d3d counter-intuitively increased performance. I suspect that memory management/bandwidth problems counteracted whatever benefits the d3d-accelerated functions were supposed to achieve. (Your mileage may vary!)
If you need to do some performance tuning, here's an excellent reference (e.g. "Troubleshooting Java 2D")
Are you using the ConcMarkSweep collector? It can increase the amount of memory required for your application due to increased memory fragmentation, and "floating garbage" - objects that become unreachable only after the collector has examined them, and therefore are not collected until the next pass.