How to get free memory info from Java

I use the following lines to get the memory usage with Java:
Runtime Run_Time = Runtime.getRuntime();
Free_Memory = Run_Time.freeMemory() / 1048576;   // 1024 x 1024 = 1 MB
Total_Memory = Run_Time.totalMemory() / 1048576; // reports 992 total on a 4 GB PC
The Free_Memory value I got was 900, but it is way off: when Free_Memory drops to around 600, my program runs out of memory and reports a heap-space error.
So I looked at Windows Task Manager : Performance : Physical Memory : Free, and it is down to 1, 2 or 0, which is a more accurate reflection of my memory situation. According to it, my total memory is 4089, which is correct, while Java's Total_Memory = 992 is not.
So my question is: from my Java program, how can I get the memory usage numbers shown in Windows Task Manager : Performance : Physical Memory? I need to depend on those numbers.

The JVM doesn't allow Java to consume all available system memory. Instead the JVM grabs a fixed chunk and allocates all of your objects within that chunk. If this area fills up, you're out of memory! There are command-line options to alter the max/initial memory usage of the JVM.
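For example, to start the JVM with a 256 MB initial heap that may grow to at most 1 GB (the values here are purely illustrative):
java -Xms256m -Xmx1024m MyApp
Runtime.getRuntime().maxMemory() will then report roughly the -Xmx value.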
The more important issue is that you should not be relying on tracking free/max memory. What are you doing that relies on tracking memory?
UPDATE:
Try a 64-bit JVM if you need more than about 1.5 GB of heap.
If you're trying to track memory running out, then consider figuring out WHY your program does this and whether it can be prevented through different algorithms or better management of objects. When the memory reaches zero, what do you expect to do? Pop up a dialog, tell the user they're screwed, and exit the program? I can understand a graceful shutdown, but warning the user to run with a larger -Xmx is not going to cut it.

If you want detailed Windows stats, you can use WMI and a .vbs script, executed via cscript.exe.
This link details a script that pulls more detailed memory stats than you could possibly want.
Execute this via the usual Process/Runtime combination, and simply read back what figures you require. These are system level stats, and not for the VM (although WMI can pull back per-process stats as well).
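If you are on a Sun/Oracle JVM and can live with a non-standard API, there is also a lighter-weight route: the com.sun.management extension of OperatingSystemMXBean exposes physical-memory figures directly, with no script to spawn. A minimal sketch - note the cast is JVM-specific and may fail on other vendors' VMs:
import java.lang.management.ManagementFactory;

public class PhysicalMemory {
    public static void main(String[] args) {
        // Non-standard Sun extension; not part of the java.lang.management standard
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Physical total MB: " + os.getTotalPhysicalMemorySize() / 1048576);
        System.out.println("Physical free MB:  " + os.getFreePhysicalMemorySize() / 1048576);
    }
}
These figures should track the Task Manager numbers far more closely than Runtime.freeMemory(), which only describes the JVM's own heap.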

MAT space vs. TaskManager space

After searching the web for a while, I decided to ask you for help with my problem.
My program should analyze logfiles, which are really big - about 100 MB up to 2 GB. I want to read the files using NIO classes like FileChannel.
I don't want to keep the files in memory; I want to process the lines immediately. The code works.
Now my problem: I analyzed the memory usage with the Eclipse MAT plugin and it says about 18 MB of data is held (that fits). But Task Manager in Windows says that about 180 MB are used by the JVM.
Can you tell me WHY this is?
I don't want to keep the data read with the FileChannel; I just want to process it. I close the channel afterwards - I thought all of that data would then be released?
I hope you can help me with the difference between the used space shown in MAT and the used space shown in Task Manager.
MAT will only show objects that are actively referenced by your program. The JVM uses more memory than that:
Its own code
Non-object data (classes, compiled bytecode, etc.)
Heap space that is not currently in use, but has already been allocated.
The last case is probably the biggest one. Depending on how much physical memory there is on a computer, the JVM will set a default maximum size for its heap. To improve performance it will keep using up to that amount of memory with minimal garbage collection activity. That means that objects that are no longer referenced will remain in memory rather than be garbage collected immediately, thus increasing the total amount of memory used.
As a result, the JVM will generally not free any memory it has allocated as part of its heap back to the system. This will show-up as an inordinate amount of used memory in the OS monitoring utilities.
Applications with high object allocation/de-allocation rates are hit hardest - I have an application that uses 1.8 GB of memory while actually requiring less than 100 MB. Reducing the maximum heap size to 120 MB, though, increases the execution time by almost a full order of magnitude.
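You can see that gap for yourself with nothing but the standard Runtime API - the difference between what the JVM has reserved and what is live is heap the process holds but MAT will never report. A rough sketch:
Runtime rt = Runtime.getRuntime();
long committed = rt.totalMemory();       // heap reserved from the OS (roughly what Task Manager sees)
long live = committed - rt.freeMemory(); // upper bound on what MAT can report
System.out.println("committed=" + committed / 1048576 + " MB, live<=" + live / 1048576 + " MB");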

How to minimize the memory used by my application?

I'm writing a Java/Swing application with ~30 classes. My problem is that when I run my program it takes more than 150 MB of memory. Is that normal? The application has 4 threads, parses some XML files, loads some icon files, and draws some JFreeChart charts.
If not, what can I do to minimize the amount of memory used by the application? Does setting some variables to null help? Does it help to load the XML files once and use them for the whole application life cycle, or do I have to load them every time I need them? Are there other tips that would help me?
PS: I'm developing on a computer with 8 GB of memory, in case that can affect the memory used by my program.
EDIT: It appears that the program doesn't occupy all of the 150 MB; I got that value from the top command on Linux. By running this code in my application, as vilmantas advised me:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
I found that it occupies much less than that (~40 MB), so I decided to run it with the "-Xmx40M" argument, and that cut the memory usage shown by top by more than 40%.
The question now is: what is occupying the rest of the memory, since the JVM (as far as I know) has its own process? And how can this be done automatically? Because when you choose an inappropriate value you can get a memory exception, as I did by running with the "-Xmx30M" argument:
Exception in thread "Thread-2" java.lang.OutOfMemoryError: Java heap space
It is. This is Java; usually your VM/GC will do the job for you. Worry about memory usage when and if it becomes a problem.
If you want, there are several tools that can help you analyze what is going on. How to monitor Java memory usage?
Setting variables to null can help prevent memory leaks when the referring variable's life cycle is longer than that of the referred instance. In other words, variables that live through the whole application life cycle are better off not holding references to temporary objects that are only used for a short time.
Loading the XMLs only once can help if you're fine with reading their information only once. That is, if the XML can be changed outside your application and you need the update, you'll have to reload the XML (and if the stale XML data is no longer needed, get rid of it); a sketch of such a reload-on-change cache follows.
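If a reload-on-change policy is enough for you, a tiny cache along these lines would cover both cases (the file name and the parse step are placeholders for your own):
import java.io.File;

public class XmlCache {
    private final File file = new File("config.xml"); // placeholder path
    private long loadedAt = -1;
    private Object parsed;                             // stand-in for your parsed document

    // Re-parse only when the file's timestamp has changed since the last load
    public synchronized Object get() {
        long modified = file.lastModified();
        if (parsed == null || modified != loadedAt) {
            parsed = parse(file);
            loadedAt = modified;
        }
        return parsed;
    }

    private Object parse(File f) {
        return "parsed:" + f.getName(); // placeholder for real XML parsing
    }
}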
You could use a Java memory heap analyzer like http://www.eclipse.org/mat/ to identify the parts of your application that use up most of the memory. You can then either optimize your data structures, or decide to release parts of the data by setting all references to them to null.
Unintended references to data that is no longer needed are also referred to as "memory leaks". Setting those references to null lets the garbage collector remove the data from the Java heap.
Along that line, you might find WeakReferences helpful.
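A minimal sketch of the WeakReference idea - the cached data can be reclaimed by the GC as soon as no strong reference remains, and you rebuild it on demand (the rebuild here is a placeholder; for a cache that should survive longer, a SoftReference is often the better fit):
import java.lang.ref.WeakReference;

public class WeakCacheSketch {
    private WeakReference<int[]> cache = new WeakReference<int[]>(null);

    // Returns the cached array, rebuilding it if the GC has reclaimed it
    public int[] data() {
        int[] d = cache.get();
        if (d == null) {
            d = new int[1000000];                 // placeholder for an expensive rebuild
            cache = new WeakReference<int[]>(d);
        }
        return d;
    }
}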
Where do you observe those 150M? Is that how much your JVM process occupies (e.g. visible in the top command on linux/unix) or is it really the memory used (and necessary) by your application?
Try writing the following 4 values when your application runs:
long free = Runtime.getRuntime().freeMemory();
long total = Runtime.getRuntime().totalMemory();
long max = Runtime.getRuntime().maxMemory();
long used = total - free;
If the value for "used" is much lower than 150M, you may add a Java start parameter, e.g. "-Xmx30M", to limit the heap size of your application to 30 MB. Note that the JVM process will still occupy a bit more than 30 MB in that case.
The memory usage of the JVM is somewhat tricky.

Debugging memory problems in java - understanding freeMemory

I have an application that causes an OutOfMemoryError, so I try to debug it using Runtime.getRuntime().freeMemory(). Here is what I get:
freeMemory=48792216
## Reading real sentences file...map size=4709. freeMemory=57056656
## Reading full sentences file...map size=28360. freeMemory=42028760
freeMemory=42028760
## Reading suffix array files of main corpus ...array size=513762 freeMemory=90063112
## Reading reverse suffix array files... array size=513762. freeMemory=64449240
I try to understand the behaviour of freeMemory. It starts with 48 MB, then - after I read a large file - it jumps UP to 57 MB, then down again to 42 MB, then - after I read a very large file (513762 elements) it jumps UP to 90 MB, then down again to 64 MB.
What happens here? How can I make sense of these numbers?
Java memory is a bit tricky. Your program runs inside the JVM, the JVM runs inside the OS, and the OS uses your computer's resources. When your program needs memory, the JVM first checks whether it already holds memory from the OS that is currently unused; if there isn't enough, the JVM asks the OS and, if possible, obtains more.
From time to time, the JVM will look around for memory that is not used anymore, and will free it. Depending on a (huge) number of factors, the JVM can also give that memory back to the OS, so that other programs can use it.
This means that, at any given moment, you have a certain quantity of memory the JVM has obtained from the OS, and a certain amount the JVM is currently using.
At any given point, the JVM may refuse to acquire more memory because it has been instructed not to, or the OS may deny the JVM access to more memory, either because it too has been instructed to, or simply because there is no more free RAM.
When you run your program on your own computer, you are probably not giving the JVM any limit, so you can use plenty of RAM. When running on Google's app platform, there could be limits imposed on the JVM by Google's operators, so the available memory may be less.
Runtime.freeMemory will tell you how much of the RAM the JVM has obtained from the OS is currently free.
When you allocate a big object, say one MB, the JVM may request more RAM from the OS, say 5 MB, resulting in freeMemory being 4 MB higher than before, which is counterintuitive. Allocating another MB will probably shrink free memory as expected, but later the JVM could decide to release some memory back to the OS, and freeMemory will shrink again for no apparent reason.
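A tiny experiment makes this counterintuitive behaviour visible; the exact numbers will differ on every run:
Runtime rt = Runtime.getRuntime();
System.out.println("before: free=" + rt.freeMemory() + " total=" + rt.totalMemory());
byte[] blob = new byte[1024 * 1024]; // allocate ~1 MB
// If the JVM grew the heap to satisfy the allocation, freeMemory may now be HIGHER
System.out.println("after: free=" + rt.freeMemory() + " total=" + rt.totalMemory() + " blob=" + blob.length);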
Using totalMemory and maxMemory in combination with freeMemory you can have a better insight of your current RAM limits and consumption.
To understand why you are consuming more RAM than you would expect, you should use a memory profiler. A simple but effective one is packaged with VisualVM, a tool usually already installed with the JDK. There you'll be able to see what is using RAM in your program and why that memory cannot be reclaimed by the JVM.
(Note, the memory system of the JVM is by far more complicated than this, but I hope that this simplification can help you understand more than a complete and complicated picture.)
It's not terribly clear or user-friendly. If you look at the Runtime API you see three different memory calls:
freeMemory - Returns the amount of free memory in the Java Virtual Machine. Calling the gc method may result in increasing the value returned by freeMemory.
totalMemory - Returns the total amount of memory in the Java virtual machine. The value returned by this method may vary over time, depending on the host environment.
maxMemory - Returns the maximum amount of memory that the Java virtual machine will attempt to use.
When you start up the JVM, you can set the initial heap size (-Xms) as well as the max heap size (-Xmx). E.g. java -Xms100m -Xmx200m starts with a heap of 100m, will grow the heap as more space is needed up to 200m, and will fail with OutOfMemory if it needs to grow beyond that. So there's a ceiling, which gives you maxMemory().
The memory currently available in the JVM is somewhere between your starting size and the max. Somewhere. That's your totalMemory(). freeMemory() is how much is free out of that total.
To add to the confusion, see what they say about gc - "Calling the gc method may result in increasing the value returned by freeMemory." This implies that uncollected garbage is not included in free memory.
OK, based on your comments I wrote this function, which prints a summary of memory measures:
static String memory() {
    final int unit = 1000000; // bytes per MB
    long usedMemory = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
    long availableMemory = Runtime.getRuntime().maxMemory() - usedMemory;
    return "Memory: free=" + (Runtime.getRuntime().freeMemory() / unit)
            + " total=" + (Runtime.getRuntime().totalMemory() / unit)
            + " max=" + (Runtime.getRuntime().maxMemory() / unit)
            + " used=" + (usedMemory / unit)
            + " available=" + (availableMemory / unit);
}
It seems that the best measures for how much my program is using are usedMemory, and the complementary availableMemory. They increase/decrease monotonically when I use more memory:
Memory: free=61 total=62 max=922 used=0 available=921
Memory: free=46 total=62 max=922 used=15 available=906
Memory: free=46 total=62 max=922 used=15 available=876
Memory: free=44 total=118 max=922 used=73 available=877
Memory: free=97 total=189 max=922 used=92 available=825
Try running your app against something like http://download.oracle.com/javase/1.5.0/docs/guide/management/jconsole.html.
It comes with the JDK (or certainly used to) and is invaluable for monitoring what is happening inside the JVM during the execution of an application.
It'll provide more useful insight into what is going on with your memory than your debug statements.
Also, if you are really keen, you can learn a bit more about tuning garbage collection via something like:
http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
This is pretty in-depth, but it gives good insight into the various memory generations in the JVM and how objects are retained in them. If you see that objects are being retained in the old generation and that it is continually increasing, this could be an indicator of a leak.
For debugging why data is being retained and not collected, you can't go past profilers. Check out JProfiler or YourKit.
Best of luck.

Java: flushing memory out to disk

Let's say I have a Java application which does roughly the following:
Initialize (takes a long time because this is complicated)
Do some stuff quickly
Wait idly for a long time (your favorite mechanism here)
Go to step 2.
Is there a way to encourage or force the JVM to flush its memory out to disk during long periods of idleness? (e.g. at the end of step 2, make some function call that effectively says "HEY JVM! I'm going to be going to sleep for a while.")
I don't mind using a big chunk of virtual memory, but physical memory is at a premium on the machine I'm using because there are many background processes.
The operating system should handle this, I'd think.
Otherwise, you could manually store your application's state to disk or a database post-initialization, and do a quicker initialization from that data, maybe?
Instead of having your program sit idle and use up resources, why not schedule it with cron? Or better yet, since you're using Java, schedule it with Quartz? Do your best to cache elements of your lengthy initialization procedure so you don't have to pay a big penalty each time the scheduled task runs.
The very first thing you must make sure of is that your objects are garbage collectable. But that's just the first step.
Secondly, the memory used by the JVM may not be returned to the OS at all.
For instance, let's say you have 100 MB of Java objects; your VM size will be approximately 100 MB. After garbage collection you may reduce the heap usage to 10 MB, but the VM will stay at around 100 MB. This strategy is used to keep memory available in the VM for new objects.
To have the application returning "physical" memory to the system you have to check if your VM supports such a thing.
There are additional VM options that may allow your app to return more memory to the OS:
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
My interpretation is that with those options the VM will shrink the heap when more than 70% of it is free after a GC. But quite frankly I don't know whether the freed heap will actually be returned to the OS, or only shrink inside the VM.
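For instance, to encourage more aggressive shrinking you could lower both ratios at launch time (the values are illustrative only, and the effect depends on the collector in use):
java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 MyApp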
For a complete description of how HotSpot memory management works, see:
Description of HotSpot GCs: Memory Management in the Java HotSpot Virtual Machine White Paper: https://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf
And please, please: give it a try, measure, and let us know back here whether it effectively reduces the memory consumption.
It's a bit of a hack to say the very least, but assuming you are on Win32 and are prepared to give up portability, write a small DLL that calls SetProcessWorkingSetSize and call into it using JNI. This allows you to suggest to the OS what the working-set size should be. You can even specify -1, in which case the OS will attempt to page out as much as possible.
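A sketch of what the Java side of that binding could look like - the DLL name is hypothetical, and the native implementation forwarding to the Win32 SetProcessWorkingSetSize call must be written by you in C:
public class WorkingSetTrimmer {
    static {
        System.loadLibrary("wstrim"); // hypothetical DLL implementing the native method
    }

    // Passing -1 for both sizes asks Windows to page out as much of the process as possible
    public static native boolean setWorkingSetSize(long min, long max);

    public static void main(String[] args) {
        System.out.println("trim requested: " + setWorkingSetSize(-1L, -1L));
    }
}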
Assuming this is something like a server that's waiting for a request, could you do this?
Make two classes, Server and Worker.
Server only listens and launches Worker when required.
If Worker has never been initialised, initialise it.
After Worker has finished doing whatever it needed to do, serialize it, write it to disk, and set the Worker object to null.
Wait for a request.
When a request is received, read the serialized Worker object from disk and load it into memory.
Perform Worker tasks, when done, serialize, write out and set Worker object to null.
Rinse and repeat.
This means that the memory-intensive Worker object gets unloaded from memory (when the GC next runs - and you can encourage the GC to run by calling System.gc() after setting the Worker object to null), but since you saved its state, you have the ability to reload it from disk and let it do its work without going through initialization again; see the sketch below. If it needs to run every "x" hours, you can put a java.util.Timer in the Server class instead of listening on a socket.
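A bare-bones sketch of that unload/reload cycle - the Worker's contents and the swap-file path are made up for illustration:
import java.io.*;

class Worker implements Serializable {
    private static final long serialVersionUID = 1L;
    byte[] expensiveState = new byte[32 * 1024 * 1024]; // stand-in for costly initialization
    void work() { /* do the real job here */ }
}

public class Server {
    public static void main(String[] args) throws Exception {
        File swap = new File("worker.ser"); // hypothetical swap file
        Worker worker = new Worker();       // expensive initialization, done once
        worker.work();

        // Unload: persist the worker, then drop the only strong reference
        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(swap));
        out.writeObject(worker);
        out.close();
        worker = null;
        System.gc(); // only a hint; the JVM may ignore it

        // ... long idle period ...

        // Reload on the next request instead of re-initializing
        ObjectInputStream in = new ObjectInputStream(new FileInputStream(swap));
        worker = (Worker) in.readObject();
        in.close();
        worker.work();
    }
}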
EDIT: There is also a JVM option -Xmx which sets the maximum size of the JVM's heap. This is probably not helpful in this case, but just thought I'd throw it in there.
Isn't this what page files are for? If your JVM is idle for any length of time and doesn't access its memory pages, they'll very likely get paged out, and thus the process won't be using much actual RAM.
One thing you could do though... Most daemon programs have a startup phase (where they parse files, create data structures, etc.) and a running phase where they use the objects created at startup. If the JVM is allowed to, it will start the second phase without doing a garbage collection, potentially causing the size of the process to grow and then stay that big for the lifetime of the process (since GC rarely, if ever, reduces the actual size of the process).
If you make sure that all memory allocated in each distinct phase of the program's life is GC-able before the next phase starts, then you can use the -Xmx setting to force down the maximum size of the process and cause your program to constantly GC between phases. I've done that before with some success.

Java 6 Excessive Memory Usage

Does Java 6 consume more memory than you expect for largish applications?
I have an application I have been developing for years, which has, until now, taken about 30-40 MB in my particular test configuration; now with Java 6u10 and 6u11 it is taking several hundred MB while active. It bounces around a lot, anywhere between 50 MB and 200 MB, and when it idles, it does GC and drops the memory right down. In addition it generates millions of page faults. All of this is observed via Windows Task Manager.
So I ran it up under my profiler (JProfiler) and under Java VisualVM, and both of them indicate the usual moderate heap and perm-gen usage of around 30 MB combined, even when fully active doing my load-test cycle.
So I am mystified! And it's not just requesting more memory from the Windows virtual memory pool - this is showing up as 200 MB of "Mem Usage".
CLARIFICATION: I want to be perfectly clear on this - observed over an 18-hour period with Java VisualVM, the class heap and perm gen heap have been perfectly stable. The allocated volatile heap (eden and tenured) sits unmoved at 16 MB (which it reaches in the first few minutes), and the use of this memory fluctuates in a perfect pattern, growing evenly from 8 MB to 16 MB, at which point GC kicks in and drops it back to 8 MB. Over this 18-hour period, the system was under constant maximum load, since I was running a stress test. This behavior is perfectly and consistently reproducible, seen over numerous runs. The only anomaly is that while this is going on, the memory taken from Windows, observed via Task Manager, fluctuates all over the place, from 64 MB up to 900+ MB.
UPDATE 2008-12-18: I have run the program with -Xms16M -Xmx16M without any apparent adverse effect - performance is fine, total run time is about the same. But memory use in a short run still peaked at about 180 MB.
Update 2009-01-21: It seems the answer may be in the number of threads - see my answer below.
EDIT: And I mean millions of page faults literally - in the region of 30M+.
EDIT: I have a 4G machine, so the 200M is not significant in that regard.
In response to a discussion in the comments to Ran's answer, here's a test case that proves that the JVM will release memory back to the OS under certain circumstances:
public class FreeTest
{
    public static void main(String[] args) throws Exception
    {
        byte[][] blob = new byte[60][1024 * 1024];
        for (int i = 0; i < blob.length; i++)
        {
            Thread.sleep(500);
            System.out.println("freeing block " + i);
            blob[i] = null;
            System.gc();
        }
    }
}
I see the JVM process' size decrease when the count reaches around 40, on both Java 1.4 and Java 6 JVMs (from Sun).
You can even tune the exact behaviour with the -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio options -- some of the options on that page may also help with answering the original question.
I don't know about the page faults, but about the huge memory allocated for Java:
Sun's JVM deallocates memory back to the OS only after a specific ratio between internal memory needs and allocated memory drops beneath a (tunable) value. The JVM starts with the amount specified in -Xms and can be extended up to the amount specified in -Xmx. I'm not sure what the defaults are. Whenever the JVM needs more memory (new objects / primitives / arrays) it allocates an entire chunk from the OS. However, when the need subsides (a momentary need; see point 2 as well) it doesn't deallocate the memory back to the OS immediately, but keeps it to itself until that ratio has been reached. I was once told that JRockit behaves better, but I can't verify it.
Sun's JVM runs a full GC based on several triggers. One of them is the amount of available memory - when it drops too low the JVM tries to perform a full GC to free some more. So, when more memory is allocated from the OS (momentary need), the chance of a full GC is lowered. This means that while you may see 30 MB of "live" objects, there might be a lot more "dead" objects (not reachable), just waiting for a GC to happen. I know YourKit has a great view called "dead objects" where you can see these left-overs.
In "-server" mode, Sun's JVM runs GC in parallel mode (as opposed the older serial "stop the world" GC). This means that while there may be garbage to collect, it might not be collected immediately because of other threads taking all available CPU time. It will be collected before reaching out of memory (well, kinda. see http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html), if more memory can be allocated from the OS, it might be before the GC runs.
Combined, a large initial memory configuration and short bursts creating a lot of short-lived objects might create a scenario as described.
edit: changed "never deallcoates" to "only after ratio reached".
Excessive thread creation explains your problem perfectly:
Each Thread gets its own stack, which is separate from heap memory and therefore not registered by profilers
The default thread stack size is quite large, IIRC 256KB (at least it was for Java 1.3)
Thread stack memory is probably not reused, so if you create and destroy lots of threads, you'll get lots of page faults
If you ever really need to have hundreds of threads around, the thread stack size can be configured via the -Xss command line parameter.
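If you cannot avoid creating the threads, you can also request a smaller stack per thread in code; the size is only a hint that the VM may round up or ignore:
Runnable task = new Runnable() {
    public void run() { /* ... */ }
};
// Thread(ThreadGroup, Runnable, name, stackSize) - here asking for a 128 KB stack
Thread t = new Thread(null, task, "small-stack-worker", 128 * 1024);
t.start();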
Garbage collection is a rather arcane science. As the state of the art develops, un-tuned behaviour will change in response.
Java 6 has different default GC behaviour and different "ergonomics" to earlier JVM versions. If you tell it that it can use more memory (either explicitly on the command line, or implicitly by failing to specify anything more explicit), it will use more memory if it believes that this is likely to improve performance.
In this case, Java 6 appears to believe that reserving the extra space which the heap could grow into will give it better performance - presumably because it believes that this will cause more objects to die in Eden space, and limit the number of objects promoted to the tenured generation space. And from the specifications of your hardware, the JVM doesn't think that this extra reserved heap space will cause any problems. Note that many (though not all) of the assumptions the JVM makes in reaching its conclusion are based on "typical" applications, rather than your specific application. It also makes assumptions based on your hardware and OS profile.
If the JVM has made the wrong assumptions, you can influence its behaviour through the command line, though it is easy to get things wrong...
Information about performance changes in Java 6 can be found here.
There is a discussion about memory management and performance implications in the Memory Management White Paper.
Over the last few weeks I had cause to investigate and correct a problem with a thread pooling object (a pre-Java 6 multi-threaded execution pool), where it was launching far more threads than required. In the jobs in question there could be up to 200 unnecessary threads. And the threads were continually dying and being replaced by new ones.
Having corrected that problem, I thought to run a test again, and now it seems the memory consumption is stable (though 20 or so MB higher than with older JVMs).
So my conclusion is that the spikes in memory were related to the number of threads running (several hundred). Unfortunately I don't have time to experiment.
If someone would like to experiment and answer this with their conclusions, I will accept that answer; otherwise I will accept this one (after the 2 day waiting period).
Also, the page fault rate is way down (by a factor of 10).
Also, the fixes to the thread pool corrected some contention issues.
Lots of memory allocated outside Java's heap after upgrading to Java 6u10? Can only be one thing:
Java6 u10 Release Notes: "New Direct3D Accelerated Rendering Pipeline (...) Enabled by Default"
Sun enabled Direct3D acceleration by default in Java 6u10. This option creates lots of (temporary?) native memory buffers, which are allocated outside the Java heap. Add the following VM argument to disable it again:
-Dsun.java2d.d3d=false
Note that this will NOT disable 2D hardware acceleration, just some features that can make use of 3D hardware acceleration. You will see that your Java heap usage will increase by up to 7MB, but that's a good trade-off because you'll save ~100MB(+) of this temporary volatile memory.
I did a fair amount of testing within two Swing desktop applications, on two platforms:
a high-end Intel-i7 with nVidia GTX 260 graphics card,
a 3-year-old laptop with Intel graphics.
On both hardware platforms the option made practically zero subjective difference. (Tests included: scrolling tables, zooming graphical flowsheets, charts, etc.). On the few tests where something was subtly different, disabling d3d counter-intuitively increased performance. I suspect that memory management/bandwidth problems counteracted whatever benefits the d3d accelerated functions were supposed to achieve. (Your mileage may vary!)
If you need to do some performance tuning, here's an excellent reference (e.g. "Troubleshooting Java 2D")
Are you using the ConcMarkSweep collector? It can increase the amount of memory required for your application due to increased memory fragmentation, and "floating garbage" - objects that become unreachable only after the collector has examined them, and therefore are not collected until the next pass.
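If you are not sure which collectors your JVM is actually running, the standard management API will list them (CMS typically shows up under a name like "ConcurrentMarkSweep"):
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class WhichCollector {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}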
