I have an application that causes an OutOfMemoryError, so I try to debug it using Runtime.getRuntime().freeMemory(). Here is what I get:
freeMemory=48792216
## Reading real sentences file...map size=4709. freeMemory=57056656
## Reading full sentences file...map size=28360. freeMemory=42028760
freeMemory=42028760
## Reading suffix array files of main corpus ...array size=513762 freeMemory=90063112
## Reading reverse suffix array files... array size=513762. freeMemory=64449240
I am trying to understand the behaviour of freeMemory. It starts at 48 MB, then - after I read a large file - it jumps UP to 57 MB, then down again to 42 MB, then - after I read a very large file (513762 elements) - it jumps UP to 90 MB, then down again to 64 MB.
What happens here? How can I make sense of these numbers?
Java memory is a bit tricky. Your program runs inside the JVM, the JVM runs inside the OS, and the OS uses your computer's resources. When your program needs memory, the JVM first checks whether it has already obtained memory from the OS that is currently unused; if there isn't enough, the JVM asks the OS for more and, if possible, obtains it.
From time to time, the JVM will look around for memory that is not used anymore, and will free it. Depending on a (huge) number of factors, the JVM can also give that memory back to the OS, so that other programs can use it.
This means that, at any given moment, there is a certain quantity of memory the JVM has obtained from the OS, and a certain amount of it the JVM is currently using.
At any given point, the JVM may refuse to acquire more memory because it has been instructed to do so (e.g. via -Xmx), or the OS may deny the JVM access to more memory, either because it too has been instructed to do so or simply because there is no more free RAM.
When you run your program on your own computer, you are probably not giving the JVM any limit, so it can use plenty of RAM. When running on Google's platform, there could be limits imposed on the JVM by Google's operators, so the available memory may be less.
Runtime.freeMemory will tell you how much of the RAM the JVM has obtained from the OS is currently free.
When you allocate a big object, say one MB, the JVM may request more RAM from the OS, say 5 MB, so freeMemory can end up 4 MB higher than before, which is counterintuitive. Allocating another MB will probably shrink free memory as expected, but later the JVM could decide to release some memory back to the OS, and freeMemory will shrink again for no apparent reason.
Using totalMemory and maxMemory in combination with freeMemory gives you better insight into your current RAM limits and consumption.
To understand why you are consuming more RAM than you would expect, you should use a memory profiler. A simple but effective one is packaged with VisualVM, a tool usually already installed with the JDK. There you'll be able to see what is using RAM in your program and why that memory cannot be reclaimed by the JVM.
(Note: the memory system of the JVM is far more complicated than this, but I hope this simplification helps you understand more than a complete and complicated picture would.)
It's not terribly clear or user friendly. If you look at the Runtime API you see three different memory calls:
freeMemory - Returns the amount of free memory in the Java Virtual Machine. Calling the gc method may result in increasing the value returned by freeMemory.
totalMemory - Returns the total amount of memory in the Java Virtual Machine. The value returned by this method may vary over time, depending on the host environment.
maxMemory - Returns the maximum amount of memory that the Java Virtual Machine will attempt to use.
When you start up the JVM, you can set the initial heap size (-Xms) as well as the max heap size (-Xmx). E.g. java -Xms100m -Xmx200m starts with a heap of 100m, grows the heap as more space is needed up to 200m, and fails with an OutOfMemoryError if it needs to grow beyond that. So there's a ceiling, which is what maxMemory() gives you.
The memory currently available in the JVM is somewhere between your starting size and the max. That's your totalMemory(). freeMemory() is how much is free out of that total.
To add to the confusion, see what they say about gc: "Calling the gc method may result in increasing the value returned by freeMemory." This implies that uncollected garbage is not counted as free memory.
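Here is a minimal sketch (nothing assumed beyond the standard Runtime API) that makes this visible: create some garbage, read freeMemory, then call System.gc() and read it again; the second value is typically higher.
Runtime rt = Runtime.getRuntime();
for (int i = 0; i < 1000; i++) {
    byte[] garbage = new byte[100000]; // becomes unreachable right away
}
System.out.println("freeMemory before gc: " + rt.freeMemory());
System.gc(); // only a hint, but on HotSpot it usually triggers a collection
System.out.println("freeMemory after gc:  " + rt.freeMemory());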
OK, based on your comments I wrote this function, which prints a summary of memory measures:
static String memory() {
    final long unit = 1000000; // report values in (decimal) MB
    Runtime rt = Runtime.getRuntime();
    long usedMemory = rt.totalMemory() - rt.freeMemory();
    long availableMemory = rt.maxMemory() - usedMemory;
    return "Memory: free=" + (rt.freeMemory() / unit)
            + " total=" + (rt.totalMemory() / unit)
            + " max=" + (rt.maxMemory() / unit)
            + " used=" + (usedMemory / unit)
            + " available=" + (availableMemory / unit);
}
It seems that the best measures of how much my program is using are usedMemory and the complementary availableMemory. They increase/decrease monotonically as I use more memory:
Memory: free=61 total=62 max=922 used=0 available=921
Memory: free=46 total=62 max=922 used=15 available=906
Memory: free=46 total=62 max=922 used=15 available=876
Memory: free=44 total=118 max=922 used=73 available=877
Memory: free=97 total=189 max=922 used=92 available=825
Try running your app with something like JConsole: http://download.oracle.com/javase/1.5.0/docs/guide/management/jconsole.html
It comes with the JDK (or certainly used to) and is invaluable for monitoring what is happening inside the JVM during the execution of an application.
It will provide more useful insight into what is going on with your memory than your debug statements.
Also, if you are really keen, you can learn a bit more about tuning garbage collection via something like:
http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
This is pretty in depth, but it is good to get an insight into the various generations of memory in the JVM and how objects are retained in these generations. If you are seeing that objects are being retained in old gen and old gen is continually increasing, then this could be an indicator of a leak.
For debugging why data is being retained and not collected, you can't go past profilers. Check out JProfiler or YourKit.
Best of luck.
Related
I am running a memory intensive application. Some info about the environment:
64 bit debian
13 GB of RAM
64 bit JVM (I output System.getProperty("sun.arch.data.model") when my program runs, and it says "64")
Here is the exact command I am issuing:
java -Xmx9000m -jar "ale.jar" testconfig
I have run the program with the same exact data, config, etc. on several other systems, and I know that the JVM uses (at its peak) 6 GB of memory on those systems. However, here I am getting an OutOfMemory error. Furthermore, during the execution of the program, the system never drops below 8.5 GB of free memory.
When I output Runtime.getRuntime().maxMemory() during execution, I get the value 3044540416, i.e. ~ 3 GB.
I don't know whether it is relevant, but this is a Google Compute Engine instance.
The only explanation I can think of is that there may be some sort of system restriction on the maximum amount of memory that a single process may use.
-Xmx will only set the maximum assigned memory. Use -Xms to specify the minimum. Setting them to the same value will make the memory footprint static.
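For instance, reusing the command from the question, pinning both values might look like this (a sketch, not a recommendation for your exact workload):
java -Xms9000m -Xmx9000m -jar "ale.jar" testconfig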
The only explanation I can think of is that there may be some sort of system restriction on the maximum amount of memory that a single process may use.
That is one possible explanation.
Another one is that you are attempting to allocate a really large array. The largest possible array has 2^31 - 1 elements, but the actual maximum size in bytes depends on the element size:
byte[] or boolean[] ... 2 GB
char[] or short[] ... 4 GB
int[] ... 8 GB
long[] or Object[] ... 16 GB
If you allocate a really large array, the GC needs to find a contiguous region of free memory of the required size. Depending on the array size and how the heap is split into spaces, the amount of contiguous space it can find may be considerably less than you think.
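As a back-of-the-envelope sketch (the array length here is hypothetical, not taken from the question), you can estimate the contiguous heap a primitive array needs as length times element size:
long length = 400000000L;              // hypothetical array length
long bytesForLongArray = length * 8;   // long is 8 bytes -> ~3.2 GB of contiguous heap needed
long bytesForIntArray  = length * 4;   // int is 4 bytes  -> ~1.6 GB
System.out.println(bytesForLongArray + " / " + bytesForIntArray);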
A third possibility is that you are getting OOMEs because the GC is hitting the GC Overhead limit for time spent running the GC.
Some of these theories could be confirmed or dismissed if you showed us the stacktrace ...
We are currently developing an application which visualizes huge vector fields (> 250'000) on a sphere/plane in 4D. To speed up the process we are using VBOs for the vertices, normals and colors. To prepare the data before sending it down to the GPU we are using Buffers (FloatBuffer, ByteBuffer, etc.).
Some data about the cylinders:
Each cylinder uses 16 * 9 + 16 * 3 = 192 floats -> 192 * 4 bytes = 768 bytes.
After sending down the vertices we are doing the following cleanup:
// clear all buffers
vertexBufferShell.clear();
indexBufferShell.clear();
vertexBufferShell = null;
indexBufferShell = null;
We have monitored it with JConsole and found out that the garbage collector is not run "correctly". Even if we decrease the cylinder count, the memory does not get freed up. In the JConsole monitoring tool there is a button to run the GC, and if we do that manually it frees up the memory (if we have loaded a huge number of cylinders and then decrease it a lot, sometimes over 600 MB gets cleaned by the GC).
Here is an image of the JConsole:
Now the question is: how can we clean up these Buffers ourselves in the code? Calling the clear method and setting the reference to null is not enough. We have also tried calling System.gc(), but with no effect. Do you have any idea?
There are any number of reasons the memory usage could increase. I would say it's not a memory leak unless the memory increases every time you perform this operation. If it only occurs the first time, it may be that this library needs some memory to load.
I suggest you take a heap dump, or at least run jmap -histo:live before and after, to see where the memory increase is.
If you use a memory profiler like VisualVM or YourKit it will show you where and why memory is being retained.
It's not really a memory leak if the GC is able to clean it up. It might be a waste of memory, but your app seems to be configured to allow it to use over 800 MB of heap. This is a trade-off between garbage collection performance and memory usage. You could also try simply running your application with a smaller heap size.
There might not be a memory leak; it could just be objects being promoted to the Tenured generation (the area where objects that survive a minor GC go).
The big steps you see might be the young Eden space filling up and, after a minor GC, surviving objects being moved to Tenured.
You can also try to tune the garbage collector and the memory settings.
You might have plenty of medium-lived objects that are constantly being promoted to Tenured and only released in a full GC. If you size the young generation well, those objects die in a minor GC instead.
There are plenty of JVM arguments for this.
A good place to look at is here.
This one is suitable for you:
-XX:NewSize=2.125m
Default size of new generation (in bytes)
[5.0 and newer: 64 bit VMs are scaled 30% larger; x86: 1m; x86, 5.0 and older: 640k]
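As a sketch of how such flags get passed on the command line (the sizes and jar name are made up, not recommendations):
java -XX:NewSize=256m -XX:MaxNewSize=256m -jar myapp.jar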
Regards.
The JVM will not free any objects until it has to (e.g. when -Xmx is reached). That's one of the main concepts behind all the GCs you can find in the current JVM. They are all optimised for throughput, even the concurrent one. I see nothing unusual in the GC graph.
It would be a leak if the used heap after a full GC constantly grew over time; if it doesn't, all is good.
In short: foo = null; will not release the object, only the reference. The GC can free the memory whenever it likes to.
also:
buffer.clear() does not clear the buffer; it sets position = 0 and limit = capacity, that's all. Please refer to the javadoc for more info.
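A tiny sketch (plain standard NIO, nothing else assumed) that shows what clear() actually does:
import java.nio.FloatBuffer;

FloatBuffer buf = FloatBuffer.allocate(4);
buf.put(1f).put(2f);
System.out.println("before clear: position=" + buf.position() + " limit=" + buf.limit());
buf.clear(); // resets position to 0 and limit to capacity; the contents are untouched
System.out.println("after clear:  position=" + buf.position() + " limit=" + buf.limit());
System.out.println("data still there: " + buf.get(0)); // prints 1.0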
VisualVM +1
have fun :)
(Off topic: if the buffers are large and static you should allocate them as direct buffers outside the Java heap; Buffers.newDirectFloatBuffer() would be one of those utility methods in the latest gluegen-rt.)
Currently in our testing environment the max and min JVM heap size are set to the same value, basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance or would giving the JVM a range be better?
Peter's answer is correct in that -Xms is allocated at startup and the heap will grow up to -Xmx (the max heap size), but it's a little misleading in how he has worded it. (Sorry Peter, I know you know this stuff cold.)
Setting ms == mx effectively turns off this behavior. While this used to be a good idea in older JVMs, it is no longer the case. Growing and shrinking the heap allows the JVM to adapt to increases in pressure on memory yet reduce pause time by shrinking the heap when memory pressure is reduced. Sometimes this behavior doesn't give you the performance benefits you'd expect and in those cases it's best to set mx == ms.
An OOME is thrown when more than 98% of the time is spent collecting and the collections recover less than 2% of the heap. If you are not at the max heap size, the JVM will simply grow the heap so that you don't hit those boundaries. You cannot get an OutOfMemoryError on startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
For the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
min_gen_size());
I could do more digging, but I'd bet I'd find code that serves the same purpose in the ParNew, PSOldGen and CMS Tenured implementations. In fact, it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF, the serial collector will run, and that should include a compaction, after which the top of the heap would most likely be clean and therefore eligible to be deallocated.
The main reason to set -Xms is if you need a certain heap size at startup (it prevents OutOfMemoryErrors from happening on startup). As mentioned above, if you need the startup heap to match the max heap, that is when you would match them. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load testing and using your application should give you a good feel for what to set them to. But it isn't the worst thing to set them to the same value on startup. For a lot of our apps, I actually start out with something like 128, 256, or 512 for the min (startup) and one gigabyte for the max (this is for non application server applications).
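For example, a startup line along those lines might look like this (myapp.jar is just a placeholder name):
java -Xms512m -Xmx1g -jar myapp.jar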
I just found this question on Stack Overflow which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you know pretty much how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very few times. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, pretty large value (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it?
(No it doesn't save electricity if you don't use a memory cell)
The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will be tenured.
Especially during server startup, the load is even higher than normal. It's brain-dead to give the server a small amount of memory to work with at this stage.
From what I see here at http://java-monitor.com/forum/showthread.php?t=427
the JVM under test begins with the Xms setting, but WILL deallocate memory it doesn't need, and it will take it up to the Xmx mark when it needs it.
Unless you need a chunk of memory dedicated to a big memory consumer initially, there's not much point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.
I want to calculate the heap usage of my app. I would like to get a percentage value of the heap size only.
How do I get this value in code for the currently running app?
EDIT
There was an upvoted answer that was NOT complete/correct. The values returned by those methods include the stack and method area too, and I need to monitor only the heap size.
With that code I got a HeapError exception when I reached 43%, so I can't use those methods to monitor just the heap:
Runtime.getRuntime().totalMemory()
dbyme's answer is not accurate: these Runtime calls give you the amount of memory used by the JVM, but this memory does not consist only of the heap; there is also the stack and the method area, for example.
This information is exposed over the JMX management interface. If you simply want to look at it, JConsole or VisualVM (part of the JDK, installed in JAVA_HOME/bin) can display nice graphs of a JVM's memory usage, optionally broken down into the various memory pools.
This interface can also be accessed programmatically; see MemoryMXBean.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
long usedHeap = bean.getHeapMemoryUsage().getUsed(); // heap only, in bytes
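Building on that, here is a minimal sketch (only standard java.lang.management types assumed) that turns it into the heap-usage percentage the question asks for:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
// getMax() can be -1 when no maximum is defined; fall back to the committed size then
long max = heap.getMax() > 0 ? heap.getMax() : heap.getCommitted();
double percentUsed = 100.0 * heap.getUsed() / max;
System.out.printf("heap used: %.1f%%%n", percentUsed);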
There really is no good answer, since how much heap memory the JVM has free is not the same as how much memory the operating system has free, and neither is the same as how much memory can be assigned to your application.
This is because the JVM and OS heaps are different. When the JVM runs out of memory, it may run garbage-collection, defragment its own heap, or request more memory from the OS. Since unused non-garbage-collected objects still exist, but are technically "free", they make the concept of free memory a bit fuzzy.
Also, heap memory fragments; how/when/if memory is defragmented is up to the implementation of the JVM/OS. For example, the OS-heap may have 100MB of free memory, but due to fragmentation, the largest available contiguous space may be 2MB. Thus, if the JVM requests 3MB, it may get an out-of-memory error, even though 100MB are still available. It is not possible for the JVM to know ahead of time that the OS won't be able to allocate that 3MB.
I want to limit the maximum memory used by the JVM. Note, this is not just the heap, I want to limit the total memory used by this process.
Use the arguments -Xms<memory> -Xmx<memory>. Use M or G after the number to indicate megabytes or gigabytes respectively. -Xms sets the minimum and -Xmx the maximum.
You shouldn't have to worry about the stack leaking memory (it is highly uncommon). The only time you can have the stack get out of control is with infinite (or really deep) recursion.
This is just the heap, though. Sorry, I didn't read your question fully at first.
You need to run the JVM with the following command line argument.
-Xmx<amount of memory>
Example:
-Xmx1024m
That will allow a max of 1GB of memory for the JVM.
If you want to limit the memory for the JVM (not just the heap size), use:
ulimit -v
To get an idea of the difference between JVM memory and heap memory, take a look at this excellent article:
http://blogs.vmware.com/apps/2011/06/taking-a-closer-look-at-sizing-the-java-process.html
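For illustration only (the numbers and jar name are made up, and on Linux the value given to ulimit -v is in kilobytes):
ulimit -v 4194304               # cap the whole process at roughly 4 GB of virtual memory
java -Xmx1024m -jar myapp.jar   # myapp.jar is a placeholder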
The answer above is kind of correct: you can't gracefully control how much native memory a Java process allocates. It depends on what your application is doing.
That said, depending on the platform, you may be able to use some mechanism, ulimit for example, to limit the size of a Java or any other process.
Just don't expect it to fail gracefully if it hits that limit. Native memory allocation failures are much harder to handle than allocation failures on the Java heap. There's a fairly good chance the application will crash, but depending on how critical it is to the system to keep the process size down, that might still suit you.
The native (direct) memory can be increased with -XX:MaxDirectMemorySize=256M
(the default is 128).
I've never used it. Maybe you'll find it useful.
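For reference, passing it would look something like this (the sizes and jar name are placeholders):
java -XX:MaxDirectMemorySize=256M -Xmx512m -jar myapp.jar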