VM size (Task Manager) vs. heap size of a Java application

I want to find a memory leak in a Java 1.5 application. I use JProfiler for profiling.
In the Windows Task Manager I see that the VM size for my application is about 790,000 KB (up from approx. 300,000 KB). In the profiler I see that the allocated heap is 266 MB (also increasing).
It's probably a rookie question, but what else can occupy so much memory besides the heap, so that the VM size (or private bytes size) reaches approx. 700 MB?
I should mention that there are approx. 1200 threads running, which, according to an answer here, can occupy quite some memory, but I think there is still quite a gap up to 700 MB. By the way, how can I see how much memory the thread stacks occupy?
Thanks.

The JVM can use a lot of virtual memory which may not be resident in physical memory. On startup it reserves the heap and maps in its shared libraries. Loaded classes use PermGen space. An application can also use direct memory, which can be as large as the maximum heap. A stack is allocated for each thread as it is created. In each case, until this memory is actually used, it may not be backed by physical memory. As the application warms up, more of the virtual memory can become physical memory.
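To see from inside the process roughly where the memory goes, the standard java.lang.management beans (available since Java 5) report heap usage, non-heap usage, and thread counts. A minimal sketch; the 512 KB per-thread stack size below is an assumption, the real value depends on the platform and on -Xss:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class MemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Heap vs. non-heap (PermGen, code cache, etc.)
        System.out.println("Heap:     " + mem.getHeapMemoryUsage());
        System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage());

        // With many threads, stacks alone can reserve hundreds of MB.
        int count = threads.getThreadCount();
        long assumedStackKb = 512; // assumption: actual size is set by -Xss / platform default
        System.out.println("Live threads: " + count);
        System.out.println("Approx. stack reservation: "
                + (count * assumedStackKb / 1024) + " MB");
    }
}

If the stacks really are 512 KB each, 1200 threads reserve roughly 600 MB, which goes a long way toward explaining a 700 MB gap between VM size and heap size.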
If you believe your JVM is not running efficiently, the first thing I would try is Java 6, which has had many fixes and improvements since Java 5.0.

Related

JPype / Java - Initialize with, or get, remaining heap space

We have software written in Python, which uses JPype to call Java, which performs various resource-heavy calculations / report building. We originally assigned 800 MB of heap space when starting the JVM. The Java side is fully multithreaded and will work with whatever resources are available to it.
jvmArgs = ["-Djava.class.path=" + classpath, "-Xmx800M"]
jpype.startJVM(u"java\\jre8\\bin\\client\\jvm.dll", *jvmArgs)
This worked well until we tested on Windows XP for our legacy clients. The new machines are Win 7 64-bit with 4 GB of RAM, whereas the old ones are Win XP 32-bit with only 2 GB of RAM.
The issue is that JPype causes our application to crash ungracefully and silently if we allocate too much memory. A try/except doesn't even get triggered on the statement above.
I'm wondering if there's a way to use Java from the command line to determine how much memory we can allocate on a machine. We can check whether it's 32-bit or 64-bit, which helps, but we also need to make sure other running programs haven't left too little memory for the JVM. If they have, our application will crash.
Reader's Digest version: we'd like to allocate 500 MB of heap space when initializing the JVM, but can't be sure how much memory is currently free. If we allocate too much, the entire application silently crashes.
We use the following
JPype: 0.5.4.2
Python: 2.7
Java: 1.8 or 1.7 (64-bit or 32-bit)
Thanks.
The memory consumed by the JVM consists of two main areas:
Heap memory
Non-heap memory: Metaspace, native method stacks, the PC register, direct byte buffers, sockets, JNI-allocated memory, thread stacks, and more
While the maximum size that will be used for the heap memory is known and configurable, the size of the non-heap memory cannot be fully controlled.
The size of the native memory used by the JVM is affected by the number of threads you use, the number of classes being loaded, and the use of buffers (I/O).
You can limit the size of the Metaspace by setting -XX:MaxMetaspaceSize. You can control the amount of memory used for thread stacks by limiting the number of threads and setting the thread stack size (-Xss).
Assuming you do not have native memory leaks, the number of classes being loaded is stable (no excessive use of dynamic proxies or bytecode generation), and the number of threads being used is known, you can estimate how much memory your application will require by monitoring the overall memory used by the JVM over a period of time. When you do, make sure the entire heap is allocated when the JVM starts.
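As for checking available memory from the command line before starting the JVM: a tiny probe class can print the free physical memory for the Python side to read. This is a sketch; com.sun.management is a HotSpot extension present in Oracle/OpenJDK 7 and 8 but not a portable guarantee, and the class name is just for illustration:

import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

public class FreeMemoryProbe {
    public static void main(String[] args) {
        // Cast to the com.sun.management variant to get physical-memory figures.
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long freeMb = os.getFreePhysicalMemorySize() / (1024 * 1024);
        System.out.println(freeMb); // free physical memory in MB
    }
}

The Python side could run this via subprocess before jpype.startJVM and only pass -Xmx500M when the reported headroom is comfortably larger than 500 MB.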

How does Java reserve/use memory?

This is probably a noob question, but I need to run a Java application processing a large dataset, so I set -Xmx14G, knowing that my machine has 16 GB of physical memory.
A short while later, boom, I am notified by my operating system that my startup disk is almost full. I checked my process: there is no OOM exception, it just stalled. My activity monitor doesn't say the application is running at full memory capacity.
How does the JVM reserve/use memory?
Typically, the JVM allocates new memory until the heap is full, at which point it garbage collects, freeing up unreferenced objects. If you allocated 14 GB for the heap, chances are it will eventually consume that much memory.
There is another JVM argument, -Xms<size>, which sets the initial heap size. If you do not set it explicitly, the JVM chooses a value automatically depending on the machine configuration. This value is never very large, typically 64 MB. Later the JVM may allocate more memory, up to the maximum, but only when the application really uses it. If actual memory usage decreases, the JVM may shrink the heap back to a smaller size.
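You can watch this behavior from inside a program with the standard Runtime API; a minimal sketch (class name for illustration only):

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() is what the JVM has
        // actually committed so far, which starts near -Xms.
        System.out.println("max (-Xmx):        " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("committed (total): " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free of committed: " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}

Run it as java -Xms64m -Xmx14g HeapGrowth: totalMemory() starts near the -Xms value and only grows toward maxMemory() as the application actually allocates.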

-Xmx attribute and available system memory correlation

I have a question on my mind. Let's assume that I have two parameters passed to JVM:
-Xms256mb -Xmx1024mb
At the beginning of the program 256 MB is allocated. Next, some objects are created and the JVM process tries to allocate more memory. Let's say that the JVM needs to allocate 800 MB. The Xmx attribute allows that, but the memory currently available on the system (say Linux or Windows) is only 600 MB. Is it possible that an OutOfMemoryError will be thrown? Or will the swap mechanism play a role?
My second question concerns the quality of GC algorithms. Let's say that I have jdk1.5u7 and jdk1.5u22. Is it possible that in the latter JVM the memory leaks vanish and the OutOfMemoryError does not occur? Can the quality of the GC be better in the later version?
The quality of the GC (barring a buggy GC) does not affect memory leaks, as memory leaks are an artifact of the application -- GC can't collect what isn't actual garbage.
If a JVM needs more memory, it will take it from the system. If the system can swap, it will swap (like any other process). If the system cannot swap, your JVM will fail with a system error, not an OOM exception, because the system cannot satisfy the request, and at that point it's effectively fatal.
As a rule, you NEVER want an active JVM partially swapped out. A GC event will crush you as the system thrashes, cycling pages through the virtual memory system. It's one thing to have an idle background JVM swapped out as a whole, but if your machine has 1 GB of RAM and your main process wants 1.5 GB, then you have a major problem.
The JVM likes room to breathe. I've seen JVMs in a GC death spiral when they didn't have enough memory, even though they didn't have memory leaks. They simply didn't have a large enough working set. Adding another chunk of heap transformed that JVM from awful to happy sawtooth GC graphs.
Give a JVM the memory it needs, and both you and it will be much happier.
"Memory" and "RAM" aren't the same thing. Memory includes virtual memory (swap), so you can allocate a total of free RAM+ free swap before you get the OutOfMemoryError.
Allocation depends on the OS being used.
If you allocate too much memory, portions of the heap could end up paged out to swap, which is slow.
Whether your program runs faster or slower depends on how the VM handles the memory.
I would not specify a heap so big that it occupies all the physical memory, to prevent the slowdowns caused by swapping.
Concerning your first question:
Actually, if the machine cannot allocate the 1024 MB that you asked for as max heap size, it will not even start the JVM.
I know this because I often noticed it when trying to open Eclipse with a large heap size: when the OS could not allocate the larger heap space, the JVM failed to load. You could also try it out yourself to confirm. So the rest of the details are irrelevant to you. Of course, if your program uses too much swap (same as in all languages) then the performance will be horrible.
Concerning your second question:
the memory leaks vanish
Not possible, as they are bugs you will have to fix.
and OutOfMemoryError does not occur? Can the quality of GC be better in the latest version?
This could happen if, for example, a different GC algorithm is used and it manages to kick in before you see the exception. But if you have a memory leak, this would probably only mask it, or you would see the error intermittently.
Also, various JVMs have different GCs that you can configure.
Update:
I have to admit (after seeing @Orochi's note) that I observed the max-heap behavior on Windows. I cannot say for sure that this applies to Linux as well, but you could try it yourself.
Update 2:
In answer to the comments of @DennisCheung:
From IBM (my emphasis):
The table shows both the maximum Java heap possible and a recommended limit for the maximum Java heap size setting [...] It is important to have more physical memory than is required by all of the processes on the machine combined to prevent paging or swapping. Paging reduces the performance of the system and affects the performance of the Java memory management system.

What are some tools that can analyse memory usage outside of the heap in Java?

We have a weird memory leak problem with a Java process running on Linux: it has ever-growing swap usage. So naturally we looked at the heap dump and also used a profiler to monitor it over a period of time. We found that:
1) The number of threads does not grow
2) The heap usage does not grow
3) Yet the virtual memory (VIRT) usage keeps growing (which can become a problem because the system starts to run out of swap space)
Now there are a ton of tools that can dump the heap or monitor the heap, but none for memory outside of the heap. Anyone have any ideas?
PS: this is a remote server; we don't have access to any GUI.
You could be leaking something in native memory, like Sockets. Are there lots of connections happening, and are you closing out the connections in a finally block?
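If direct or mapped byte buffers are a suspect, they can be watched headlessly (no GUI needed) through the standard BufferPoolMXBean, available since Java 7. A minimal sketch; the class name and polling interval are just for illustration:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class OffHeapWatch {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            // The platform exposes "direct" and "mapped" buffer pools.
            List<BufferPoolMXBean> pools =
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            for (BufferPoolMXBean pool : pools) {
                System.out.printf("%s: count=%d, used=%d KB%n",
                        pool.getName(), pool.getCount(),
                        pool.getMemoryUsed() / 1024);
            }
            Thread.sleep(5000);
        }
    }
}

If the "direct" pool grows in step with VIRT, you have your culprit; if not, the leak is more likely in JNI code or thread stacks.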
Doesn't the case where 1) the process's heap space does not change but 2) the swap usage does change indicate that some other process on the box might be responsible for sudden growths in memory usage?
In other words, my understanding was that something like swap usage was regulated by the OS - so if a Java process's own heap usage does not change but the swap usage does, that would seem to indicate to me that the problem lies elsewhere, and it just so happens that the OS is choosing your Java process to start eating up swap space.
Or do I have the wrong understanding of swap space?
Do the other parts of JVM memory grow? For example, the PermGen space?
Do you use native libraries (JNI)?
I'll try to answer by answering another question.
Is it possible that the heap size configured for the JVM is larger than the free physical memory you have? Even if you define an initial heap size much smaller than the maximum heap size, once the JVM allocates it all, it will never return it to the OS, even if you garbage collect everything and have no further allocations. Don't configure a 1.5 GB max heap on a server with 1 GB of RAM. Please check that the configured maximum heap size fits within the free RAM you have, together with the other processes, especially if it's a server application. Otherwise, your application will incur a lot of page faults and will swap all the time.
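A startup sanity check along these lines can catch the misconfiguration early. This is a sketch; com.sun.management is a HotSpot-specific extension and not available on every JVM:

import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

public class HeapVsRamCheck {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long physical = os.getTotalPhysicalMemorySize();
        System.out.printf("max heap: %d MB, physical RAM: %d MB%n",
                maxHeap >> 20, physical >> 20);
        if (maxHeap > physical) {
            // The "1.5 GB heap on a 1 GB server" mistake described above.
            System.err.println("WARNING: -Xmx exceeds physical RAM; expect paging.");
        }
    }
}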

Java memory usage with native processes

What is the best way to tune a server application written in Java that uses a native C++ library?
The environment is a 32-bit Windows machine with 4GB of RAM. The JDK is Sun 1.5.0_12.
The Java process is given 1024MB of memory (-Xmx) at startup but I often see OutOfMemoryErrors due to lack of heap space. If the memory is increased to 1200MB, the OutOfMemoryErrors occur due to lack of swap space. How is the memory shared between the JVM and the native process?
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
I had lots of trouble with that setting (Java on 32-bit systems, MS Windows and others), and it was all solved by reserving just under 1 GB of RAM for the JVM.
Otherwise, as stated, the actual occupied memory in the system for that process would be over 2 GB; at that point I was getting 'silent deaths' of the process: no errors, no warnings, just the process terminating very quietly.
I got more stability and performance running several JVMs (each with under 1 GB of RAM) on the same system.
I found some info on JNI memory management here, and here's the JVM JNI section on memory management.
Well, having a 3 GB user address space instead of 2 GB should help, but if you're having problems running out of swap space at 2 GB, I think 3 GB is just going to make it worse. How big is your pagefile? Is it maxed out?
You can get a better idea of your heap allocation by hooking up JConsole to your JVM.
How is the memory shared between the JVM and the native process?
Sun's JVM's garbage collector is mark-and-sweep, with options to enable concurrent and incremental GC.
Well, more accurately, it's staged, and the above only applies to tenured (long-lived) objects. For young objects, GC is still done with a stop-and-copy collector, which is much better for working with short-lived objects (and all typical Java programs create many short-lived objects).
A copying collector walks over all elements in the heap, copying them to a new heap if they are referenced, and then discards the former heap. Thus 1M of live objects requires up to 2M of real memory: if every object is alive, there will be two copies of everything during garbage collection.
So the JVM requires far more system memory than is available to the code running within the VM, because there is a substantial overhead to management and garbage collection.
Does the Windows /3GB switch have any effect with native processes and Sun JVM?
The /3GB allows user virtual memory address space to be 3GB, but only for executables whose headers are marked with IMAGE_FILE_LARGE_ADDRESS_AWARE. As far as I am aware, Sun's java.exe is not. I don't have a Windows system here, so I can't verify.
You haven't explained your problem well enough, unfortunately. The real question is: why is the Java process growing so much? Do you have a memory leak? Do you have a real reason to have that much data in the JVM?
Is the C++ library allocating its own memory from the native (C) heap, or is it allocating memory from the Java object space, or is it doing something else entirely?
