I'm profiling a Java application deployed to Jetty server, using JProfiler.
After a while, I'm getting this memory telemetry:
On the right side is the total memory of this Java process as shown in Windows Task Manager.
I see periodic increases in the Committed Memory in JProfiler, although most of the time most of this memory is Free (green). Why does the committed memory increase like this?
At the point in time when the image above was taken, the Committed Memory in JProfiler shows 3.17 GB, but Windows Task Manager shows a much higher value: 4.2457 GB. Aren't they both showing the same memory? What might be the reason for this difference?
If the peak memory usage approaches the total committed memory size, the JVM will increase the committed memory (the memory that has actually been reserved by the OS for the process) as long as your -Xmx value allows it.
This is a little like filling an ArrayList. When the backing array is exhausted, it's enlarged in larger and larger steps, so that it does not have to be resized for each insert.
As for the difference between the task manager and the heap size of the JVM, the memory in the task manager is always larger than the heap size and is generally difficult to interpret. See here for an explanation of the different measures:
https://technet.microsoft.com/en-us/library/ff382715.aspx
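If you want to cross-check these numbers from inside the process, the standard java.lang.management API exposes the same used/committed/max figures that JProfiler plots. A minimal sketch (the class name is just a placeholder, not part of your Jetty application):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapTelemetry {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memoryBean.getHeapMemoryUsage();
                // "committed" is what the JVM has actually obtained from the OS for the heap;
                // "max" corresponds to -Xmx; committed grows towards max in steps as needed.
                System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(5000);
            }
        }
    }

The Task Manager figure will still be larger than the committed heap, because it also includes the non-heap and native allocations of the process.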
I am facing an issue where my Java application's heap increases as the number of requests to the application increases, but it then does not release the unused heap memory.
Here is the description:
My Java application starts with a heap of 200 MB, of which around 100 MB is in use.
As the number of requests increases, heap usage goes up to 1 GB.
Once request processing is finished, the used heap drops back to normal, but the unused/free heap space remains at 1 GB.
I have tried the -XX:-ShrinkHeapInSteps, -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio JVM arguments, but this did not solve the problem.
Note: if I run the garbage collector manually, it also lowers the unused heap memory.
Please suggest how we can lower the unused heap memory.
The heap will not shrink if -Xms is high; -Xms essentially overrides the free-ratio settings. There are other factors to consider as well: with the Parallel GC you cannot shrink the heap, as that collector does not allow it.
Also, the JVM can only relinquish memory after a full GC, and only if the Parallel GC is not used.
So essentially, not much can be done here. The JVM avoids returning memory to the OS so that it does not have to re-allocate it later. Memory allocation is expensive work, so the JVM will hold on to that memory for some time, and since memory management is controlled by the JVM, it is not always possible to force things here.
One downside of shrinking the heap would be that Java has to re-grow the memory space over and over as requests come in, so clients would always see somewhat higher latency. However, if the memory space is already created, the next stream of clients will see lower latency, so essentially your amortized performance will increase.
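If you want to see for yourself whether your particular JVM/GC combination ever gives committed heap back, a rough sketch of such an experiment follows. The class name and flag values are only examples, and whether the committed size actually drops after the explicit GC depends on the collector in use:

    // Example launch (illustrative values only):
    //   java -Xms64m -Xmx2g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 ShrinkCheck
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.util.ArrayList;
    import java.util.List;

    public class ShrinkCheck {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            List<byte[]> junk = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                junk.add(new byte[1024 * 1024]);   // grow the heap by roughly 1 GB
            }
            System.out.println("committed while full: "
                    + (memoryBean.getHeapMemoryUsage().getCommitted() >> 20) + " MB");
            junk = null;        // drop all references
            System.gc();        // explicit full GC, as mentioned in the question
            System.out.println("committed after GC:  "
                    + (memoryBean.getHeapMemoryUsage().getCommitted() >> 20) + " MB");
        }
    }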
We are running a process that has a cache that consumes a lot of memory.
But the number of objects in that cache stays stable during execution, while memory usage keeps growing without limit.
We have run Java Flight Recorder in order to try to figure out what is happening.
In that report, we can see that UsedHeap is about half of UsedSize, and I cannot find any explanation for that.
The JVM exits and dumps an OutOfMemory report that you can find here:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/hs_err_pid26210.log
Here is the whole Java Flight Recorder report:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/test.7z
Does anybody know why this OutOfMemory error is arising?
Maybe I should change the question ... and ask: why are there almost 10 GB of used memory that is not used in the heap?
The log file says this:
# Native memory allocation (mmap) failed to map 520093696 bytes
for committing reserved memory.
So what has happened is that the JVM has requested a ~500MB chunk of memory from the OS via an mmap system call and the OS has refused.
Looking at more of the log file, it is clear that G1GC itself is requesting more memory, and it looks like it is doing so while trying to expand the heap [1].
I can think of a couple of possible reasons for the mmap failure:
The OS may be out of swap space to back the memory allocation.
Your JVM may have hit the per-process memory limit. (On UNIX / Linux this is implemented as a ulimit.)
If your JVM is running in a Docker (or similar) container, you may have exceeded the container's memory limit.
This is not a "normal" OOME. It is actually a mismatch between the memory demands of the JVM and what is available from the OS.
It can be addressed at the OS level; i.e. by removing or increasing the limit, or adding more swap space (or possibly more RAM).
It could also be addressed by reducing the JVM's maximum heap size. This will stop the GC from trying to expand the heap to an unsustainable size [2]. Doing this may also result in the GC running more often, but that is better than the application dying prematurely from an avoidable OOME.
[1] - Someone with more experience in G1GC diagnosis may be able to discern more from the crash dump, but it looks like normal heap expansion behavior to me. There is no obvious sign of a "huge" object being created.
[2] - Working out what the sustainable size actually is would involve analyzing the memory usage of the entire system, and looking at the available RAM, swap resources and limits. That is a system administration problem, not a programming problem.
Maybe I should change the question ... and ask: why are there almost 10 GB of used memory that is not used in the heap?
What you are seeing is the difference between the memory that is currently allocated to the heap, and the heap limit that you have set. The JVM doesn't actually request all of the heap memory from the OS up front. Instead, it requests more memory incrementally ... if required ... at the end of a major GC run.
So while the total heap size appears to be ~24GB, the actual memory allocated is substantially less than that.
Normally, that is fine. The GC asks the OS for more memory and adds it to the relevant pools for the memory allocators to use. But in this case, the OS cannot oblige, and G1GC pulls the plug.
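A quick way to see the distinction between the heap limit and the memory actually allocated so far is the Runtime API; a small sketch (the numbers will of course differ on your system):

    public class HeapGrowth {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory() is the -Xmx limit; totalMemory() is what the JVM has
            // actually obtained from the OS for the heap so far.
            System.out.printf("max=%d MB, currently allocated=%d MB, free within that=%d MB%n",
                    rt.maxMemory() >> 20, rt.totalMemory() >> 20, rt.freeMemory() >> 20);
        }
    }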
The problem that I faced is that my application's used memory is only 100 MB, and it then decreased to 50 MB, but Windows Task Manager showed 150 MB, which always stays the same or increases but never decreases.
How can we reduce the Memory (Private Working Set) shown in Task Manager?
What you are seeing in JConsole (or other monitoring tools) is the pattern in which the Java memory is being used.
The memory of the JVM is usually divided among these areas (which is also what you see in monitoring tools):
Heap memory, which is for Java objects.
Non-heap memory, which is where Java stores loaded classes, metadata and the JVM code.
Native memory, which is a part of memory reserved for DLLs and the native code of the JVM (very low level). Sometimes you can get an OOM in this area while you still have enough heap memory (because as you increase the max heap size, you reduce the native memory available).
The Windows Task Manager does not show that breakdown. It shows the whole memory used by your application (heap + non-heap + native).
Also note that when a process requests more memory from the OS, that memory is usually kept by the process even when the application "frees" it; those memory pages have been mapped into the process's address space. So in Task Manager you would not see a pattern of decreasing memory, but that does not indicate a memory leak in your application.
So you cannot reduce the memory you see in Task Manager, but the memory you see in the monitoring tool should decrease at some point; otherwise that could indicate a memory leak.
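To see the heap and non-heap parts separately from inside the application (rather than the single total that Task Manager shows), something like the following sketch can be used; native memory outside the JVM's own bookkeeping will still not appear here:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class MemoryAreas {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            // Heap: Java objects. Non-heap: loaded classes, metadata, JIT-compiled code.
            System.out.println("heap:     " + memoryBean.getHeapMemoryUsage());
            System.out.println("non-heap: " + memoryBean.getNonHeapMemoryUsage());
        }
    }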
I want to find a memory leak in a Java 1.5 application. I use JProfiler for profiling.
Using Windows Task Manager, I see that the VM size for my application is about 790,000 KB (increased from approx. 300,000 KB). In the profiler I see that the allocated heap is 266 MB (also increasing).
It's probably a rookie question, but what else can occupy so much memory besides the heap, so that the VM size (or private bytes) goes to approx. 700 MB?
I should mention that there are approx. 1200 threads running, which, according to an answer here, can occupy quite some memory, but I think there is still some gap up to 700 MB. By the way, how can I see how much memory the thread stacks occupy?
Thanks.
The JVM can use a lot of virtual memory which may not use resident memory. On startup it allocates the heap and maps in its shared libraries. Loaded classes use PermGen space. An application can use direct memory, which can be as large as the maximum heap. As each thread is created, a stack is allocated for it. In each case, until this memory is used, it might not actually be allocated to the application, i.e. it does not use physical memory. As the application warms up, more of the virtual memory can become physical memory.
If you believe your JVM is not running efficiently, the first thing I would try is Java 6 which has had many fixes and improvements since the last release of Java 5.0.
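Regarding the question of how much memory the thread stacks occupy: there is no standard API that reports it directly, but a rough estimate is the live thread count multiplied by the stack size (-Xss, or the platform default). A hedged sketch, assuming 512 KB per stack purely as an example value:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class StackEstimate {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long assumedStackSizeBytes = 512 * 1024;  // assumption: adjust to your -Xss / platform default
            long estimateMb = (threads.getThreadCount() * assumedStackSizeBytes) >> 20;
            System.out.println(threads.getThreadCount() + " threads, roughly "
                    + estimateMb + " MB reserved for stacks (virtual, not necessarily resident)");
        }
    }

With ~1200 threads, even a modest per-thread stack adds up to several hundred MB of reserved address space.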
We have a weird memory leak problem with a Java process running on Linux: it has ever-growing swap usage. So naturally we looked at the heap dump and also used a profiler to monitor it over a period of time. We found that:
1) The number of threads does not grow
2) The heap usage does not grow
3) Yet the (VIRT) usage keeps growing (which can become a problem because the system starts to run out of swap space)
Now there are a ton of tools that can dump or monitor the heap, but none for memory outside of the heap. Does anyone have any ideas?
PS: this is a remote server; we don't have access to any GUI.
You could be leaking something in native memory, like Sockets. Are there lots of connections happening, and are you closing out the connections in a finally block?
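For completeness, a sketch of the pattern being suggested: making sure each connection is closed even when request handling throws. On Java 7+ a try-with-resources block has the same effect as an explicit close() in finally; the names here are illustrative, not from the original application:

    import java.io.IOException;
    import java.net.Socket;

    public class ConnectionHandler {
        // Try-with-resources guarantees the socket (and its native file descriptor) is closed,
        // equivalent to closing it in a finally block on older Java versions.
        void handle(String host, int port) throws IOException {
            try (Socket socket = new Socket(host, port)) {
                // ... read/write using socket.getInputStream() / socket.getOutputStream() ...
            } // socket.close() is called here automatically
        }
    }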
Doesn't the case where 1) the process's heap space does not change but 2) the swap usage does change indicate that some other process on the box might be responsible for sudden growths in memory usage?
In other words, my understanding was that something like swap usage was regulated by the OS - so if a Java process's own heap usage does not change but the swap usage does, that would seem to indicate to me that the problem lies elsewhere, and it just so happens that the OS is choosing your Java process to start eating up swap space.
Or do I have the wrong understanding on swap space?
Do the other parts of JVM memory grow? For example the permgen space?
Do you use native libraries (JNI)?
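One way to check whether those other parts of JVM memory (such as PermGen or the code cache) are the ones growing is to dump the individual memory pools periodically; a small sketch (pool names differ between JVM versions, e.g. "PS Perm Gen" vs "Metaspace"):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class PoolUsage {
        public static void main(String[] args) {
            // Lists pools such as the heap generations, PermGen/Metaspace and the code cache.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.println(pool.getName() + " (" + pool.getType() + "): " + pool.getUsage());
            }
        }
    }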
I'll try to answer by asking another question.
Is it possible that the configured heap size of the JVM is larger than the free physical memory you have? Even if you define an initial heap size much smaller than the maximum heap size, once the JVM has allocated it all, it will never return it to the OS, even if you garbage collect everything and don't have any allocations anymore. Don't configure a 1.5 GB max heap on a server with 1 GB of RAM. Please check that the configured maximum heap size fits within the free RAM you have, together with the other processes, especially if it's a server application. Otherwise, your application will get a lot of page faults and will swap all the time.
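A quick sanity check along these lines is to compare the configured maximum heap with the machine's physical memory. The sketch below uses the com.sun.management extension, which is HotSpot-specific (and the method was renamed to getTotalMemorySize() in newer JDKs), so treat it as an assumption about your JVM:

    import java.lang.management.ManagementFactory;

    public class HeapVsRam {
        public static void main(String[] args) {
            long maxHeap = Runtime.getRuntime().maxMemory();
            // HotSpot-specific bean; on recent JDKs use getTotalMemorySize() instead.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            long physical = os.getTotalPhysicalMemorySize();
            System.out.printf("max heap=%d MB, physical RAM=%d MB%n", maxHeap >> 20, physical >> 20);
            if (maxHeap > physical) {
                System.out.println("Warning: the configured max heap does not fit in physical RAM.");
            }
        }
    }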