Java not releasing unused heap memory

I am facing an issue where my Java application's heap grows as the number of requests to the application increases, but the JVM then does not release the unused heap memory back to the OS.
Here is the description:
My Java application starts with a heap of 200MB, of which around 100MB is in use.
As the number of requests increases, heap usage goes up to 1GB.
Once request processing is finished, the used heap drops back to normal, but the total (now mostly unused/free) heap space remains at 1GB.
I have tried the -XX:-ShrinkHeapInSteps, -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio JVM arguments, but they did not solve this.
Note: if I run the garbage collector manually, it lowers the unused heap memory as well.
Please suggest how we can lower the unused heap memory.

The unused heap will not be returned if -Xms is high: the heap never shrinks below -Xms, so -Xms essentially overrides MinHeapFreeRatio/MaxHeapFreeRatio. There are other factors to consider as well; with Parallel GC, for example, you generally can't shrink the heap, as that collector does not support it.
Also, the JVM can typically only relinquish memory to the OS after a full GC, and only if Parallel GC is not used.
So, essentially, not much can be done here. The JVM doesn't relinquish memory to the OS in order to avoid having to re-commit it later. Committing memory is expensive work, so the JVM holds on to it for some time, and because memory management is controlled by the JVM, it is not always possible to force things here.
One downside of reducing the heap size is that the JVM will have to re-commit the memory over and over as new requests come in, so those clients will see higher latency. If the memory is already committed, however, the next stream of clients will see lower latency, so your amortized performance will be better.
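A minimal sketch of the behaviour noted above (whether the committed heap actually shrinks depends on the collector and flags in use; G1, for instance, honours -XX:MinHeapFreeRatio/-XX:MaxHeapFreeRatio on a full GC, and a low -Xms is assumed):

    public class HeapShrinkDemo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            byte[][] blocks = new byte[64][];
            for (int i = 0; i < blocks.length; i++) {
                blocks[i] = new byte[8 * 1024 * 1024]; // grow used heap by ~512MB total
            }
            System.out.printf("after allocation: committed=%dMB used=%dMB%n",
                    rt.totalMemory() >> 20, (rt.totalMemory() - rt.freeMemory()) >> 20);
            blocks = null; // drop all references so the arrays become garbage
            System.gc();   // request a full GC; committed heap may shrink afterwards
            System.out.printf("after manual GC:  committed=%dMB used=%dMB%n",
                    rt.totalMemory() >> 20, (rt.totalMemory() - rt.freeMemory()) >> 20);
        }
    }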


Major GC decreasing performance

We are having frequent outages in our app: the heap grows over time to the point where the GC takes a lot of CPU time and runs for several minutes, degrading the app's performance drastically. The app is built on JSF and runs on a Tomcat server.
In the meantime, we have:
Increased the heap size from 15G to 26G (-Xms27917287424 -Xmx27917287424)
Taken several heap dumps (we are trying to determine the problem using these)
Enabled GC logging
With the heap size increase, the GC does not run for as long, but it still takes a lot of CPU and freezes the app.
So the question is:
Is this normal? When the GC executes it frees memory, so I think this probably isn't a memory leak (am I right?)
Is there a way to optimize the GC, or is this behavior just a symptom of something wrong in the app itself?
How can I monitor and analyze this without taking a heap dump?
UPDATE:
I upgraded JSF from 2.2 to 2.3 because some heap dumps suggested that JSF was using a lot of memory.
That didn't work out, and yesterday we had an outage again, but this time a little different (from my point of view). This time we also had to restart Tomcat, because the app stopped working after a while.
In this case, the garbage collector runs even when the old-gen heap is not full, and the new-generation GC runs all the time.
What can be the cause of this?
As has been said in the comments, the behaviour of the application does not look unreasonable. Your code is continually allocating objects, which fills up the heap and causes the GC to run. There does not appear to be a memory leak, since GC reclaims a lot of space and the overall used space is not continually increasing.
What does appear to be an issue is that a significant number of objects are being promoted to the old gen before being collected. Major GC cycles are more expensive in terms of CPU due to the relocation and remapping of objects (assuming you're using a compacting algorithm).
To reduce this, you could try increasing the size of the young generation. It will have grown when you increased the overall heap size, but perhaps not by enough. Ideally, you want the majority of objects to be collected during a minor GC cycle, since this is effectively free (the GC does nothing to objects in Eden space when they are collected). You can do this with the -XX:NewRatio= or -XX:NewSize= flags. You could also try changing the survivor space sizes, again to increase the number of objects collected before tenuring (use the -XX:SurvivorRatio= flag for this).
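For example, a hypothetical command line along these lines (the sizes are purely illustrative, not a recommendation, and must be validated against your own GC logs):

    java -Xms26g -Xmx26g \
         -XX:NewSize=8g -XX:MaxNewSize=8g \
         -XX:SurvivorRatio=6 \
         -XX:+PrintGCDetails -Xloggc:/var/log/app/gc.log \
         -jar app.jar

(-XX:+PrintGCDetails/-Xloggc is the JDK 8 syntax; on JDK 9+ use -Xlog:gc*:file=gc.log instead.)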
For monitoring, I find Flight Recorder and Mission Control very useful, as you can drill down into details of how many objects of specific types are allocated. It's also easy to connect to a running JVM or take recordings for later analysis.
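If it helps, a recording can be started on a running JVM with jcmd (the name, duration and path below are illustrative; on Oracle JDK 8, Flight Recorder additionally requires -XX:+UnlockCommercialFeatures -XX:+FlightRecorder):

    jcmd <pid> JFR.start name=alloc settings=profile duration=5m filename=/tmp/alloc.jfr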

Profiling JVM: Committed vs Used vs Free Memory

I'm profiling a Java application deployed to a Jetty server, using JProfiler.
After a while, I'm getting this memory telemetry:
On the right side is the total memory of this Java process in Windows Task Manager.
I see periodic increases in the committed memory in JProfiler, although most of the time most of this memory is free (green). Why does the committed memory increase like this?
At the point in time when the image above was taken, the committed memory in JProfiler showed 3.17GB, but Windows Task Manager showed much more: 4.2457GB. Isn't it the same memory they both show? What might be the reason for this difference?
If the peak memory usage approaches the total committed memory size, the JVM will increase the committed memory (the memory that has actually been reserved by the OS for the process) as long as your -Xmx value allows it.
This is a little like filling an ArrayList. When the backing array is exhausted, it's enlarged in larger and larger steps, so that it does not have to be resized for each insert.
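A minimal sketch of reading these same quantities from inside the JVM (used <= committed <= max, the numbers JProfiler plots):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class CommittedVsUsed {
        public static void main(String[] args) {
            // Heap usage as the JVM itself reports it.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }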
As for the difference between the Task Manager figure and the heap size of the JVM: the memory shown in the Task Manager is always larger than the heap size and is generally difficult to interpret. See here for an explanation of the different measures:
https://technet.microsoft.com/en-us/library/ff382715.aspx

java.lang.OutOfMemoryError GC overhead limit exceeded vs Java heap space?

What java.lang.OutOfMemoryError: Java heap space means
That message means the application requires more Java heap space than is available to it to operate normally.
What java.lang.OutOfMemoryError: GC overhead limit exceeded means
This message means that, for some reason, the garbage collector is taking an excessive amount of time (by default 98% of all CPU time of the process) and recovering very little memory in each run (by default 2% of the heap). Internally, this also means the application requires more Java heap space than is available to it to operate normally.
So my question is which scenario out of the above two will be triggered?
So here is my understanding of when each specific error will be thrown in a given scenario:
Say I have allocated a 1GB heap, of which 970MB is currently in use. A thread starts (the JVM does not know upfront how much memory it will consume).
Now the JVM can take one of the steps below:
1) The JVM keeps allocating memory until, at some point, it exhausts the 1GB heap and throws java.lang.OutOfMemoryError: Java heap space.
2) The GC runs in advance and tries to free some memory, as it knows that the memory currently in use is close to the 1GB allocated heap, but it is not able to free more than 2% of the space in each subsequent run. It then throws java.lang.OutOfMemoryError: GC overhead limit exceeded.
Is my understanding correct in the context of my question?
OutOfMemoryError: Java heap space
There is no way for the JVM to satisfy an allocation request, even after performing all last-ditch efforts at its disposal.
OutOfMemoryError GC overhead limit exceeded
Means that the JVM might be able to satisfy an allocation request, but in the recent past it has had to GC so often that the amount of CPU time spent on GC exceeded a (configurable) fraction of the overall CPU time used by the Java process.
The JVM self-terminates instead of lingering in a half-working, highly inefficient state that might only get worse over time.
Often disabling the GC overhead OOM would just result in a Java heap space OOM a few minutes later.
It's basically a fail-fast mechanism.
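The thresholds involved are HotSpot flags; the values below are the defaults (GCTimeLimit/GCHeapFreeLimit apply to the parallel collector, and, as noted above, disabling the check usually just defers a "Java heap space" error):

    # keep the fail-fast check, with the default thresholds spelled out
    java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -jar app.jar

    # disable it (rarely a good idea)
    java -XX:-UseGCOverheadLimit -jar app.jar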
java.lang.OutOfMemoryError: Java heap space
Cause: Object could not be allocated in the Java heap. This error does not necessarily imply a memory leak. The problem can be as simple as a configuration issue, where the specified heap size (or the default size, if it is not specified) is insufficient for the application.
java.lang.OutOfMemoryError: GC Overhead limit exceeded
As you quoted, garbage collection is taking excessive time. It may be a side effect of a memory leak in your application. The old gen may be completely full due to a leak, and hence the GC releases no (or very little) memory in each garbage collection cycle.
Have a look at this Oracle article to troubleshoot different kinds of memory leaks.
Regarding your two scenarios, I too think your understanding is correct, with one difference: the event that triggers GC in the second case is not just new object creation; a full GC is triggered under particular conditions. Have a look at this SE question.
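To see the second error concretely, here is a sketch that typically triggers it when run with a small heap and the parallel collector (e.g. java -Xmx64m -XX:+UseParallelGC GcOverheadDemo; the exact error you get depends on the JVM and collector):

    import java.util.HashMap;
    import java.util.Map;

    public class GcOverheadDemo {
        public static void main(String[] args) {
            Map<Integer, String> map = new HashMap<>();
            for (int i = 0; ; i++) {
                // All entries stay reachable, so each GC run reclaims almost nothing.
                map.put(i, String.valueOf(i));
            }
        }
    }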

-Xmx attribute and available system memory correlation

I have a question on my mind. Let's assume that I have two parameters passed to JVM:
-Xms256m -Xmx1024m
At the beginning of the program 256MB is allocated. Next, some objects are created and the JVM process tries to allocate more memory. Let's say the JVM needs to allocate 800MB. The -Xmx attribute allows that, but the memory currently available on the system (let's say Linux/Windows) is only 600MB. Is it possible that an OutOfMemoryError will be thrown? Or will the swap mechanism play a role?
My second question is related to the quality of GC algorithms. Let's say I have jdk1.5u7 and jdk1.5u22. Is it possible that with the latter JVM the memory leaks vanish and the OutOfMemoryError does not occur? Can the quality of the GC be better in the latest version?
The quality of the GC (barring a buggy GC) does not affect memory leaks, as memory leaks are an artifact of the application -- the GC can't collect what isn't actually garbage.
If a JVM needs more memory, it will take it from the system. If the system can swap, it will swap (like any other process). If the system cannot swap, your JVM will fail with a system error, not an OOM exception, because the system cannot satisfy the request, and at that point it's effectively fatal.
As a rule, you NEVER want an active JVM to be partially swapped out. A GC event will crush you as the system thrashes, cycling pages through the virtual memory system. It's one thing to have an idle background JVM swapped out as a whole, but if your machine has 1GB of RAM and your main process wants 1.5GB, then you have a major problem.
The JVM likes room to breathe. I've seen JVMs in a GC death spiral when they didn't have enough memory, even though they didn't have memory leaks. They simply didn't have enough working set. Adding another chunk of heap transformed those JVMs from awful to happy sawtooth GC graphs.
Give a JVM the memory it needs, and both you and it will be much happier.
"Memory" and "RAM" aren't the same thing. Memory includes virtual memory (swap), so you can allocate a total of free RAM+ free swap before you get the OutOfMemoryError.
Allocation behavior depends on the OS used.
If you allocate too much memory, portions of the heap may end up in swap, which is slow.
Whether your program runs faster or slower depends on how the VM handles the memory.
I would not specify too big a heap, to make sure it doesn't occupy all the RAM and force the VM to swap.
Concerning your first question:
Actually, if the machine cannot allocate the 1024MB you asked for as the max heap size, it will not even start the JVM.
I know this because I often noticed it when trying to open Eclipse with a large heap size: when the OS could not allocate the larger heap space, the JVM failed to load. You can try it out yourself to confirm, so the rest of the details are irrelevant to you. Of course, if your program uses too much swap (as in any language), the performance will be horrible.
Concerning your second question:
the memory leaks vanish
Not possible, as they are bugs you will have to fix.
and OutOfMemoryError does not occur? Can the quality of GC be better
in the latest version?
This could happen if, for example, a different GC algorithm is used and it manages to kick in before you see the exception. But if you have a memory leak, this would probably only mask it, or you would see the error intermittently.
Also, various JVMs have different GCs you can configure.
Update:
I have to admit (after seeing @Orochi's note) that I observed the max-heap behavior on Windows. I cannot say for sure that this applies to Linux as well, but you can try it yourself.
Update 2:
In answer to the comments of @DennisCheung:
From IBM (my emphasis):
The table shows both the maximum Java heap possible and a recommended limit for the maximum Java heap size setting ... It is important to have more physical memory than is required by all of the processes on the machine combined, to prevent paging or swapping. Paging reduces the performance of the system and affects the performance of the Java memory management system.

Does the JVM force garbage collection when it reaches its -Xmx limit?

The question is basically contained in the title.
Say you have an application that has reached its JVM -Xmx limit. When that application requires more memory, is garbage collection forced? (In the HotSpot JVM.)
A second odd thing I can't explain: I currently have an application server that is run with -Xmx2048m, yet the "top" command (on Linux) reports 2.7g for its process.
So how/when is an application allowed to exceed its -Xmx?
Thanks,
Actually, a minor GC is triggered when the young generation is full (not the whole heap), and a major GC is triggered when there is no space left in the survivor spaces and objects need to be promoted to the old generation.
The -Xmx parameter only specifies the size of the heap. The Java process takes more memory, since the heap is only one part of the Java process; there are other things the process contains as well, such as native libraries, the perm gen, and native memory allocations made by the application.
Here's a nice article describing memory allocation:
http://www.ibm.com/developerworks/java/library/j-nativememory-linux/
Yes, the JVM will certainly call the GC if it reaches the heap limit (and probably much sooner). If this doesn't help, it will throw OutOfMemoryErrors.
The reason why you are seeing a bigger process memory consumption is that the -Xmx option only limits the Java heap space (where the Java objects are allocated). There are several other memory regions that the JVM uses additionally: space for thread stacks, the "PermGen" (where the classes and their code are stored), "direct" memory allocated via ByteBuffers, memory allocated by native libraries, etc. For some of these additional memory regions there are other configuration options that allow limiting them, for example -Xss, but some are even outside the control of the JVM.
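One way to see where the non-heap memory goes (JDK 8+; note that Native Memory Tracking covers the JVM's own allocations, not everything a native library might allocate) is:

    # start the JVM with tracking enabled (adds some overhead)
    java -XX:NativeMemoryTracking=summary -Xmx2048m -jar app.jar

    # then ask the running JVM for the breakdown
    jcmd <pid> VM.native_memory summary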
This is usually the case, although GC is normally triggered much sooner, depending on the garbage collector you use.
Yes, and if it still cannot find memory after that, it will raise an OutOfMemoryError. That is how I understand it.
IIRC the guarantee is that a full GC will be performed before an OutOfMemoryError is thrown. Since exceeding the heap size limit must result in such an error, that implies you'll always have at least one full GC run when the limit is reached.
Garbage collection is quite a large area, but what you say is correct for full collections (there are other types).
One thing to be aware of is that -Xmx sets the maximum heap size, but there is also -Xms, which is the minimum heap size. Your application may start with only the minimum configured. Then, if the memory used reaches that, it will trigger a full garbage collection AND increase the amount of heap available, from the minimum (-Xms) up to some value less than or equal to the maximum (-Xmx). This can happen several times, until the maximum is reached. After that, the heap cannot be increased anymore, but garbage collections will continue to happen when that maximum is reached.
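A minimal sketch of watching this growth from inside the JVM (run with, say, -Xms64m -Xmx512m; the figures are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    public class HeapGrowthDemo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            List<byte[]> retained = new ArrayList<>();
            for (int i = 0; i < 40; i++) {
                retained.add(new byte[10 * 1024 * 1024]); // retain ~10MB per step
                // totalMemory() = committed heap, maxMemory() = the -Xmx ceiling
                System.out.printf("committed=%dMB max=%dMB%n",
                        rt.totalMemory() >> 20, rt.maxMemory() >> 20);
            }
        }
    }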
