We have a web application deployed on a Tomcat server. There are certain scheduled jobs that we run, after which the heap memory peaks and then settles down; everything seems fine.
However, the system admin is complaining that memory usage ('top' on Linux) keeps increasing the more scheduled jobs are run.
What is the correlation between the heap memory and the process memory reported by the OS? Can it be controlled by any JVM settings? I used JConsole to monitor the system.
I forced garbage collection through JConsole and the heap usage came down; however, the memory usage on Linux remained high and never decreased.
Any ideas or suggestions would be of great help.
The memory allocated by the JVM process is not the same as the heap size. The used heap size can go down without an actual reduction in the space allocated by the JVM. The JVM has to receive a trigger indicating it should shrink the heap size. As @Xepoch mentions, this is controlled by -XX:MaxHeapFreeRatio.
However, the system admin is complaining that memory usage ('top' on Linux) keeps increasing the more scheduled jobs are run.
That's because you very likely have some sort of memory leak. System admins tend to complain when they see processes slowly chew up more and more space.
Any ideas or suggestions would be of great help.
Have you looked at the number of threads? Is your application creating its own threads and sending them off to deadlock and wait idly forever?
Are you integrating with any third party APIs which may be using JNI?
What is likely being observed is the virtual size, not the resident set size, of the Java process(es). If you have a goal of a small footprint, you may want to omit -Xms (or any minimum heap size) from the JVM arguments and reduce -XX:MaxHeapFreeRatio= from its default of 70% to a smaller number to allow more aggressive heap shrinkage.
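For example, something along these lines (the 512m cap and the ratio values are purely illustrative and would need tuning for your workload):

export JAVA_OPTS="-Xmx512m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30"

With no -Xms the heap starts small, and the lower free-ratio ceiling makes the collector shrink the committed heap more aggressively once the scheduled jobs finish.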
In the meantime, provide more detail about what exactly was observed when you say the Linux memory never decreased. Which metric?
You can use -Xmx and -Xms settings to adjust the size of the heap. With tomcat you can set an environment variable before starting:
export JAVA_OPTS="-Xms256m -Xmx512m"
This initially creates a heap of 256MB, with a max size of 512MB.
Some more details:
http://confluence.atlassian.com/display/CONF25/Fix+'Out+of+Memory'+errors+by+increasing+available+memory
Related
We are running a process with a cache that consumes a lot of memory.
But the number of objects in that cache stays stable during execution, while memory usage keeps growing without limit.
We have run Java Flight Recorder to try to work out what is happening.
In that report, we can see that UsedHeap is about half of UsedSize, and I cannot find any explanation for that.
The JVM exits and dumps an OutOfMemoryError report that you can find here:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/hs_err_pid26210.log
Here is the whole Java Flight Recorder report:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/test.7z
Does anybody know why this OutOfMemoryError is arising?
Maybe I should change the question ... and ask: why are there almost 10 GB of used memory that are not used in the heap?
The log file says this:
# Native memory allocation (mmap) failed to map 520093696 bytes
for committing reserved memory.
So what has happened is that the JVM has requested a ~500MB chunk of memory from the OS via an mmap system call and the OS has refused.
When I looked at more of the log file, it is clear that G1GC itself is requesting more memory, and it looks like it is doing so while trying to expand the heap [1].
I can think of a couple of possible reasons for the mmap failure:
The OS may be out of swap space to back the memory allocation.
Your JVM may have hit the per-process memory limit. (On UNIX / Linux this is implemented as a ulimit.)
If your JVM is running in a Docker (or similar) container, you may have exceeded the container's memory limit.
This is not a "normal" OOME. It is actually a mismatch between the memory demands of the JVM and what is available from the OS.
It can be addressed at the OS level; i.e. by removing or increasing the limit, or adding more swap space (or possibly more RAM).
It could also be addressed by reducing the JVM's maximum heap size. This will stop the GC from trying to expand the heap to an unsustainable size [2]. Doing this may also result in the GC running more often, but that is better than the application dying prematurely from an avoidable OOME.
[1] - Someone with more experience in G1GC diagnosis may be able to discern more from the crash dump, but it looks like normal heap expansion behavior to me. There is no obvious sign of a "huge" object being created.
[2] - Working out what the sustainable size actually is would involve analyzing the memory usage for the entire system, and looking at the available RAM and swap resources and the limits. That is a system administration problem, not a programming problem.
Maybe I should change the question ... and ask: why are there almost 10 GB of used memory that are not used in the heap?
What you are seeing is the difference between the memory that is currently allocated to the heap, and the heap limit that you have set. The JVM doesn't actually request all of the heap memory from the OS up front. Instead, it requests more memory incrementally ... if required ... at the end of a major GC run.
So while the total heap size appears to be ~24GB, the actual memory allocated is substantially less than that.
Normally, that is fine. The GC asks the OS for more memory and adds it to the relevant pools for the memory allocators to use. But in this case, the OS cannot oblige, and G1GC pulls the plug.
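For example, if analysis showed that the host could only sustainably back a smaller heap, the limit could be set explicitly at startup (the 16g figure and the jar name are placeholders, not recommendations):

java -Xmx16g -jar your-app.jar

The GC would then keep the heap within that bound instead of expanding toward ~24GB and tripping over the OS limit.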
I have a Java program that has been running for days; it processes incoming messages and forwards them on.
A problem I noticed today is that the heap size I printed via Runtime.totalMemory() shows only ~200M, but the RES column in the top command shows it is occupying 1.2g of RAM.
The program is not using direct byte buffer.
How can I find out why the JVM is taking this much extra RAM?
Some other info:
I am using openjdk-1.8.0
I did not set any JVM options to limit the heap size; the startup command is simply: java -jar my.jar
I tried a heap dump using jcmd; the dump file size is only about 15M.
I tried pmap, but there seemed to be too much info printed and I don't know which parts of it are useful.
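For reference, a heap figure like the ~200M above is typically obtained with something along these lines (an illustrative sketch, not the exact code from the program in question):

    // Illustrative sketch: printing the JVM's own view of the heap.
    public class HeapStats {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            System.out.printf("total=%dM free=%dM max=%dM%n",
                    rt.totalMemory() / mb,   // heap currently committed by the JVM
                    rt.freeMemory() / mb,    // unused part of the committed heap
                    rt.maxMemory() / mb);    // heap ceiling (-Xmx, or the default)
        }
    }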
The Java Native Memory Tracking tool is very helpful in situations like this. You enable it by starting the JVM with the flag -XX:NativeMemoryTracking=summary.
Then when your process is running you can get the stats by executing the following command:
jcmd [pid] VM.native_memory
This will produce a detailed output listing, e.g., the heap size and metaspace size, as well as native memory allocated outside the heap (thread stacks, GC structures, code cache, and so on).
You can also use this tool to create a baseline to monitor allocations over time.
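For example (replace [pid] with the Java process id, as above; -XX:NativeMemoryTracking must already have been enabled at startup):

jcmd [pid] VM.native_memory baseline
jcmd [pid] VM.native_memory summary.diff

The diff output shows how each native memory category has changed since the baseline was recorded.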
As you will be able to see using this tool, the JVM by default reserves about 1 GB for the metaspace, even though only a fraction of it may be used; this may account for some of the RSS usage you are seeing.
If your heap is not taking much memory, check with a profiling tool how much of your non-heap memory is being used. If that amount is high and does not come down even after a GC cycle, you should probably be looking for a (non-heap) memory leak.
If the non-heap memory is not taking much and everything looks good when you inspect memory with profiling tools, then it is probably the JVM holding on to the memory rather than releasing it.
So you should check, with a profiling tool, whether the GC has been running at all, and what happens when a GC is forced: does the memory come down, does it keep expanding, or what exactly is happening.
JVM process memory and heap memory behave differently. After a GC cycle the JVM measures how much memory is free and how much is used, and decides whether to expand or shrink the heap based on these flags:
-XX:MinHeapFreeRatio=
-XX:MaxHeapFreeRatio=
By default they are set to 40 and 70, and you may be interested in tuning them. This is especially critical in containerized environments.
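For instance, a start command along these lines (all values are illustrative and need tuning for your workload; app.jar stands for your application) makes the JVM shrink the heap more aggressively after collections:

java -Xms256m -Xmx1g -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar app.jar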
You can use VisualVM to monitor what is happening inside your JVM. You can also use JConsole for a first overview; it comes with the JDK itself. If the JDK's bin directory is on your PATH, start it from a terminal with jconsole, then select your application and start monitoring.
When running out of memory, Java 8 running Tomcat 8 never stops with a heap dump. Instead it just hangs as it maxes out memory. The server becomes very slow and unresponsive because of extensive GC as it slowly approaches max memory. The memory graph in JConsole flat-lines after hitting the max. 64-bit Linux / Java version "1.8.0_102" / Tomcat 8.
I have set -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath. Anyone know how to force a heap dump instead of the JVM getting into unresponsive / very slow response mode?
Anyone know how to force a heap dump instead of the JVM getting into unresponsive / very slow response mode?
You need to use -XX:+UseGCOverheadLimit. This tells the GC to throw an OOME (or dump the heap if you have configured that) when the percentage time spent garbage collecting gets too high. This should be enabled by default for a recent JVM ... but you might have disabled it.
You can adjust the "overhead" thresholds at which the collector gives up using -XX:GCTimeLimit=... and -XX:GCHeapFreeLimit=...; see https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
The effect of "overhead" limits is that your application gets the GC failures earlier. Hopefully, this avoids the "death spiral" effect as the GC uses a larger and larger proportion of time to collect smaller and smaller amounts of actual garbage.
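Something along these lines combines the overhead limit with the heap-dump-on-OOME behaviour mentioned in the question (the GCTimeLimit and GCHeapFreeLimit values shown are just the usual defaults written out explicitly, and the dump path is a placeholder):

java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=98 -XX:GCHeapFreeLimit=2 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof ...

Lowering GCTimeLimit (or raising GCHeapFreeLimit) makes the collector give up, and hence the dump happen, sooner.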
The other possibility is that your JVM is taking a very long time to dump the heap. That might occur if the real problem is that your JVM is causing virtual memory thrashing because Java's memory usage is significantly greater than the amount of physical memory.
jmap is the utility that will create a heap dump for any running JVM. This allows you to create a heap dump before a crash.
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr014.html
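A typical invocation looks like this ([pid] and the output path are placeholders):

jmap -dump:live,format=b,file=/tmp/heap.hprof [pid]

The live option forces a full GC first, so only reachable objects end up in the dump.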
It will be a matter of timing, though, to know when you should create it. You can take successive heap dumps and use tools to compare them. I highly recommend the Eclipse Memory Analyzer Tool and its dominator tree view for identifying potential memory issues (https://www.eclipse.org/mat/).
I have an application that runs fine with the JVM option -Xmx65m. However, I want to allow the application to consume more memory, because it has some features that require it. The problem is that if I increase the -Xmx option, the JVM will allocate more memory even for the features that it can handle with only 65 MB.
Is it possible to configure the JVM to only request more memory from the OS when it is running out of options and is about to throw an OutOfMemoryError?
Please add both the min and max memory settings, so that the JVM starts with the minimum required memory and allocates more as and when it is required:
-Xms65m -Xmx512m
Hope this helps.
The JVM reserves the maximum heap size as virtual memory on start-up, but it only uses the amount it needs (even if you set a minimum size, it might not use that much). If it uses a large amount of memory but doesn't need it any more, it can give it back to the OS (but often doesn't, AFAIK).
Perhaps you are not seeing a gradual increase because your maximum is so small. Try it with a maximum of -Xmx1g and watch in jvisualvm how the heap size grows.
Is it possible to configure the JVM to only request more memory from the OS when it is running out of options and is about to throw an OutOfMemoryError?
As the JVM gets close to finally running out of space, it runs the GC more and more frequently, and application throughput (in terms of useful work done per CPU second) falls dramatically. You don't want that happening if you can avoid it.
There is one GC tuning option that you could use to discourage the JVM from growing the heap. The -XX:MinHeapFreeRatio option sets the "minimum percentage of heap free after GC to avoid expansion". If you reduce this from the default value of 40% to (say) 20% the GC will be less eager to expand the heap.
The down-side is that if you reduce -XX:MinHeapFreeRatio, the JVM as a whole will spend a larger percentage of its time running the garbage collector. Go too far and the effect could possibly be quite severe. (Personally, I would not recommend changing this setting at all ... )
We have a weird memory leak problem: a Java process running on Linux has ever-growing swap usage. So naturally we looked at the heap dump and also used a profiler to monitor it over a period of time. We found that:
1) The number of threads does not grow
2) The heap usage does not grow
3) Yet the (VIRT) usage keeps growing (which can become a problem because the system starts to run out of swap space)
Now there are a ton of tools that can dump or monitor the heap, but none for memory outside of the heap. Anyone have any ideas?
PS: this is a remote server; we don't have access to any GUI.
You could be leaking something in native memory, like Sockets. Are there lots of connections happening, and are you closing out the connections in a finally block?
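For example, a pattern like this (purely illustrative; the port and the handle method are made up) guarantees that each socket, and its native file descriptor and buffers, is released even if processing throws:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Illustrative sketch only: try-with-resources closes the socket in all cases,
    // avoiding a slow native-memory / file-descriptor leak.
    public class AcceptLoop {
        public static void main(String[] args) throws IOException {
            try (ServerSocket serverSocket = new ServerSocket(8080)) {  // port is a placeholder
                while (true) {
                    try (Socket socket = serverSocket.accept()) {
                        handle(socket);                                 // hypothetical handler
                    } catch (IOException e) {
                        e.printStackTrace();                            // log and keep serving
                    }
                }
            }
        }

        private static void handle(Socket socket) throws IOException {
            // read the request, forward the message, etc.
        }
    }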
Doesn't the case where 1) the process's heap space does not change but 2) the swap usage does change indicate that some other process on the box might be responsible for sudden growths in memory usage?
In other words, my understanding was that something like swap usage was regulated by the OS - so if a Java process's own heap usage does not change but the swap usage does, that would seem to indicate to me that the problem lies elsewhere, and it just so happens that the OS is choosing your Java process to start eating up swap space.
Or do I have the wrong understanding on swap space?
Do the other parts of JVM memory grow? For example the permgen space?
Do you use native libraries (JNI)?
I'll try to answer by answering another question.
Is it possible that the heap size configured for the JVM is more than the free physical memory you have? Even if you define an initial heap size much smaller than the maximum heap size, once the JVM allocates it all, it will never return it to the OS, even if you garbage collect everything and don't make any more allocations. Don't configure a 1.5 GB max heap on a 1 GB RAM server. Please check that the configured maximum heap size fits within the free RAM you have, together with the other processes, especially if it's a server application. Otherwise, your application will get a lot of page faults and will swap all the time.
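As a purely illustrative example of that sizing advice (the numbers and jar name are placeholders): on a server with 1 GB of RAM that also runs other processes, starting with something like the following leaves headroom for the OS instead of promising the JVM more memory than the machine has:

java -Xms128m -Xmx512m -jar app.jar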