Hi, my JVM configuration is:
Xmx = 4096
Xms = 1024
In my monitoring tools I always see a committed heap of about 2 GB, and my server runs into OutOfMemory errors.
I don't understand why the committed heap stays at 2 GB and does not grow up to the 4 GB maximum heap.
Why is the free heap not used, and why does my server end up with an OutOfMemory exception?
NB:
My application server is WebSphere 8.5.
My server runs 64-bit Linux.
Thanks in advance
While setting preferredHeapBase may resolve the problem, there could be a number of other reasons for OOM:
A very large memory request could not be satisfied - check the verbose GC output just before the OOM timestamp.
Inadequate user limits - an insufficient ulimit -u (NPROC) value can contribute to native OutOfMemory. Check the user limits (see the example after this list).
The application requiring more than 4 GB of native memory. In that case, -Xnocompressedrefs will resolve the problem (but with a larger Java memory footprint and a performance impact).
There are other reasons, but I find those to be the top hits when diagnosing an OOM while there is plenty of Java heap space free.
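For example, to check the user limits, run something like this as the user that starts the application server and compare the values against what your workload needs (an illustrative sketch, not WebSphere-specific):
ulimit -a    # all limits for the current user
ulimit -u    # max user processes (NPROC)
ulimit -c    # core file size (matters if you want usable core dumps)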
Check this page - Using -Xgc:preferredHeapBase with -Xcompressedrefs. You may be hitting a native out-of-memory error.
Try to set the following flag in JVM arguments:
-Xgc:preferredHeapBase=0x100000000
My Java process is getting killed after some time. The heap settings are min 2 GB and max 3 GB with parallel GC. The pmap command shows more than 40 anonymous 64 MB blocks, which seems to be triggering the Linux OOM killer.
Error:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 71827456 bytes for committing reserved memory.
Possible reasons:
  The system is out of physical RAM or swap space
  In 32 bit mode, the process size limit was hit
Possible solutions:
  Reduce memory load on the system
  Increase physical memory or swap space
  Check if swap backing store is full
  Use 64 bit Java on a 64 bit OS
  Decrease Java heap size (-Xmx/-Xms)
  Decrease number of Java threads
  Decrease Java thread stack sizes (-Xss)
  Set larger code cache with -XX:ReservedCodeCacheSize=
This output file may be truncated or incomplete.
Out of Memory Error (os_linux.cpp:2673), pid=21171, tid=140547280430848
JRE version: Java(TM) SE Runtime Environment (8.0_51-b16) (build 1.8.0_51-b16)
Java VM: Java HotSpot(TM) 64-Bit Server VM (25.51-b03 mixed mode linux-amd64 compressed oops)
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
When we reduce the heap to min 512 MB and max 2 GB along with G1GC, we see a limited number of 64 MB blocks (around 18) and the process does not get killed.
But with a heap of min 2 GB and max 3 GB, and G1GC, we see a high number of 64 MB blocks.
As per the documentation, the maximum number of 64 MB blocks (malloc arenas) on a 64-bit system with 2 cores should be 2 * 8 = 16, but we see more than 16.
This answer tries to deal with your observations about memory blocks, the MALLOC_ARENA_MAX and so on. I'm not an expert on native memory allocators. This is based on the Malloc Internals page in the Glibc Wiki.
You have read PrestoDB issue 8993 as implying that glibc malloc will allocate at most MALLOC_ARENA_MAX x number-of-threads blocks of memory for the native heap. According to "Malloc Internals", this is not necessarily true.
If the application requests a large enough block, the implementation will call mmap directly rather than using an arena. (The threshold is given by the M_MMAP_THRESHOLD option.)
If an existing arena fills up and compaction fails, the implementation will attempt to grow the arena by calling sbrk or mmap.
These factors mean that MALLOC_ARENA_MAX does not limit the number of mmap'd blocks.
Note that the purpose of arenas is to reduce contention when there are lots of threads calling malloc and free. But it comes with the risk that more memory will be lost due to fragmentation. The goal of MALLOC_ARENA_MAX tuning is to reduce memory fragmentation.
So far, you haven't shown us any clear evidence that your memory problems are due to fragmentation. Other possible explanations are:
your application has a native memory leak, or
your application is simply using a lot of native memory.
Either way, it looks like MALLOC_ARENA_MAX tuning has not helped.
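If you want to check whether arena fragmentation really is what you are seeing, one rough way (a sketch; replace <pid> with the Java process id) is to count the ~64 MB anonymous mappings that glibc arenas typically show up as in pmap:
pmap -x <pid> | awk '$2 > 60000 && $2 < 70000' | wc -l    # rough count of mappings around 64 MB in size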
That doesn't look like the Linux OOM killer.
The symptoms you describe indicate that you have run out of physical memory and swap space. In fact, the error message says exactly that:
There is insufficient memory for the Java Runtime Environment to continue. Native memory allocation (mmap) failed to map 71827456 bytes for committing reserved memory. Possible reasons:
The system is out of physical RAM or swap space
In 32 bit mode, the process size limit was hit
A virtual memory system works by mapping the virtual address space to a combination of physical RAM pages and disk pages. At any given time, a given page may live in RAM or on disk. If an application asks for more virtual memory (e.g. using an mmap call), the OS may have to say "can't". That is what has happened.
The solutions are as the message says:
get more RAM,
increase the size of swap space, or
limit the amount of memory that the application asks for ... in various ways.
The G1GC parameters (apart from the max heap size) are largely irrelevant. My understanding is that the max heap size is the total amount of (virtual) memory that the Java heap is allowed to occupy.
So if this is not the Linux OOM killer, what is the OOM killer?
In fact, the OOM killer is a mechanism that identifies applications that are causing dangerous performance problems by doing too much paging. As I mentioned at the start, virtual memory consists of pages that live either in RAM or on disk. In general, the application doesn't know whether any given VM page is RAM-resident or not. The operating system just takes care of it.
If the application tries to use (read from or write to) a page that is not RAM-resident, a "page fault" occurs. The OS handles this by:
suspending the application thread
finding a spare RAM page
reading the disk page into that RAM page
resuming the application thread ... which can then access the memory at the address.
In addition, the operating system needs to maintain a pool of "clean" pages; i.e. pages where the RAM and disk versions are the same. This is done by scanning for pages that have been modified by the application and writing them to disk.
If an application is behaving "nicely", the amount of paging activity is relatively modest and threads don't get suspended often. But if there is a lot of paging, you can get to the point where the paging I/O is a bottleneck. In the worst case, the whole system will lock up.
The OOM killer's purpose is to identify processes that are causing the dangerously high paging rates, and kill them.
If a JVM process is killed by the OOM killer, it doesn't get a chance to print an error message (like the one you got). The process gets a SIGKILL: instant death.
But ... if you look in the system logfiles, you should see a message that says that such and such process has been killed by the OOM killer.
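For example (the exact wording of the log message varies by kernel version):
dmesg | grep -i -E 'killed process|out of memory'
journalctl -k | grep -i -E 'killed process|out of memory'    # on systemd-based systems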
There are lots of resources that explain the OOM killer:
What killed my process and why?
"How to configure the Linux Out-ofMemory Killer"
"Out of memory management"
Scenario:
I have a JVM running in a Docker container. I did some memory analysis using two tools: 1) top, 2) Java Native Memory Tracking. The numbers look confusing and I am trying to find what's causing the differences.
Question:
The RSS is reported as 1272 MB for the Java process and the total Java memory is reported as 790.55 MB. How can I explain where the rest of the memory, 1272 - 790.55 = 481.45 MB, went?
Why I want to keep this issue open even after looking at this question on SO:
I did see that answer and the explanation makes sense. However, after getting output from Java NMT and pmap -x, I am still not able to concretely map which Java memory addresses are actually resident and physically mapped. I need some concrete explanation (with detailed steps) to find what's causing this difference between RSS and the Java total committed memory.
(Screenshots of the top output, the Java NMT summary and the docker memory stats are omitted here.)
Graphs
I have had a docker container running for more than 48 hours. Now, when I look at a graph which shows:
Total memory given to the docker container = 2 GB
Java Max Heap = 1 GB
Total committed (JVM) = always less than 800 MB
Heap Used (JVM) = always less than 200 MB
Non Heap Used (JVM) = always less than 100 MB.
RSS = around 1.1 GB.
So, what's eating the memory between the 1.1 GB (RSS) and the 800 MB (Java total committed memory)?
You have some clues in "Analyzing java memory usage in a Docker container" from Mikhail Krestjaninoff:
(And to be clear, in May 2019, three years later, the situation does improve with OpenJDK 8u212.)
Resident Set Size is the amount of physical memory currently allocated and used by a process (without swapped out pages). It includes the code, data and shared libraries (which are counted in every process which uses them)
Why does docker stats info differ from the ps data?
Answer for the first question is very simple - Docker has a bug (or a feature - depends on your mood): it includes file caches into the total memory usage info. So, we can just avoid this metric and use ps info about RSS.
Well, ok - but why is RSS higher than Xmx?
Theoretically, in case of a java application
RSS = Heap size + MetaSpace + OffHeap size
where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and JVM code itself.
Since JDK 1.8.40 we have Native Memory Tracker!
As you can see, I’ve already added -XX:NativeMemoryTracking=summary property to the JVM, so we can just invoke it from the command line:
docker exec my-app jcmd 1 VM.native_memory summary
(This is what the OP did)
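If you want to see which NMT category grows over time, you can also take a baseline and diff against it later (a sketch, reusing the same container name and PID as above):
docker exec my-app jcmd 1 VM.native_memory baseline
docker exec my-app jcmd 1 VM.native_memory summary.diff    # run later, after the workload has been active for a while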
Don't worry about the "Unknown" section - it seems that NMT is an immature tool and can't deal with CMS GC (this section disappears when you use another GC).
Keep in mind that NMT displays "committed" memory, not "resident" memory (which you get through the ps command). In other words, a memory page can be committed without being counted as resident (until it is directly accessed).
That means that NMT results for non-heap areas (the heap is always preinitialized) might be bigger than RSS values.
(that is where "Why does a JVM report more committed memory than the linux process resident set size?" comes in)
As a result, despite the fact that we set the jvm heap limit to 256m, our application consumes 367M. The “other” 164M are mostly used for storing class metadata, compiled code, threads and GC data.
The first three points are often constant for an application, so the only thing that increases with the heap size is the GC data.
This dependency is linear, but the "k" coefficient (y = kx + b) is much less than 1.
More generally, this seems to be tracked by issue 15020, which reports a similar issue since docker 1.7.
I'm running a simple Scala (JVM) application which loads a lot of data into and out of memory.
I set the JVM to an 8 GB heap (-Xmx8G). I have a machine with 132 GB of memory, and it can't handle more than 7-8 containers because they grow well past the 8 GB limit I imposed on the JVM.
(docker stats was reported as misleading before, as it apparently includes file caches in the total memory usage info)
docker stats shows that each container itself is using much more memory than the JVM is supposed to be using. For instance:
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
dave-1 3.55% 10.61 GB/135.3 GB 7.85% 7.132 MB/959.9 MB
perf-1 3.63% 16.51 GB/135.3 GB 12.21% 30.71 MB/5.115 GB
It almost seems that the JVM is asking the OS for memory, which is allocated within the container, and the JVM is freeing memory as its GC runs, but the container doesn't release the memory back to the main OS. So... memory leak.
Disclaimer: I am not an expert
I had a production incident recently where, under heavy load, pods had a big jump in RSS and Kubernetes killed the pods. There was no OutOfMemoryError exception; Linux just stopped the process in the most hardcore way.
There was a big gap between the RSS and the total space reserved by the JVM. Heap memory, native memory, threads - everything looked OK, yet the RSS was big.
It turned out to be due to how malloc works internally. There are big gaps in memory that malloc takes chunks from. If there are a lot of cores on your machine, malloc tries to adapt and gives every core its own arena to take free memory from, in order to avoid resource contention. Setting export MALLOC_ARENA_MAX=2 solved the issue (a setting sketch follows after the links below). You can find more about this situation here:
Growing resident memory usage (RSS) of Java Process
https://devcenter.heroku.com/articles/tuning-glibc-memory-behavior
https://www.gnu.org/software/libc/manual/html_node/Malloc-Tunable-Parameters.html
https://github.com/jeffgriffith/native-jvm-leaks
P.S. I don't know why there was a jump in RSS memory. Pods are built on Spring Boot + Kafka.
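For reference, a minimal sketch of how the setting can be applied (the startup-script placement and app.jar name are only placeholders; set the variable wherever your JVM process gets its environment):
export MALLOC_ARENA_MAX=2    # limit the number of glibc malloc arenas
exec java -jar app.jar       # app.jar stands in for your actual application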
I have a Java EE application running on jboss-5.0.0.GA. The application uses BIRT report tool to generate several reports.
The server has 4 cores at 2.4 GHz and 8 GB of RAM.
The startup script uses the following options:
-Xms2g -Xmx2g -XX:MaxPermSize=512m
The application has reached some stability with this configuration; some time ago I had a lot of crashes because memory was completely full.
Right now, the application is not crashing, but memory is always fully used.
Example of top command:
Mem: 7927100k total, 7874824k used, 52276k free
The java process shows a usage of 2.6 GB, and this is the only application running on this server.
What can I do to ensure an amount of free memory?
What can I do to try to find a memory leak?
Any other suggestion?
TIA
Based on the answer by mezzie:
If you are using Linux, what the kernel does with the memory is different from how Windows works. In Linux, it will try to use up all the memory. After it uses everything, it will then recycle the memory for further use. This is not a memory leak. We also have JBoss/Tomcat on our Linux server and we did research on this issue a while back.
I found more information about this:
https://serverfault.com/questions/9442/why-does-red-hat-linux-report-less-free-memory-on-the-system-than-is-actually-ava
http://lwn.net/Articles/329458/
And indeed, half of the memory is cached:
total used free shared buffers cached
Mem: 7741 7690 50 0 143 4469
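(For reference, that table comes from the free command. The buffers and cached columns are memory the kernel gives back to applications when they need it, so the effectively available memory here is roughly 50 + 143 + 4469 ≈ 4662 MB:)
free -m    # on older versions of free, the "-/+ buffers/cache" row shows the memory actually available to applications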
If you are using Linux, what the kernel does with the memory is different from how Windows works. In Linux, it will try to use up all the memory. After it uses everything, it will then recycle the memory for further use. This is not a memory leak. We also have JBoss/Tomcat on our Linux server and we did research on this issue a while back.
I bet those are operating system memory values, not Java memory values. Java uses all the memory up to -Xmx and then starts to garbage collect, to vastly oversimplify. Use JConsole to see what the real Java memory usage is.
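If you prefer the command line, jstat gives a similar quick view (a sketch; replace <pid> with the Java process id):
jstat -gcutil <pid> 5000    # per-generation heap occupancy (in %) and GC counts, printed every 5 seconds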
To make it simple, the JVM's maximum memory use is equal to:
MaxPermGen (used permanently while your JVM is running; it contains the class definitions, so it should not grow with the load on your server)
+ Xmx (the maximum size of the object heap, which contains all instances of the objects currently live in the JVM)
+ Xss (thread stack space, multiplied by the number of threads running in your JVM, which can usually be limited on a server)
+ direct memory space (set by -XX:MaxDirectMemorySize=xxxx)
So do the math. If you want to be sure you have free memory left, you will have to limit MaxPermGen, Xmx and the number of threads allowed on your server.
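As a rough worked example with the settings from this question (the thread count, default stack size and direct-buffer figure are assumptions for illustration, not measured values):
2048 MB (-Xmx) + 512 MB (-XX:MaxPermSize) + 200 threads * 1 MB (-Xss, assumed default) + 64 MB (direct buffers, assumed) ≈ 2.8 GB
which is in the same ballpark as the 2.6 GB the java process shows in top.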
The risk is that if the load on your server grows, you can get an OutOfMemoryError...
We have production Tomcat (6.0.18) server which runs with the following settings:
-server -Xms7000M -Xmx7000M -Xss128k -XX:+UseFastAccessorMethods
-XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=7009
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=/opt/apache-tomcat-6.0.18/conf/logging.properties
-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
-Djava.endorsed.dirs=/opt/apache-tomcat-6.0.18/endorsed
-classpath :/opt/apache-tomcat-6.0.18/bin/bootstrap.jar
java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)
After some time of work we get (via JConsole) the following memory consumption:
Current heap size: 3 034 233 kbytes
Maximum heap size: 6 504 832 kbytes
Committed memory: 6 504 832 kbytes
Pending finalization: 0 objects
Garbage collector: Name = 'PS MarkSweep', Collections = 128, Total time spent = 16 minutes
Garbage collector: Name = 'PS Scavenge', Collections = 1 791, Total time spent = 17 minutes
Operating System: Linux 2.6.26-2-amd64
Architecture: amd64
Number of processors: 2
Committed virtual memory: 9 148 856 kbytes
Total physical memory: 8 199 684 kbytes
Free physical memory: 48 060 kbytes
Total swap space: 19 800 072 kbytes
Free swap space: 15 910 212 kbytes
The question is: why do we have so much committed virtual memory? Note that the max heap size is ~7 GB (as expected, since Xmx=7G).
top shows the following:
31413 root 18 -2 8970m 7.1g 39m S 90 90.3 351:17.87 java
Why does the JVM need an additional 2 GB of virtual memory? Can I get a breakdown of non-heap memory, just like in JRockit http://blogs.oracle.com/jrockit/2009/02/why_is_my_jvm_process_larger_t.html ?
Edit 1: Perm is 36M.
It seems that this problem was caused by the very high number of page faults the JVM had. Most likely, when Sun's JVM experiences a lot of page faults it starts to allocate additional virtual memory (I still don't know why), which may in turn increase the I/O pressure even more, and so on. As a result we got very high virtual memory consumption and periodic hangs (up to 30 minutes) on full GC.
Three things helped us to get stable work in production:
Decreasing the tendency of the Linux kernel to swap (for a description, see What Is the Linux Kernel Parameter vm.swappiness?) helped a lot. We have vm.swappiness=20 on all Linux servers which run heavy background JVM tasks (a sketch of the sysctl commands follows after this list).
Decreasing the maximum heap size (-Xmx) to prevent excessive pressure on the OS itself. We now use a 9 GB value on 12 GB machines.
And last but very important: code profiling and optimization of memory-allocation hotspots, to eliminate allocation bursts as much as possible.
That's all. Now servers work very well.
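For reference, a minimal sketch of how the swappiness change can be applied (check your distribution's preferred way of persisting sysctl settings):
sysctl vm.swappiness                                       # show the current value
sudo sysctl -w vm.swappiness=20                            # apply at runtime
echo "vm.swappiness=20" | sudo tee -a /etc/sysctl.conf     # persist across reboots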
-Xms7000M -Xmx7000M
That, to me, is saying to the JVM: "allocate 7 GB as the initial heap size, with a maximum of 7 GB".
So the process will always appear as 7 GB to the OS, because that's what the JVM has asked for via the Xms flag.
What it's actually using internally is what is reported as the heap size of a few hundred MB. Normally you set a high Xms when you want to prevent slowdowns due to excessive garbage collection. When the JVM hits a (JVM-defined) percentage of memory in use, it'll do a quick garbage collection. If this fails to free up memory, it'll try a more detailed collection. Finally, if this fails and the max memory defined by Xmx hasn't been reached, it'll ask the OS for more memory. All this takes time and can really be noticeable on a production server - doing it in advance saves this from happening.
You might want to try to hook up a JConsole to your JVM and look at the memory allocation... Maybe your Perm space is taking this extra 2GB... Heap is only a portion of what your VM needs to be alive...
I'm not familiar with jconsole, but are you sure the JVM is using the extra 2Gb? It looks to me like it's the OS or other processes that bring the total up to 9Gb.
Also, a common explanation for a JVM using significantly more virtual memory than the -Xmx param allows is that you have memory-mapped files (MappedByteBuffer) or use a library that uses MappedByteBuffer.
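One way to check for that (a sketch; replace <pid> with the JVM's process id) is to list the largest mappings and see what they are backed by:
pmap -x <pid> | sort -n -k2 | tail -20    # biggest mappings last; the final column shows the backing file, or an anon marker for anonymous memory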
Hi, we are getting an out-of-memory exception for one of our processes, which is running in a Unix environment. How do we identify the bug? (We observed that there is very little chance of memory leaks in our Java process.) So what else do we need to analyse to find the root cause?
I would suggest using a profiler like YourKit (homepage) so that you can easily find what is allocating so much memory.
In any case you should check which settings are specified for your JVM to understand if you need more heap memory for your program. You can set it by specifying -X params:
java -Xmx2g -Xms512m
would start the JVM with a 2 GB maximum heap and a 512 MB starting size.
If there are no memory leaks, then the application simply needs more memory. Are you running out of heap memory, perm memory, or native memory? For heap memory and perm memory you can increase the allocation using the -Xmx or -XX:MaxPermSize arguments respectively.
But first, try using a profiler to verify that your application is really not leaking any memory.
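If you cannot attach a profiler to the production JVM, a heap dump opened in any heap analyzer will also show what is retaining memory. A sketch (replace <pid>; the dump path and jar name are placeholders):
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -Xmx2g -jar yourapp.jar    # or let the JVM write the dump automatically when the heap runs out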