I am trying to hunt down a memory leak in a Java Spring Boot app running inside a Docker container.
The heap size of the app is limited like this:
java -XX:NativeMemoryTracking=summary -jar /app-0.1.jar -Xms256m -Xmx512m
The native memory diff looks like this:
./jcmd 7 VM.native_memory summary.diff
Native Memory Tracking:
Total: reserved=8295301KB +1728KB, committed=2794537KB +470172KB
Java Heap (reserved=6469632KB, committed=2245120KB +466944KB)
(mmap: reserved=6469632KB, committed=2245120KB +466944KB)
Class (reserved=1141581KB -9KB, committed=103717KB -9KB)
(classes #16347 -86)
(malloc=13133KB -9KB #23221 -306)
(mmap: reserved=1128448KB, committed=90584KB)
Thread (reserved=85596KB +999KB, committed=85596KB +999KB)
(thread #84 +1)
(stack: reserved=85220KB +1027KB, committed=85220KB +1027KB)
(malloc=279KB +3KB #498 +6)
(arena=97KB -31 #162 +2)
Code (reserved=255078KB +32KB, committed=32454KB +228KB)
(malloc=5478KB +32KB #8900 +80)
(mmap: reserved=249600KB, committed=26976KB +196KB)
GC (reserved=249066KB -2KB, committed=233302KB +1302KB)
(malloc=12694KB -2KB #257 -75)
(mmap: reserved=236372KB, committed=220608KB +1304KB)
Compiler (reserved=227KB +10KB, committed=227KB +10KB)
(malloc=96KB +10KB #807 +15)
(arena=131KB #7)
Internal (reserved=68022KB +720KB, committed=68022KB +720KB)
(malloc=67990KB +720KB #21374 -287)
(mmap: reserved=32KB, committed=32KB)
Symbol (reserved=21938KB -11KB, committed=21938KB -11KB)
(malloc=19436KB -11KB #197124 -188)
(arena=2501KB #1)
Native Memory Tracking (reserved=3962KB -12KB, committed=3962KB -12KB)
(malloc=15KB #178 +1)
(tracking overhead=3947KB -12KB)
Arena Chunk (reserved=199KB, committed=199KB)
(malloc=199KB)
After taking the heap dump:
./jmap -dump:live,format=b,file=/tmp/dump2.hprof 7
The Leak Suspects report for the heap dump is quite small: 45 MB.
The question:
Why is Java Heap committed=2245120KB, i.e. about 2.1 GB? That matches neither -Xmx512m nor the size of the heap dump taken with jmap.
The answer is actually simple:
The -Xms256m -Xmx512m parameters were passed in the wrong place: everything after -jar /app-0.1.jar is handed to the application as program arguments, not to the JVM, so the JVM ignored them. The correct order of the parameters is:
java -XX:NativeMemoryTracking=summary -Xms256m -Xmx512m -jar /app-0.1.jar
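A quick way to confirm the flags actually took effect is to print the JVM-reported heap limit from inside the app; a minimal sketch (the class name is just for illustration):
public class HeapCheck {
    public static void main(String[] args) {
        // With -Xmx512m in effect, this reports roughly 512 MB
        // (often slightly less, since one survivor space is excluded).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
Alternatively, ./jcmd 7 VM.flags prints the flags the running JVM actually parsed; with the broken command line above, MaxHeapSize shows the ergonomic default rather than 512 MB.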
Also, the dump is much smaller than the committed Java Heap because only live objects were dumped (the -dump:live option forces a full GC first and keeps only reachable objects). After changing the dump command to:
./jmap -dump:format=b,file=/tmp/dump2.hprof 7
the size of the dump is very close to the committed Java Heap.
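For what it's worth, the same dump can be taken with jcmd, which is already used above; GC.heap_dump writes only live objects unless -all is passed (a sketch, with a hypothetical output path):
./jcmd 7 GC.heap_dump -all /tmp/dump3.hprof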
Related
I am using Tomcat 10 and Java 17 on Windows. I have -Xmx set to 1 GB.
I enabled Native Memory Tracking and I can see that the GC category is taking about 9 GB of memory. Here is the full output of jcmd PID VM.native_memory:
Native Memory Tracking:
(Omitting categories weighting less than 1KB)
Total: reserved=12037466KB, committed=10529414KB
Java Heap (reserved=1048576KB, committed=881664KB)
(mmap: reserved=1048576KB, committed=881664KB)
Class (reserved=1049958KB, committed=7846KB)
(classes #11096)
( instance classes #10222, array classes #874)
(malloc=1382KB #33141)
(mmap: reserved=1048576KB, committed=6464KB)
( Metadata: )
( reserved=57344KB, committed=55104KB)
( used=54882KB)
( waste=222KB =0.40%)
( Class space:)
( reserved=1048576KB, committed=6464KB)
( used=6163KB)
( waste=301KB =4.65%)
Thread (reserved=93410KB, committed=5578KB)
(thread #91)
(stack: reserved=93184KB, committed=5352KB)
(malloc=121KB #558)
(arena=105KB #180)
Code (reserved=251223KB, committed=48443KB)
(malloc=3479KB #13870)
(mmap: reserved=247744KB, committed=44964KB)
GC (reserved=9484143KB, committed=9477967KB)
(malloc=9412015KB #1204531)
(mmap: reserved=72128KB, committed=65952KB)
Compiler (reserved=675KB, committed=675KB)
(malloc=510KB #820)
(arena=165KB #5)
Internal (reserved=1359KB, committed=1359KB)
(malloc=1295KB #22480)
(mmap: reserved=64KB, committed=64KB)
Other (reserved=367KB, committed=367KB)
(malloc=367KB #69)
Symbol (reserved=11951KB, committed=11951KB)
(malloc=10185KB #305552)
(arena=1766KB #1)
Native Memory Tracking (reserved=25251KB, committed=25251KB)
(malloc=396KB #5669)
(tracking overhead=24855KB)
Shared class space (reserved=12096KB, committed=12096KB)
(mmap: reserved=12096KB, committed=12096KB)
Arena Chunk (reserved=315KB, committed=315KB)
(malloc=315KB)
Tracing (reserved=32KB, committed=32KB)
(arena=32KB #1)
Logging (reserved=5KB, committed=5KB)
(malloc=5KB #216)
Arguments (reserved=3KB, committed=3KB)
(malloc=3KB #92)
Module (reserved=399KB, committed=399KB)
(malloc=399KB #2322)
Safepoint (reserved=8KB, committed=8KB)
(mmap: reserved=8KB, committed=8KB)
Synchronization (reserved=79KB, committed=79KB)
(malloc=79KB #849)
Serviceability (reserved=1KB, committed=1KB)
(malloc=1KB #14)
Metaspace (reserved=57614KB, committed=55374KB)
(malloc=270KB #236)
(mmap: reserved=57344KB, committed=55104KB)
String Deduplication (reserved=1KB, committed=1KB)
(malloc=1KB #8)
From this, I understand that the GC requires nine times more memory than my application itself (9 GB compared with the 1 GB max heap). Am I missing something?
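One way to narrow this down with the same tool: NMT can take a baseline and then report a diff against it, which shows whether the GC category's malloc counter (already at #1204531 allocations above) keeps climbing while the app runs:
jcmd PID VM.native_memory baseline
(let the application run for a while, then)
jcmd PID VM.native_memory summary.diff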
I am using jcmd with VM.native_memory summary.diff scale=MB to see why my JVM uses more and more memory over time.
The output looks like this:
Native Memory Tracking:
Total: reserved=1541MB +19MB, committed=290MB +45MB
- Java Heap (reserved=76MB, committed=48MB)
(mmap: reserved=76MB, committed=48MB)
- Class (reserved=1093MB +11MB, committed=77MB +11MB)
(classes #11008 +980)
(malloc=3MB +1MB #22851 +7789)
(mmap: reserved=1090MB +10MB, committed=74MB +11MB)
- Thread (reserved=43MB +1MB, committed=43MB +1MB)
(thread #44 +1)
(stack: reserved=43MB +1MB, committed=43MB +1MB)
- Code (reserved=251MB +5MB, committed=45MB +30MB)
(malloc=7MB +5MB #11790 +7595)
(mmap: reserved=244MB, committed=37MB +25MB)
- GC (reserved=53MB, committed=52MB)
(malloc=18MB #65380 +18712)
(mmap: reserved=35MB, committed=34MB)
- Internal (reserved=3MB, committed=3MB)
(malloc=3MB #20304 +5733)
- Symbol (reserved=18MB +1MB, committed=18MB +1MB)
(malloc=16MB +1MB #134791 +6616)
(arena=2MB #1)
- Native Memory Tracking (reserved=4MB +1MB, committed=4MB +1MB)
(tracking overhead=4MB +1MB)
As you can see, the Code category has grown by 30 MB committed since I started the JVM. According to https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr022.html#BABHIFJC
this category means generated code.
Unfortunately, I could not find any more details on it. What is "generated code"?
I use one annotation in my code very often. Could that be the "generated code"?
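For reference, the JIT compiler's code cache (the bulk of this NMT category) is also visible from inside the JVM as a memory pool; a minimal sketch using the standard java.lang.management API (pool names vary by JVM version):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheWatch {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Java 8 has a single "Code Cache" pool; Java 9+ splits it
            // into several "CodeHeap ..." pools.
            if (pool.getName().contains("Code")) {
                System.out.printf("%s: used=%d KB%n",
                        pool.getName(), pool.getUsage().getUsed() / 1024);
            }
        }
    }
}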
Thank you!
Why does a JVM process using ZGC (OpenJDK 11, on CentOS, inside Docker) use huge shared memory?
Server: 2 cores/4G memory;
VIRT: 17.032t, RES: 7.974g, SHR: 7.382g, %CPU: 26.9, %MEM: 199
JVM parameters:
-Xmx3296m -Xms1977m -Xss256k
-XX:MetaspaceSize=128m
-XX:MaxMetaspaceSize=256m
-XX:+UnlockExperimentalVMOptions
-XX:+UseZGC
-XX:MinHeapFreeRatio=50
-XX:MaxHeapFreeRatio=80
After turning off ZGC, shared memory drops to almost nothing, as below:
VIRT: 29g, RES: 1.5g, SHR: 33564, %CPU: 26, %MEM: 39.
I'm using Lucene v4.10.4. I have a pretty big index; it can be over a few GB. I get an OutOfMemoryError when initializing the IndexSearcher:
try (Directory dir = FSDirectory.open(new File(indexPath))) {
    // OutOfMemoryError is thrown here!
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
}
How do I tell Lucene's DirectoryReader not to load more than 256 MB into memory at once?
Log
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:68)
at org.apache.lucene.util.fst.FST.<init>(FST.java:386)
at org.apache.lucene.util.fst.FST.<init>(FST.java:321)
at org.apache.lucene.codecs.blocktree.FieldReader.<init>(FieldReader.java:85)
at org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:192)
at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:108)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:62)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:923)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:53)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:67)
First, you should check the current maximum heap size of your JVM:
java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
If this number is not reasonable for your use case, increase it by running your program with the -Xmx option of the java command. A sample command assigning 8 GB of heap memory looks like this:
java -Xmx8g -jar your_jar_file
Hope this helps.
When you start up a JVM, WHERE is the PERM GEN allocated? Is it part of the main HEAP, or is it in addition to the HEAP size?
For example, if I use the following parameters:
-server -Xms10G -Xmx10G -XX:MaxPermSize=1536M
Is the total size of the Java process going to be 10G + 1.5G = 11.5G, or is the perm generation set up inside the HEAP, meaning the running application would have 10G - 1.5G = 8.5G for young / tenured (aka OLD) plus Perm?
The graphic in section 4, "Sizing the Generations", seems to imply it may be outside of the HEAP, but I can't find anything that states this for sure:
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
Looking at the output from jstat, it would seem perm gen is outside of the main HEAP, but this may just be the way it is reported.
[jboss#pts03-taps-03 ~]$ jstat -gccapacity PID
NGCMN NGCMX NGC S0C S1C EC OGCMN OGCMX OGC OC PGCMN PGCMX PGC PC YGC FGC
85184.0 85184.0 85184.0 8512.0 8512.0 68160.0 10400576.0 10400576.0 10400576.0 10400576.0 21248.0 1572864.0 1387840.0 1387840.0 431 43
OGCMX = 10400576.0 (almost 10G OLD GEN)
NGCMX = 85184.0 (NEW GEN; OGCMX + NGCMX = 10485760 KB, i.e. exactly the 10G of -Xmx10G)
PGCMX = 1572864.0 (exactly 1.5G PERM GEN, apparently on top of that)
If possible, please provide a link to documentation showing your answer to be true.
-server -Xms10G -Xmx10G -XX:MaxPermSize=1536M
The total of the heap and perm gen is going to be 11.5 GB. However, there are other areas of memory, e.g. direct memory, which can be just as big. Another area is shared libraries, which take a basically fixed amount.
For example, you can set
-mx128m -XX:MaxPermSize=1g
If the perm gen were inside the heap, this would fail, because MaxPermSize would be larger than the entire heap.
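You can also observe the split at runtime; a minimal sketch using the standard java.lang.management API (on pre-Java-8 HotSpot, perm gen is accounted under the non-heap usage, separate from the -Xmx-bounded heap):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Heap usage is bounded by -Xmx; perm gen (pre-Java-8) is reported
        // under non-heap usage, i.e. it is allocated alongside the heap.
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.println("heap max (bytes):     " + heap.getMax());
        System.out.println("non-heap max (bytes): " + nonHeap.getMax()); // -1 if no cap is set
    }
}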