Why does a JVM process using ZGC on OpenJDK 11 (running on CentOS inside Docker) use huge shared memory?
Server: 2 cores/4G memory;
VIRT: 17.032t, RES: 7.974g, SHR: 7.382g, %CPU: 26.9, %MEM: 199
JVM parameters:
-Xmx3296m -Xms1977m -Xss256k
-XX:MetaspaceSize=128m
-XX:MaxMetaspaceSize=256m
-XX:+UnlockExperimentalVMOptions
-XX:+UseZGC
-XX:MinHeapFreeRatio=50
-XX:MaxHeapFreeRatio=80
After turning off ZGC, shared memory drops to almost nothing by comparison, as shown below.
VIRT: 29g, RES: 1.5g, SHR: 33564, %CPU: 26, %MEM: 39.
I am trying to hunt down a memory leak in a Java Spring Boot app running inside a Docker container.
The heap size of the app is limited like this:
java -XX:NativeMemoryTracking=summary -jar /app-0.1.jar -Xms256m -Xmx512m
Native memory diff looks like this:
./jcmd 7 VM.native_memory summary.diff
Native Memory Tracking:
Total: reserved=8295301KB +1728KB, committed=2794537KB +470172KB
Java Heap (reserved=6469632KB, committed=2245120KB +466944KB)
(mmap: reserved=6469632KB, committed=2245120KB +466944KB)
Class (reserved=1141581KB -9KB, committed=103717KB -9KB)
(classes #16347 -86)
(malloc=13133KB -9KB #23221 -306)
(mmap: reserved=1128448KB, committed=90584KB)
Thread (reserved=85596KB +999KB, committed=85596KB +999KB)
(thread #84 +1)
(stack: reserved=85220KB +1027KB, committed=85220KB +1027KB)
(malloc=279KB +3KB #498 +6)
(arena=97KB -31 #162 +2)
Code (reserved=255078KB +32KB, committed=32454KB +228KB)
(malloc=5478KB +32KB #8900 +80)
(mmap: reserved=249600KB, committed=26976KB +196KB)
GC (reserved=249066KB -2KB, committed=233302KB +1302KB)
(malloc=12694KB -2KB #257 -75)
(mmap: reserved=236372KB, committed=220608KB +1304KB)
Compiler (reserved=227KB +10KB, committed=227KB +10KB)
(malloc=96KB +10KB #807 +15)
(arena=131KB #7)
Internal (reserved=68022KB +720KB, committed=68022KB +720KB)
(malloc=67990KB +720KB #21374 -287)
(mmap: reserved=32KB, committed=32KB)
Symbol (reserved=21938KB -11KB, committed=21938KB -11KB)
(malloc=19436KB -11KB #197124 -188)
(arena=2501KB #1)
Native Memory Tracking (reserved=3962KB -12KB, committed=3962KB -12KB)
(malloc=15KB #178 +1)
(tracking overhead=3947KB -12KB)
Arena Chunk (reserved=199KB, committed=199KB)
(malloc=199KB)
After taking the heap dump:
./jmap -dump:live,format=b,file=/tmp/dump2.hprof 7
The Leak Suspects report from the heap dump is quite small, only 45 MB.
The question:
Why is the Java heap committed=2245120KB, almost 2 GB? It is aligned neither with -Xmx512m nor with the size of the heap dump taken with jmap.
The answer is actually simple:
The params -Xms256m -Xmx512m were passed in the wrong place: everything after the jar is treated as a program argument, so they were ignored by the JVM. The correct order of params is this:
java -XX:NativeMemoryTracking=summary -Xms256m -Xmx512m -jar /app-0.1.jar
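A quick way to see what goes wrong with the original ordering (a minimal sketch; ArgsCheck is a hypothetical stand-in for the application's main class): everything placed after the jar arrives in main() as a program argument, while the heap limit the JVM actually applied can be read back from Runtime.

import java.util.Arrays;

public class ArgsCheck {
    public static void main(String[] args) {
        // Launched as "java -jar app.jar -Xms256m -Xmx512m", both flags arrive here
        // verbatim as ordinary program arguments and are never seen by the JVM.
        System.out.println("program arguments: " + Arrays.toString(args));
        // The heap limit the JVM is actually using, in bytes; it reflects -Xmx only
        // when -Xmx is placed before -jar.
        System.out.println("Runtime.getRuntime().maxMemory() = "
                + Runtime.getRuntime().maxMemory());
    }
}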
Also, the dump is much smaller than the committed Java heap because only live objects were dumped, due to the live option in -dump:live. After changing the dump command to:
./jmap -dump:format=b,file=/tmp/dump2.hprof 7
the size of the dump is very close to the committed Java heap size.
JEP 192: String Deduplication in G1 implemented in Java 8 Update 20 added the new String deduplication feature:
Reduce the Java heap live-data set by enhancing the G1 garbage collector so that duplicate instances of String are automatically and continuously deduplicated.
The JEP page mentions that a command-line option UseStringDeduplication (bool) allows the dedup feature to be enabled or disabled. But the JEP page does not go so far as to indicate the default.
➠ Is the dedup feature ON or OFF by default in the G1 garbage collector bundled with Java 8 and with Java 9?
➠ Is there a “getter” method to verify the current setting at runtime?
I do not know where to look for documentation beyond the JEP page.
In at least the HotSpot-equipped implementations of Java 9, the G1 garbage collector is enabled by default. That fact prompted this Question now. For more info on String interning and deduplication, see this 2014-10 presentation by Aleksey Shipilev at 29:00.
String deduplication off by default
For the versions of Java 8 and Java 9 seen below, UseStringDeduplication is false (disabled) by default.
One way to verify the setting is to list all of the JVM's final flags and look for it:
build 1.8.0_131-b11
$ java -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version | grep -i 'duplicat'
bool PrintStringDeduplicationStatistics = false {product}
uintx StringDeduplicationAgeThreshold = 3 {product}
bool StringDeduplicationRehashALot = false {diagnostic}
bool StringDeduplicationResizeALot = false {diagnostic}
bool UseStringDeduplication = false {product}
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
build 9+181
$ java -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version | grep -i 'duplicat'
uintx StringDeduplicationAgeThreshold = 3 {product} {default}
bool StringDeduplicationRehashALot = false {diagnostic} {default}
bool StringDeduplicationResizeALot = false {diagnostic} {default}
bool UseStringDeduplication = false {product} {default}
java version "9"
Java(TM) SE Runtime Environment (build 9+181)
Java HotSpot(TM) 64-Bit Server VM (build 9+181, mixed mode)
Another way to test it is with a small program that keeps creating duplicate strings:
package jvm;

import java.util.ArrayList;
import java.util.List;

public class StringDeDuplicationTester {
    public static void main(String[] args) throws Exception {
        List<String> strings = new ArrayList<>();
        while (true) {
            for (int i = 0; i < 100_00; i++) {
                // new String(...) forces distinct instances with identical contents,
                // exactly the kind of duplicates that G1 string deduplication collapses.
                strings.add(new String("String " + i));
            }
            Thread.sleep(100);
        }
    }
}
Run it without explicitly enabling the flag:
$ java -Xmx256m -XX:+UseG1GC -XX:+PrintStringDeduplicationStatistics jvm.StringDeDuplicationTester
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at jvm.StringDeDuplicationTester.main(StringDeDuplicationTester.java:12)
Run it with the flag explicitly turned on:
$ java -Xmx256m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics jvm.StringDeDuplicationTester
[GC concurrent-string-deduplication, 5116.7K->408.7K(4708.0K), avg 92.0%, 0.0246084 secs]
[Last Exec: 0.0246084 secs, Idle: 1.7075173 secs, Blocked: 0/0.0000000 secs]
[Inspected: 130568]
[Skipped: 0( 0.0%)]
[Hashed: 130450( 99.9%)]
[Known: 0( 0.0%)]
[New: 130568(100.0%) 5116.7K]
[Deduplicated: 120388( 92.2%) 4708.0K( 92.0%)]
[Young: 0( 0.0%) 0.0B( 0.0%)]
[Old: 120388(100.0%) 4708.0K(100.0%)]
[Total Exec: 1/0.0246084 secs, Idle: 1/1.7075173 secs, Blocked: 0/0.0000000 secs]
[Inspected: 130568]
[Skipped: 0( 0.0%)]
[Hashed: 130450( 99.9%)]
[Known: 0( 0.0%)]
[New: 130568(100.0%) 5116.7K]
[Deduplicated: 120388( 92.2%) 4708.0K( 92.0%)]
[Young: 0( 0.0%) 0.0B( 0.0%)]
[Old: 120388(100.0%) 4708.0K(100.0%)]
[Table]
[Memory Usage: 264.9K]
[Size: 1024, Min: 1024, Max: 16777216]
[Entries: 10962, Load: 1070.5%, Cached: 0, Added: 10962, Removed: 0]
[Resize Count: 0, Shrink Threshold: 682(66.7%), Grow Threshold: 2048(200.0%)]
[Rehash Count: 0, Rehash Threshold: 120, Hash Seed: 0x0]
[Age Threshold: 3]
[Queue]
[Dropped: 0]
[GC concurrent-string-deduplication, deleted 0 entries, 0.0000008 secs]
...
output truncated
Note: this output is from build 1.8.0_131-b11. It looks like Java 9 has no option to print string deduplication statistics. Potential bug?
No. Unified logging killed this specific option.
$ java -Xmx256m -XX:+UseG1GC -XX:+PrintStringDeduplicationStatistics -version
Unrecognized VM option 'PrintStringDeduplicationStatistics'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Jigar has already shown precisely how to inspect the relevant JVM flags and statistics; to add links to some useful documents addressing this part of the question:
I do not know where to look for documentation beyond the JEP page.
The Java 9 Release Notes describe the implementation of JEP 248: Make G1 the Default Garbage Collector with the line:
In JDK 9, the default garbage collector is G1 when a garbage collector
is not explicitly specified.
The java tool documentation details the usage of the flag
-XX:+UseStringDeduplication
Enables string deduplication. By default, this option is disabled. To
use this option, you must enable the garbage-first (G1) garbage
collector.
String deduplication reduces the memory footprint of String objects on
the Java heap by taking advantage of the fact that many String objects
are identical. Instead of each String object pointing to its own
character array, identical String objects can point to and share the
same character array.
Also addressing the open question there, whether
Java 9 has no option to print String de-duplication statistics.
With the implementation of JEP 158: Unified JVM Logging in Java 9, the garbage-collector logging flags are marked as legacy, and the alternative way of tracing them is the -Xlog feature. A detailed list of replacements for converting GC logging flags to -Xlog is listed here. One of them suggests replacing
PrintStringDeduplicationStatistics => -Xlog:stringdedup*=debug
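So the Java 8 run shown earlier translates to Java 9 roughly like this (a sketch; the -Xlog output format differs from the Java 8 statistics block above):
$ java -Xmx256m -XX:+UseG1GC -XX:+UseStringDeduplication -Xlog:stringdedup*=debug jvm.StringDeDuplicationTester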
I'm using Lucene v4.10.4. I have a pretty big index; it can be over a few GB. I get an OutOfMemoryError when initializing the IndexSearcher:
try (Directory dir = FSDirectory.open(new File(indexPath))) {
    // Out of memory here, while opening the reader!
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    // ...
}
How can I tell Lucene's DirectoryReader not to load more than 256 MB into memory at once?
Log
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:68)
at org.apache.lucene.util.fst.FST.<init>(FST.java:386)
at org.apache.lucene.util.fst.FST.<init>(FST.java:321)
at org.apache.lucene.codecs.blocktree.FieldReader.<init>(FieldReader.java:85)
at org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:192)
at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:108)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:62)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:923)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:53)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:67)
First, you should check the current maximum heap size of your JVM:
java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
If this number is not reasonable for your use case, you should increase it by running your program with the -Xmx option of the java command. A sample command assigning 8 GB of heap memory would look like this:
java -Xmx8g -jar your_jar_file
Hope this helps.
We have a long-running Java app with a memory leak issue. I have tried to use jmap, Eclipse, jvisualvm, etc. to diagnose the problem, but did not make much progress; any suggestions will be appreciated. We use Java 1.7.0_67 and start the program with the following settings:
java -server -Xmx500M -Xms500M -XX:NewSize=300M \
-verbosegc -Xloggc:/var/log/singer/gc.log -XX:+UseGCLogFileRotation \
-XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=2M -XX:+PrintGCDetails \
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintClassHistogram \
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
After running for a few days, "top -p" shows something like the following:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
29503 root 35 15 8240m 1.2g 14m S 36 2.0 215:31.18 java
The 'top' command shows that the resident memory usage of our program is 1.2 GB, way more than the 500 MB max heap size that we set.
The following shows some JVM metrics. The numbers are nowhere near 1.2 GB, and the program does not use this much memory when it starts.
jvm_gc_ConcurrentMarkSweep_cycles: 5
jvm_gc_ConcurrentMarkSweep_msec: 110
jvm_gc_ParNew_cycles: 26129
jvm_gc_ParNew_msec: 130964
jvm_gc_cycles: 26134
jvm_gc_msec: 131074
jvm_buffer_direct_count: 27
jvm_buffer_direct_max: 463077
jvm_buffer_direct_used: 463077
jvm_buffer_mapped_count: 0
jvm_buffer_mapped_max: 0
jvm_buffer_mapped_used: 0
jvm_classes_current_loaded: 2821
jvm_classes_total_loaded: 2821
jvm_classes_total_unloaded: 0
jvm_compilation_time_msec: 12976
jvm_current_mem_CMS_Old_Gen_max: 209715200
jvm_current_mem_CMS_Old_Gen_used: 82458736
jvm_current_mem_CMS_Perm_Gen_max: 85983232
jvm_current_mem_CMS_Perm_Gen_used: 20445832
jvm_current_mem_Code_Cache_max: 50331648
jvm_current_mem_Code_Cache_used: 4465792
jvm_current_mem_Par_Eden_Space_max: 251658240
jvm_current_mem_Par_Eden_Space_used: 131968344
jvm_current_mem_Par_Survivor_Space_max: 31457280
jvm_current_mem_Par_Survivor_Space_used: 2681328
jvm_current_mem_used: 242020032
jvm_fd_count: 493
jvm_fd_limit: 65536
jvm_heap_committed: 492830720
jvm_heap_max: 492830720
jvm_heap_used: 217095032
jvm_nonheap_committed: 38780928
jvm_nonheap_max: 136314880
jvm_nonheap_used: 24911624
jvm_num_cpus: 32
jvm_post_gc_CMS_Old_Gen_max: 209715200
jvm_post_gc_CMS_Old_Gen_used: 13095808
jvm_post_gc_CMS_Perm_Gen_max: 85983232
jvm_post_gc_CMS_Perm_Gen_used: 20444448
jvm_post_gc_Par_Eden_Space_max: 251658240
jvm_post_gc_Par_Eden_Space_used: 0
jvm_post_gc_Par_Survivor_Space_max: 31457280
jvm_post_gc_Par_Survivor_Space_used: 2681328
jvm_post_gc_used: 36221584
jvm_start_time: 1440568584192
jvm_thread_count: 65
jvm_thread_daemon_count: 25
jvm_thread_peak_count: 79
jvm_uptime: 50765537
The process status:
Name: java
State: S (sleeping)
Tgid: 29503
Ngid: 0
Pid: 29503
PPid: 1
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 1024
Groups: 0
VmPeak: 8440764 kB
VmSize: 8439440 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 1232168 kB
VmRSS: 1232168 kB
VmData: 8386608 kB
VmStk: 136 kB
VmExe: 4 kB
VmLib: 15320 kB
VmPTE: 3296 kB
VmSwap: 0 kB
Threads: 104
SigQ: 0/241457
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000002
SigCgt: 2000000181005ccd
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
Seccomp: 0
Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff
Cpus_allowed_list: 0-127
Mems_allowed: 00000000,00000003
Mems_allowed_list: 0-1
voluntary_ctxt_switches: 52
nonvoluntary_ctxt_switches: 2
This HAS to be a dupe, but let me give you a quick tip.
First of all, you need a profiler of some sort; there are quite a few to choose from that can do this. Get the profiler running against your app, then do the following (a command-line approximation is sketched after the list):
Run 2 garbage collections (The profiler can do this)
Create a dump that saves the count of your classes
Let your program run a while, long enough to lose some memory
Run 2 garbage collections again
Make a second dump
Diff the two dumps (there should be a function in the profiler that gives you the class-count increase since the first dump, making this really easy)
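If a full profiler is not at hand, a rough command-line approximation of the steps above (a sketch; <pid> stands for the target process id) is to take two class histograms and diff them; the live option forces a full GC before counting:
$ jmap -histo:live <pid> > histo-before.txt
(let the program run long enough to leak)
$ jmap -histo:live <pid> > histo-after.txt
$ diff histo-before.txt histo-after.txt | head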
What you are looking for is a class count that increases dramatically for one or more classes. Once you find this, all you have to do is figure out what is referring to (holding a reference to) that class (this should also be in your dump somewhere); that's your leak. When more than one class is increasing in number, look for the root one that contains references to the others; that's the one you have to free.
It's not your VM settings or anything like that, just a simple programming bug somewhere keeping references that you thought were freed, like adding a listener without removing it or forgetting to dispose of frames; a minimal example of this pattern is sketched below.
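A minimal sketch of that kind of bug (hypothetical class names): a static registry that only ever adds listeners, so every object you thought was freed stays reachable and shows up as an ever-growing class count in the dump diff.

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // The static list keeps a strong reference to every listener forever.
    static final List<Runnable> LISTENERS = new ArrayList<>();

    static void openAndCloseDialog() {
        Runnable listener = () -> System.out.println("event");
        LISTENERS.add(listener);
        // Missing: LISTENERS.remove(listener). The listener (and anything it
        // captures) can never be garbage collected.
    }

    public static void main(String[] args) throws Exception {
        while (true) {
            openAndCloseDialog();
            Thread.sleep(1);
        }
    }
}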
In order to find a leak we usually use a combination of JMeter and VisualVM heap dumps. The procedure is the following:
launch the test app & connect to the JVM with a profiler (in our case VisualVM)
start a JMeter script to emulate real app usage
start the Sampler in VisualVM; if the app is big, create a memory dump and analyze it separately. Inside the dump, look at the number of bytes and the number of class instances created for your application. Initially, pay attention to your application-specific classes.
Based on the details provided in your description it is hard to say where the root cause of the leak is. You should know your app better (it might be session beans that are not cleaned up by GC, etc.), but as I mentioned, a memory dump is a good thing to start with.
For real production app servers it's always good to have JMX configured in order to troubleshoot these kinds of problems later on.
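For reference, exposing JMX for such troubleshooting can look roughly like this (a sketch for a trusted network only; app.jar and the port are placeholders, and a production setup should enable authentication and SSL):
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar app.jar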
One of the articles to start with.
When you start up a JVM, WHERE is the perm gen allocated? Is it part of the main heap, or is it in addition to the heap size?
For example, if I use the following parameters:
-server -Xms10G -Xmx10G -XX:MaxPermSize=1536M
Is the total size of the Java process going to be 10 GB + 1.5 GB = 11.5 GB, or is the perm generation set up inside of the heap, meaning that the running application would have 10 GB - 1.5 GB = 8.5 GB for young / tenured (aka old) plus perm?
The graphic in "4. Sizing the Generations" seems to imply it may be outside of the heap, but I can't find anything that states this for sure.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
Looking at the output from jstat, it would seem the perm gen is outside of the main heap, but this may just be the way it is reported.
[jboss#pts03-taps-03 ~]$ jstat -gccapacity PID
NGCMN NGCMX NGC S0C S1C EC OGCMN OGCMX OGC OC PGCMN PGCMX PGC PC YGC FGC
85184.0 85184.0 85184.0 8512.0 8512.0 68160.0 10400576.0 10400576.0 10400576.0 10400576.0 21248.0 1572864.0 1387840.0 1387840.0 431 43
OGCMX = 10400576.0 (almost 10G OLD GEN)
NGCMX = 85184.0 (NEW GEN; OGCMX + NGCMX comes to very close to 10G)
PGCMX = 1572864.0 (1.5G PERM GEN)
If possible, please provide a link to documentation showing your claim to be true.
-server -Xms10G -Xmx10G -XX:MaxPermSize=1536M
The total of the heap and perm gen is going to be 11.5 GB. However, there are other areas of memory, e.g. direct memory, which can be just as big. Another area is shared libraries, which are basically a fixed size.
For example, you can set
-mx128m -XX:MaxPermSize=1g
If the perm gen were inside the heap, this would fail.
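This is easy to check on a Java 7 JVM: the command below starts and prints the version normally even though the requested perm gen is eight times the entire maximum heap (a sketch; on Java 8+ the MaxPermSize option is simply ignored with a warning, because the perm gen was removed):
$ java -mx128m -XX:MaxPermSize=1g -version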