How to interpret jeprof output - java

Recently I came across a Java-related memory leak: free memory on the server kept decreasing until it triggered the RAM warning we have set up in Nagios. I investigated and found that the leak is not in the heap area, yet the Tomcat process's memory consumption keeps growing.
server memory graph - 7 days
I did a heap memory analysis and found nothing there (if I run jcmd <pid> GC.run, heap usage drops from about 2.8GB to around 200MB). heap memory graph - 7 days
I checked the metaspace and the other JVM-related memory areas, following the discussion in this video and post:
https://www.youtube.com/watch?t=2483&v=c755fFv1Rnk&feature=youtu.be
https://github.com/jeffgriffith/native-jvm-leaks/blob/master/README.md
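For reference, the JVM-internal areas mentioned above can be inspected with Native Memory Tracking. A minimal sketch, assuming the process is restarted with the NMT flag enabled (summary mode, which has low overhead):
# start the JVM with NMT enabled
java -XX:NativeMemoryTracking=summary ...
# then query the running process by pid
jcmd <pid> VM.native_memory summary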
Finally, I added jemalloc to profile native memory allocation, and here is some of the output that I got.
output 1
output 2
But I couldn't interpret this output and I'm not sure whether this output is correct or not.
I am also not sure whether jeprof works with the Oracle JDK.
Could you please help me on this?
Additional info:
server memory: 4GB
Xmx: 3072M (recently changed from 2048M; the memory behavior is similar in both cases)
Xms: 3072M (recently changed from 2048M; the memory behavior is similar in both cases)
javac -version: jdk1.8.0_72
java version "1.8.0_72"
Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)
jemalloc configs:
jemalloc version: https://github.com/jemalloc/jemalloc/releases/download/5.2.1/jemalloc-5.2.1.tar.bz2
export LD_PRELOAD=/usr/local/lib/libjemalloc.so
export MALLOC_CONF=prof:true,lg_prof_interval:31,lg_prof_sample:17,prof_prefix:/opt/jemalloc/jeprof-output/jeprof
My application runs on a Tomcat server in an EC2 instance (it is the only application on that server).
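For what it's worth, here is a minimal sketch of how the dumped profiles are usually turned into something readable, assuming jeprof and graphviz are installed and that /usr/bin/java is the profiled binary (paths are illustrative):
# text report of native allocation sites, in bytes
jeprof --show_bytes --text /usr/bin/java /opt/jemalloc/jeprof-output/jeprof.*.heap
# or render a call graph (requires graphviz)
jeprof --show_bytes --gif /usr/bin/java /opt/jemalloc/jeprof-output/jeprof.*.heap > profile.gif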

Related

OpenJ9 tomcat won't start with high -Xmx heap option

I have a Spring app running in Tomcat 9.0.6 on 64-bit Linux. Because it needs a lot of memory, I would like to try the OpenJ9 JVM, which is supposedly more efficient in that regard (current heap limit with HotSpot: -Xmx128G).
I installed the 64-bit adoptopenjdk-8-jdk-openj9:
/usr/lib/jvm/adoptopenjdk-8-jdk-openj9/bin/java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
Eclipse OpenJ9 VM (build openj9-0.14.2, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20190521_315 (JIT enabled, AOT enabled)
OpenJ9 - 4b1df46fe
OMR - b56045d2
JCL - a8c217d402 based on jdk8u212-b04)
Starting the tomcat causes the following error:
This JVM package only includes the '-Xcompressedrefs' configuration. Please run the VM without specifying the '-Xnocompressedrefs' option or by specifying the '-Xcompressedrefs' option.
After I set this option I get the following error:
JVMJ9GC028E Option too large: '-Xmx'
JVMJ9VM015W Initialization error for library j9gc29(2): Failed to initialize
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Documentation isn't that clear, but I found this:
https://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/mm_gc_compressed_refs.html
Compressed references are used by default on a 64-bit IBM SDK when the value of -Xmx, which sets the maximum Java heap size, is in the correct range. On AIX®, Linux and Windows systems, the default range is 0 - 57 GB. For larger heap sizes, you can try to use compressed references by explicitly setting -Xcompressedrefs. However, larger heap sizes might result in an out of memory condition at run time because the VM requires some memory at low addresses. You might be able to resolve an out of memory condition in low addresses by using the -Xmcrs option.
So basically, at least this build of the JDK only supports compressed references, and to use them I must set -Xcompressedrefs manually, since my -Xmx is above the range where they are enabled by default. But that fails because my OS has already allocated too much of the <4GB address range, some of which is needed for compressed references. Since I can never guarantee that this won't be the case, is there any way I can use OpenJ9 without compressed references? And would that even yield the memory-consumption benefits? Or is there any way I can use compressed references with very high -Xmx settings?
I also tried setting this option, but it didn't help: https://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/openj9/xmcrs/index.html?view=embed
How do I find the correct size for it? 1G and 64m failed. Even if I find the correct setting, how would this value guarantee that the OS hasn't already allocated all the lower memory addresses?
The limit to use the compressed refs JVM is 57G and you can't run it if the -Xnocompressedrefs option is specified.
The 57G division is documented here: https://www.eclipse.org/openj9/docs/xcompressedrefs/
The -Xnocompressedrefs problem is mentioned in the release notes: https://github.com/eclipse/openj9/blob/master/doc/release-notes/0.15/0.15.md
With a reference to: https://github.com/eclipse/openj9/issues/479
Creating a single JVM that supports both is covered by: https://github.com/eclipse/openj9/issues/643
https://github.com/eclipse/openj9/pull/7505
(With thanks to the Eclipse OpenJ9 Slack community, especially Peter Shipton)
I found this build which allows noncompressedrefs and thus solves my issues: https://adoptopenjdk.net/releases.html?variant=openjdk8&jvmVariant=openj9#linuxxl
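For completeness, a quick way to confirm that such a build accepts an uncompressed-references configuration before wiring it into Tomcat might look like this (a sketch; the install path is illustrative):
/opt/openj9-large-heap-jdk8/bin/java -Xnocompressedrefs -Xmx128G -version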

Maximum heap size for a Java process on Windows 10 64 bit running a 64 bit JVM

What is the maximum heap size for a Java process running on 64-bit Windows 10 with a 64-bit JVM? My machine has 8 GB of RAM and I am running Java 8.
I am trying to run BFS on a huge graph for experimental purposes. While it runs I monitor the heap size in Java VisualVM. According to VisualVM, heap utilization is always less than 2000 MB regardless of providing the following JVM parameters:
-Xms2048m
-Xmx3072m
-XX:ReservedCodeCacheSize=240m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
I did some research on the internet but could not find a specific answer for the system specification I am using. Can a Java process use more than 2 GB on 64-bit Windows 10 with a 64-bit JVM? According to the guidelines for Java heap sizing, the limit for Windows XP/2008/7 is 2 GB.
On a 64-bit machine, with 64-bit JVM you can work with multi gigabyte heaps (dozens and dozens of GBs). I'm not sure it's limited in any way except by the available memory (and the theoretical address space of a 64-bit pointer).
Of course if you're working with a huge heap, the GC has a lot more work to do and you may find that you need to scale horizontally instead of vertically, to maintain a good performance.
If VisualVM isn't showing you using more than 2GB (the initial heap size given with -Xms), then the application probably just doesn't need more than that. You've given it permission to use up to 3GB (-Xmx), but the JVM won't allocate more memory just for the fun of it.
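As a quick cross-check, the heap limits the running JVM actually picked up can be inspected with the JDK tools (a sketch; <pid> is the process id reported by jps):
jps -l
jcmd <pid> VM.flags
The VM.flags output includes the effective -XX:InitialHeapSize and -XX:MaxHeapSize values.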
The maximum heap a 32-bit JVM can address is 2^32 = 4 GB, and part of that address space is used by the VM itself for runtime structures; in practice the usable limit is about ~2 GB on Windows and ~3 GB on Linux.
As you are using a 64-bit machine, the theoretical maximum heap is 2^64, which will be big enough for you to run BFS easily.
You can check the configured limit with the VM flag -XX:+PrintFlagsFinal; filtering the output for HeapSize shows the maximum heap size that can be used. Configure slightly less than that and start from there.
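For example, a sketch of that check (grep shown for a Unix-like shell; on Windows you can pipe to findstr /i instead):
java -XX:+PrintFlagsFinal -version | grep -iE 'InitialHeapSize|MaxHeapSize'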
There is no definite size you can specify on a 64-bit architecture, but a simple test helps you find the maximum contiguous space that can be allocated for a process. It can be tested with a simple command.
Try as below:
java -Xmx<size> -version
If the above command gives the version output, your system allows an -Xmx of that size; if it fails, you can't specify that value.
A few tests from my system:
I tested the values 20G, 40G, 100G, 160G and 300G; all of these gave the java -version output, but 1600G throws an error.
Output of the test
C:\Users\mpalanis>java -Xmx300G -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
C:\Users\mpalanis>java -Xmx1600G -version
Error occurred during initialization of VM
Unable to allocate 52431424KB bitmaps for parallel garbage collection for the requested 1677805568KB heap.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Hope this explanation helps.
If you are using IntelliJ IDEA as your IDE, you can do this directly from it:
From the main menu, select Help | Change Memory Settings
Set the necessary amount of memory that you want to allocate and click Save and Restart.
This changes the value of the -Xmx option used by the JVM and restarts IntelliJ IDEA with the new setting.
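For reference, the same change can also be made by editing the IDE's VM options file via Help | Edit Custom VM Options (a sketch; the values below are illustrative, not recommendations):
-Xms512m
-Xmx3072m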

JVM maximum memory allocation in Windows 7

I am using a computer with the following specification:
OS: Windows 7 Professional
Installed memory (RAM): 8 GB
System Type: 64 bit Operating System
JVM: Java version 8 update 91(jre1.8.0_91) 64 bit version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b15, mixed mode)
For one desktop application I need a large JVM memory allocation. With the above specification I can set the JVM to a maximum of 1.5 GB (1536 MB) using the -Xmx option.
If I increase the value above 1.5 GB I get the following error:
"The JVM could not be started. The main method may have thrown an exception."
Please let me know how I can allocate more memory to the JVM.
The 1.5 GB limit means you are most likely running the 32-bit (x86) version of Java. When this question has come up before, the OP was sure they were using the 64-bit version but on investigation found they were not.
I suggest making sure there is only one version of Java installed, the 64-bit version you want to use.
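A quick way to check which Java resolves on the PATH and whether it is 64-bit (a sketch, using standard Windows commands; a 64-bit HotSpot build mentions "64-Bit Server VM" in its version banner):
where java
java -version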
In my opinion, the best way to diagnose your problem is to use JProfiler: it can detect whether you are having a problem inside your VM and can monitor its status, it is simple to use, and it gives a lot of debugging information. I have seen articles saying that increasing the memory too much can cause an error like this, but I am not sure; it may be worth looking into that as well. Hope it helps.

Full GC does not fully recover memory

Here are the JVM settings for JBoss AS 7 / EAP 6:
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01, mixed mode)
VM Arguments:
-XX:+UseCompressedOops -Dprogram.name=standalone.bat
-XX:-TieredCompilation -XX:+PrintGCDetails -Xloggc:E:\serverLog\jvm.log
-Xms1303M -Xmx1303M -XX:MaxPermSize=256M
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
-Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true
-Djboss.modules.system.pkgs=org.jboss.byteman
-Djboss.server.default.config=standalone.xml
-Dorg.jboss.boot.log.file=E:\JAVA\JBOSS\EAP-6.0.0.GA\jboss-eap-6.0\standalone\log\boot.log
-Dlogging.configuration=file:E:\JAVA\JBOSS\EAP-6.0.0.GA\jboss-eap-6.0\standalone/configuration/logging.properties
I made several heavy-loading pages refresh every 30s. In the GC log I then found increasingly frequent full garbage collections; each full GC releases part of the old generation, but the amount recovered gets smaller and smaller until the collections are just overhead. Here is the JVM log.
Does this indicate a memory leak or a JVM tuning issue, and how can I get each full GC to recover most of the memory?
UPDATE
Thanks everyone for the guidance. After retrieving a heap dump and analyzing it with Eclipse MAT, it looks like all the leaking comes from org.jboss.as.web.deployment.WebInjectionContainer.
here are screenshots of the results
800+ MB memory leak
UPDATE 2
I don't know if it is the same issue, but I tried applying the same changes from another thread. I can see the application using less memory, but the leak is still there: each full GC only recovers a small amount of the tenured generation (and the young generation leaks anyway), so after several full GCs the server is back to spending its time on GC overhead...
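For reference, a heap dump for Eclipse MAT can typically be captured along these lines (a sketch; it assumes the JDK's jmap is on the PATH and <pid> is the JBoss process id):
jmap -dump:live,format=b,file=heap.hprof <pid>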

Why doesn't it look like -Xms and -Xmx are doing anything to Java's memory usage in Windows?

I am running this version of Java:
java version "1.7.0_07"
Java(TM) SE Runtime Environment (build 1.7.0_07-b11)
Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
I am running this on a Windows 2008 R2 64-bit server on Amazon EC2, in an m1.large instance that has 7.5GB of memory.
When I start my Java app, I am using this command line:
java -Xms6G -Xmx6G -server -jar start.jar
My intent is to have Java reserve 6GB for its heap, so that when my application runs, it will be able to load its entire dataset into memory.
However, when I start the app, why do I only see 1.3GB of memory used in Task Manager?
The issue is you're looking at the default memory column shown in Task Manager, which is "Memory - Private Working Set." This doesn't reflect what is actually reserved for use.
In Task Manager, go to the View menu and choose Select Columns. Add the "Memory - Commit Size" column. You should see this column reflects the reserved heap size from Java. In my tests, it shows around 6.6GB committed for a commandline of -Xms6G.
Here is Microsoft's page that explains what each column means.
"Memory - Private Working Set: Subset of working set that specifically describes the amount of memory a process is using that cannot be shared by other processes."
"Memory - Commit Size: Amount of virtual memory that is reserved for use by a process."
