Glassfish PermGen Not Collecting

Today, after many days of running without issue, my GlassFish application started throwing OOM: PermGen errors. After a restart it worked for about an hour and then failed again for the same reason. When I attach JConsole or VisualVM to the instance, I can see that PermGen usage grows and is never collected. If I force a GC, PermGen is collected correctly and drops back to the same level every time. If I don't force one, the collection never happens on its own, and PermGen grows to the maximum and the server crashes. Why would this happen, and why out of the blue?
java version "1.7.0_51"
OpenJDK Runtime Environment (IcedTea 2.4.4) (7u51-2.4.4-0ubuntu0.12.04.2)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
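For reference, a first diagnostic step here (a sketch, not something taken from the question) is to see which class loaders are filling PermGen, and, if switching to the CMS collector is acceptable, to let the JVM unload classes without waiting for a manually forced full GC; the pid placeholder and the MaxPermSize value are only examples:
jmap -permstat <glassfish-pid>
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m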

Related

How to interpret jeprof output

Recently I ran into a Java-related memory leak: free memory on the server keeps decreasing until we finally get the RAM warning we have set up in Nagios. I investigated and found that the leak is not in the heap area, yet the Tomcat process's memory consumption keeps growing.
server memory graph - 7 days
I did a heap memory analysis and found nothing there (if I run jcmd <pid> GC.run, heap usage drops from 2.8GB to around 200MB). heap memory graph - 7 days
I checked the Metaspace and the other JVM memory areas, following the discussion in this video and post:
https://www.youtube.com/watch?t=2483&v=c755fFv1Rnk&feature=youtu.be
https://github.com/jeffgriffith/native-jvm-leaks/blob/master/README.md
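For reference, the non-heap areas managed by the JVM can also be checked with Native Memory Tracking; a sketch assuming standard JDK 8 tooling and that the flag can be added through Tomcat's CATALINA_OPTS (none of this is taken from the post):
export CATALINA_OPTS="$CATALINA_OPTS -XX:NativeMemoryTracking=summary"
jcmd <tomcat-pid> VM.native_memory summary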
Finally, I added jemalloc to profile native memory allocation, and here is some of the output that I got.
output 1
output 2
But I couldn't interpret this output, and I'm not sure whether it is even correct.
I also have doubts about whether jeprof works with the Oracle JDK.
Could you please help me with this?
Additional info:
server memory: 4GB
Xmx: 3072M (recently changed from 2048M; the memory behavior is similar in both cases)
Xms: 3072M (recently changed from 2048M; the memory behavior is similar in both cases)
javac -version: jdk1.8.0_72
java version:
"1.8.0_72"
Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)
jemalloc configs:
jemalloc version: https://github.com/jemalloc/jemalloc/releases/download/5.2.1/jemalloc-5.2.1.tar.bz2
export LD_PRELOAD=/usr/local/lib/libjemalloc.so
export MALLOC_CONF=prof:true,lg_prof_interval:31,lg_prof_sample:17,prof_prefix:/opt/jemalloc/jeprof-output/jeprof
My application runs on a Tomcat server in an EC2 instance (it is the only application running on that server).
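For what it's worth, the dumps written under the prof_prefix above are normally post-processed with the jeprof script that ships with jemalloc; a sketch in which the java binary path and the dump file pattern are assumptions:
jeprof --text `which java` /opt/jemalloc/jeprof-output/jeprof.*.heap
jeprof --show_bytes --gif `which java` /opt/jemalloc/jeprof-output/jeprof.*.heap > profile.gif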

Full GC does not fully recover memory

Here are the JVM settings for JBoss AS 7 / EAP 6:
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01, mixed mode)
VM Arguments: -XX:+UseCompressedOops -Dprogram.name=standalone.bat
-XX:-TieredCompilation -XX:+PrintGCDetails -Xloggc:E:\serverLog\jvm.log
-Xms1303M -Xmx1303M -XX:MaxPermSize=256M
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
-Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true
-Djboss.modules.system.pkgs=org.jboss.byteman
-Djboss.server.default.config=standalone.xml
-Dorg.jboss.boot.log.file=E:\JAVA\JBOSS\EAP-6.0.0.GA\jboss-eap-6.0\standalone\log\boot.log
-Dlogging.configuration=file:E:\JAVA\JBOSS\EAP-6.0.0.GA\jboss-eap-6.0\standalone/configuration/logging.properties
I made several heavy pages refresh every 30 seconds, and then I found in the GC log that full garbage collections gradually became more frequent. Each full GC released part of the old generation, but the amount reclaimed got smaller and smaller, until finally the collections were just overhead. Here is the JVM log.
I wonder whether this indicates a memory leak or is a JVM tuning matter, and how I can get each full GC to recover most of the memory.
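For context, a heap dump like the one mentioned in the update below can be captured with the standard JDK 6 tooling; the pid and file path here are placeholders, not values from the post:
jmap -dump:live,format=b,file=E:\serverLog\heapdump.hprof <jboss-pid>
or, to capture it automatically at the moment of failure, start the JVM with
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=E:\serverLog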
UPDATE
Thanks everyone for the guidance. After retrieving a heap dump and analyzing it with Eclipse MAT, it looks like all of the leaking memory comes from org.jboss.as.web.deployment.WebInjectionContainer.
Here are screenshots of the results:
800+ MB memory leak
UPDATE 2
I don't know if it is the same issue, but I tried to apply the same changes from another thread. The application now uses less memory, but the leak is still there: each full GC recovers only a small amount of the tenured generation (and the young generation leaks anyway), so after several full GCs the server ends up in the same overhead state again...

Java Memory issue while executing sbt package in spark

Can you please suggest a solution for the issue below?
hduser@hduser-VirtualBox:/usr/local/spark1/project$ sbt package
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a8000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 1073741824 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/spark-1.1.0-bin-hadoop1/project/hs_err_pid26824.log
hduser@hduser-VirtualBox:/usr/local/spark1/project$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
Looks like you're trying to run with quite a large Java heap size (1GB). I'd start by reducing that. If you really do need that much, you might be in trouble: it looks as though your machine just doesn't have enough RAM to allocate it for you.
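One way to do that with the standard sbt launcher, which reads SBT_OPTS, is shown below; the values are only examples and assume this JDK 7 setup where PermGen still exists:
export SBT_OPTS="-Xmx512M -XX:MaxPermSize=256M"
sbt package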

'Cannot allocate memory' (errno=12) errors during runtime of a Java application

While running a distributed application, I get a lot of these errors on the server as well as on the worker nodes:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f4f8c800000, 549453824, 2097152, 0) failed; error='Cannot allocate memory' (errno=12)
Most of the time the process continues and finishes as expected but sometimes the process also fails.
I am calling my application with java -Xms512M -Xmx50G -cp myjar.jar myclass.Main
The nodes have 128 GB of RAM, of which about 120 GB is free.
I'm using the Oracle JVM:
$ java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
What do these messages mean and how can I get rid of them?
As Platypus suggested in the comments to my question, I downgraded Java to version 1.7.0_41. Unfortunately the problem persisted.
I went even further back, to version 1.7.0_25, and apparently this solved the error. I have tried it many times since and the error message has not appeared even once.

JVM error when trying to allocate more than 128M Xms without specifying Xmx

I am seeing a JVM issue when running my application; it also reproduces with the simple java commands below:
C:\Users\optitest>I:\j2sdk\bin\java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
C:\Users\optitest>I:\j2sdk\bin\java -Xms4g -version
Error occurred during initialization of VM
Incompatible minimum and maximum heap sizes specified
Even setting Xms to 128M does not work:
C:\Users\optitest>I:\j2sdk\bin\java -Xms128m -version
Error occurred during initialization of VM
Incompatible minimum and maximum heap sizes specified
Works only when Xms is set to 64M or less:
C:\Users\optitest>I:\j2sdk\bin\java -Xms64m -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
The interesting thing is that if I specify Xmx, then it works fine:
C:\Users\optitest>I:\j2sdk\bin\java -Xms4g -Xmx4g -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
C:\Users\optitest>I:\j2sdk\bin\java -Xms4g -Xmx8g -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
Even more interesting: all of the above commands run fine on another machine with the same OS (Windows Server 2008 R2 Enterprise SP1) and the same JDK version. Physical memory is 16GB.
Any idea?
Well the obvious conclusion is that if you want to use -Xms to specify the initial heap size, and you want to set the size to a value that is larger than the default maximum heap size, you need to specify a maximum heap size.
The reason that you are getting different results on different machines is that the JVM computes the default heap size in different ways, depending on the version of Java and on the execution platform. In some cases it is a constant. In others, it depends on the amount of physical memory on the system.
Just set the maximum heap size explicitly and you won't have this problem.
If you want to find out what the default heap size is for a given machine, run this command:
java -XX:+PrintFlagsFinal -version
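The output is long, so it is usually filtered down to just the heap flags, for example with findstr on Windows (grep elsewhere):
C:\Users\optitest>I:\j2sdk\bin\java -XX:+PrintFlagsFinal -version | findstr HeapSize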
The heap size of your machine depends on a lot more than how much RAM you have!
The maximum heap size for a 32-bit or 64-bit JVM looks easy to determine from the addressable memory space: 2^32 (4GB) for a 32-bit JVM and 2^64 for a 64-bit JVM.
In practice you cannot set 4GB as the maximum heap size for a 32-bit JVM using the -Xmx option; you will get a "Could not create the Java virtual machine: Invalid maximum heap size: -Xmx" error.
You can look here for a well-explained document about heap size.
Another important thing: increasing the heap size only postpones an OutOfMemoryError. Unless you clean up your memory, you will hit the exception sooner or later. Use an application like VisualVM to understand what is going on in the background, and I suggest you try to optimise the code to improve performance.
I have this same issue. I'm still debugging it right now, but it appears that it might have something to do with the default MaxHeapSize being set to TOTALRAM/4, which here is 16GB/4 = 4GB = 2^32 and overflows a 32-bit value:
(uint32) 2^32 = 0
I get the following output from -XX:+PrintFlagsFinal:
uintx MaxHeapSize := 0 {product}
And also the output of -XX:+PrintCommandLineFlags confirms the 4G value:
-XX:InitialHeapSize=268428160 -XX:MaxHeapSize=4294850560 -XX:+PrintCommandLineFlags -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:+UseParallelGC
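If that reading is correct, the arithmetic behind the zero (spelled out here as an illustration; it is not in the original answer) is:
16GB / 4 = 4GB = 4294967296 bytes = 2^32
(uint32) 4294967296 = 4294967296 mod 2^32 = 0
which would explain why MaxHeapSize prints as 0 above.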
