JVM code cache exceeds ReservedCodeCacheSize

I have a Java app running in a Docker container on OpenJDK 8 with the flags:
-XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -XX:NativeMemoryTracking=summary
and I've noticed that the Code Cache memory allocation reported by the Native Memory Tracking tool exceeds 240 MB (the default ReservedCodeCacheSize value):
jcmd 1 VM.native_memory summary | grep -i code
- Code (reserved=260013KB, committed=60465KB)
which is ~254 MB of reserved memory. Here are the printed flag and Java version:
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version | grep -i reserved
uintx ReservedCodeCacheSize = 251658240 {pd product}
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
My question is: is this expected behavior? If yes, is it possible to calculate the actual limit for the maximum code cache size?
Thanks!

The Code section in the Native Memory Tracking report accounts for more than just the Code Cache. It includes:
Fixed-size spaces reserved with mmap:
- the Code Cache itself - 240 MB;
- the map of the Code Cache segments - 1/64 of the Code Cache size = 3.75 MB.
Auxiliary VM structures malloc'ed in the native heap:
- code strings, OopMaps, exception handler caches, adapter handler tables, and other structures for maintaining the generated code.
These structures are allocated dynamically; there is no dedicated limit for them, but they usually make up only a small portion of the total generated code (see the malloc= line in the Code section of the NMT report).
Note that reserved memory does not actually consume resources other than the address space. For analyzing the real memory usage, committed is more relevant.
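As a rough sanity check against the numbers above (a sketch; the exact split of the malloc'ed part varies by workload and JDK build):
# fixed mmap'ed part: 240 MB code cache + 1/64 segment map
#   245760 KB + 3840 KB = 249600 KB
# the remaining ~10413 KB of the 260013 KB reported above would be the malloc'ed bookkeeping
jcmd 1 VM.native_memory summary | grep -A 2 -i "code"
The detail lines under Code (malloc=..., mmap: reserved=...) should show this split directly.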

Related

Kubernetes and JVM memory settings

In a Kubernetes cluster with numerous microservices, one of them is used exclusively for a Java Virtual Machine (JVM) that runs a Java 1.8 data processing application.
Until recently, jobs running in that JVM pod consumed less than 1 GB of RAM, so the pod had been set up with 4 GB of maximum memory, without any explicit heap size settings for the JVM.
Some new data now require about 2.5 GB for the entire pod, including the JVM (as reported by the Kubernetes top command, after launching with an increased memory limit of 8 GB), but the pod crashes soon after starting with a limit of 4 GB.
Using a heap size range like -Xms256m -Xmx3072m with a limit of 4 GB does not solve the problem. In fact, the pod now does not even start.
Is there any way to parameterize the JVM for accommodating the 2.5 GB needed, without increasing the 4 GB maximum memory for the pod?
The default "max heap" if you don't specify -Xmx is 1/4 (25%) of the host RAM.
JDK 10 improved support for containers in that it uses the container's RAM limits instead of the underlying host's. As pointed out by David Maze, this has been backported to JDK 8.
Assuming you have a sufficiently recent version of JDK 8, you can use -XX:MaxRAMPercentage to modify the default percentage of total RAM used for the max heap. So instead of specifying -Xmx you can say, e.g., -XX:MaxRAMPercentage=75.0. See also https://medium.com/adorsys/usecontainersupport-to-the-rescue-e77d6cfea712
Here's an example using alpine JDK docker image: https://hub.docker.com/_/openjdk (see section "Make JVM respect CPU and RAM limits" in particular).
# this is running on the host with 2 GB RAM
docker run --mount type=bind,source="$(pwd)",target=/pwd -it openjdk:8
# running with MaxRAMPercentage=50 => half of the available RAM is used as "max heap"
root@c9b0b4d9e85b:/# java -XX:+PrintFlagsFinal -XX:MaxRAMPercentage=50.0 -version | grep -i maxheap
uintx MaxHeapFreeRatio = 100 {manageable}
uintx MaxHeapSize := 1044381696 {product}
openjdk version "1.8.0_265"
OpenJDK Runtime Environment (build 1.8.0_265-b01)
OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)
# running without MaxRAMPercentage => default 25% of RAM is used
root@c9b0b4d9e85b:/# java -XX:+PrintFlagsFinal -version | grep -i maxheap
uintx MaxHeapFreeRatio = 100 {manageable}
uintx MaxHeapSize := 522190848 {product}
openjdk version "1.8.0_265"
In my K8s setup, I am using Consul to manage the pod configuration. Here is a command to override the JVM setting on the fly. It is pretty much project-specific, but it might give you a hint if you are using Consul for configuration.
kubectl -n <namespace> exec -it consul-server -- bash -c "export CONSUL_HTTP_ADDR=https://localhost:8500 && /opt/../home/bin/bootstrap-config --token-file /opt/../config/etc/SecurityCertificateFramework/tokens/consul/default/management.token kv write config/processFlow/jvm/java_option_xmx -Xmx8192m"

OpenJDK 1.8.0_242, MaxRAMFraction setting not reflecting

I am running a Spring Boot application in the Alpine OpenJDK image and facing OutOfMemory issues. The max heap is being capped at 256 MB. I tried updating the MaxRAMFraction setting to 1 but did not see it reflected in the Java process. I have the option to increase the container memory limit to 3000m, but would prefer to use the cgroup memory with MaxRAMFraction=1. Any thoughts?
Java-Version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
bash-5.0$ java -XX:+PrintFlagsFinal -version | grep -Ei "maxheapsize|MaxRAMFraction"
uintx DefaultMaxRAMFraction = 4 {product}
uintx MaxHeapSize := 262144000 {product}
uintx MaxRAMFraction = 4 {product}
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
Container Resource limits
ports:
- containerPort: 8080
  name: 8080tcp02
  protocol: TCP
resources:
  limits:
    cpu: 350m
    memory: 1000Mi
  requests:
    cpu: 50m
    memory: 1000Mi
securityContext:
  capabilities: {}
Container JAVA_OPTS screenshot
Max heap is being capped at 256MB.
You mean via -m in docker? If so, that is not the Java heap you are specifying, but the total memory of the container.
I tried updating the MaxRAMFraction setting to 1
MaxRAMFraction is deprecated and unused; forget about it.
UseCGroupMemoryLimitForHeap is deprecated and will be removed. Use UseContainerSupport instead, which has also been backported to Java 8.
MaxRAM=2g
Do you know what this actually does? It sets the value for the "physical" RAM that the JVM is supposed to think you have.
I assume that you did not set -Xms and -Xmx on purpose here, since you do not know how much memory the container will have? If so, we are in the same boat. We do know that the minimum we are going to get is 1 GB, but I have no idea of the maximum, so I prefer not to set -Xms and -Xmx explicitly.
Instead, we do:
-XX:InitialRAMPercentage=70
-XX:MaxRAMPercentage=70
-XX:+UseContainerSupport
-XX:InitialHeapSize=0
And that's it. What does this do?
InitialRAMPercentage is used to calculate the initial heap size, BUT only when InitialHeapSize/-Xms are missing. MaxRAMPercentage is used to calculate the maximum heap. Do not forget that a Java process needs more than just heap; it needs native structures too. That is why we use 70(%), not 100.
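A minimal sketch of how to verify this inside a container (the openjdk:8 image and the 2 GB limit are illustrative; any 8u191+ build with container support behaves the same way):
docker run -m 2g --rm openjdk:8 java \
  -XX:+UseContainerSupport \
  -XX:InitialRAMPercentage=70.0 \
  -XX:MaxRAMPercentage=70.0 \
  -XX:InitialHeapSize=0 \
  -XX:+PrintFlagsFinal -version | grep -iE "initialheapsize|maxheapsize"
If MaxHeapSize comes out at roughly 70% of the 2 GB limit (about 1.4-1.5 GB), the flags are being honored.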

Maximum heap size for a Java process on Windows 10 64-bit running a 64-bit JVM

What is the maximum heap size for a Java process running on Windows 10 64-bit with a 64-bit JVM? My machine has 8 GB of RAM and I am running Java 8.
I am trying to run BFS on a huge graph for experimental purposes. While running BFS I am monitoring the heap size being used in Java VisualVM. According to VisualVM, heap utilization is always less than 2000 MB regardless of providing the following JVM parameters:
-Xms2048m
-Xmx3072m
-XX:ReservedCodeCacheSize=240m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
I did some research on the internet but could not find any specific answer related to the system specification I am using. Can a Java process use more than 2 GB on Windows 10 64-bit with a 64-bit JVM? According to Guidelines for Java Heap sizing, the limit for Windows XP/2008/7 is 2 GB.
On a 64-bit machine, with 64-bit JVM you can work with multi gigabyte heaps (dozens and dozens of GBs). I'm not sure it's limited in any way except by the available memory (and the theoretical address space of a 64-bit pointer).
Of course if you're working with a huge heap, the GC has a lot more work to do and you may find that you need to scale horizontally instead of vertically, to maintain a good performance.
If VisualVM isn't showing you using more than 2 GB (the initial heap size given with -Xms), then the application probably just doesn't need more than that. You've given it permission to use up to 3 GB (-Xmx), but the JVM won't allocate more memory just for the fun of it.
The maximum heap that can be addressed by a 32-bit JVM is 2^32 = 4 GB, and part of that address space is needed by the VM itself for runtime classes and native structures. The usable limit varies by OS: on Windows it is ~2 GB and on Linux it is ~3 GB.
As you are using a 64-bit machine, the theoretical maximum heap is 2^64, which is more than big enough for you to run BFS easily.
You can check the maximum heap the JVM will use with java -XX:+PrintFlagsFinal -version | grep -iE "HeapSize"; configure slightly less than that and start from there.
There is no definite size you can specify for a 64-bit architecture, but a simple test helps you find the maximum contiguous space that can be allocated for a process. It can be tested with a simple command.
Try this:
java -Xmx<size> -version
If the above command prints the version output, then your system allows an -Xmx of that level; if it fails, you can't specify that value.
A few tests from my system:
I tested with the values 20G, 40G, 100G, 160G and 300G; all of these gave the java -version output, but 1600G throws an error.
Output of the test
C:\Users\mpalanis>java -Xmx300G -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
C:\Users\mpalanis>java -Xmx1600G -version
Error occurred during initialization of VM
Unable to allocate 52431424KB bitmaps for parallel garbage collection for the requested 1677805568KB heap.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Hope this explanation helps.
If you are using IntelliJ IDEA as your IDE, you can do this directly from it:
From the main menu, select Help | Change Memory Settings
Set the necessary amount of memory that you want to allocate and click Save and Restart.
This changes the value of the -Xmx option used by the JVM and restarts IntelliJ IDEA with the new setting.

Why do I encounter an out of memory error on a linux vm but it works on a normal Windows 10?

My command-line Java program works when run on a Windows 10 workstation with a dual-core CPU and 1 GB of RAM allocated.
However, it encounters an out-of-memory error when I run it on an Ubuntu virtual machine with four vCPUs and 12 GB of RAM.
In both cases, -Xms and -Xmx are set to the same value.
This is a strange error.
UPDATE:
1) I can't share the code and not to sound arrogant or anything but I think there is nothing wrong with the code.
2) The error is as follows:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2627), pid=22403, tid=0x00007f52f9f2c700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_101-b13) (build 1.8.0_101-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
Btw, my vm's max memory is 12GB and I used -Xms12160m -Xmx12160m.
As for my workstation, my max memory is 2GB but I used -Xms1024m -Xmx1024m
So it seems to me that on the Linux system you are asking it to reserve the full 12 GB as the initial heap size; this is not really practical, as the JVM obviously needs memory for other things (code, thread stacks, the operating system, etc.).
From other questions on SO it seems clear that the default for the initial heap size is something like 1/64th of the physical RAM.
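A minimal sketch of a more workable setting on the 12 GB VM (the 10g figure and yourapp.jar are placeholders; how much headroom is needed depends on thread count, code cache, and other off-heap usage):
# leave a couple of GB for the JVM's own overhead and the OS
java -Xms10g -Xmx10g -jar yourapp.jar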

java file executed from command line but not from browser(apache)?

I have a Java program which is triggered from a shell script. If I execute the shell script at the command line, it runs the Java program without any issues, but if I execute the shell script from the browser (I have an index.php which runs this shell script on the Linux server), the Java step does not execute. The shell script runs properly if I remove the Java invocation from it.
Below is the error I received when executing from the browser:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fcf589ac000, 2555904, 1) failed; error='Permission denied' (errno=13)
An error report file with more information was saved as /tmp/hs_err_pid306.log. The full report follows:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 2555904 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2726), pid=306, tid=140528680765184
#
# JRE version: (7.0_51-b13) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
Please help me figure out how to fix this problem. I've been stuck on this issue for the last week. :|
Permission problem.
You are probably running this Java program as a different user when it is invoked from the browser.
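A quick way to check which user the script actually runs as from the web server (a sketch; the process names httpd/apache2 are assumptions about your Apache setup):
# show the user that the Apache worker processes run under
ps -eo user,comm | grep -E "httpd|apache2"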
An error report file with more information is saved as: /tmp/hs_err_pid306.log
What does that report say?
The issue you have is with the heap memory: you have not set enough memory to run the application.
The default size of heap space in Java is 128 MB on most 32-bit Sun JVMs, but it varies greatly from JVM to JVM. For example, the default maximum and starting heap sizes for the 32-bit Solaris Operating System (SPARC Platform Edition) are -Xms=3670K and -Xmx=64M, and the default heap size parameters on 64-bit systems are increased by approximately 30%. Also, if you are using the throughput garbage collector in Java 1.5, the default maximum heap size of the JVM would be Physical Memory/4 and the default initial heap size would be Physical Memory/16. Another way to find the default heap size of a JVM is to start an application with default heap parameters and monitor it using JConsole, which is available from JDK 1.5 onwards; on the VM Summary tab you will be able to see the maximum heap size.
By the way, you can increase the size of the Java heap space based on your application's needs, and I always recommend this over relying on the default JVM heap values. If your application is large and creates lots of objects, you can change the size of the heap space using the JVM options -Xms and -Xmx: -Xms denotes the starting size of the heap, while -Xmx denotes the maximum size of the heap. There is another parameter, -Xmn, which denotes the size of the new generation of the Java heap space. The only catch is that you cannot change the size of the heap dynamically; you can only provide the heap size parameters when starting the JVM. I have shared some more useful JVM options related to Java heap space and garbage collection in my post "10 JVM options Java programmers must know", which you may find useful.
Read more: http://javarevisited.blogspot.com/2011/05/java-heap-space-memory-size-jvm.html#ixzz30FsKCqeT
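For example (a sketch; the sizes and the main class name MyApp are illustrative only):
# 256 MB initial heap, 1 GB maximum heap, 128 MB young generation
java -Xms256m -Xmx1024m -Xmn128m MyApp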
If it's Tomcat, you have to set these memory variables in catalina.sh.
E.g., if you are starting the application through the command line:
/bin/java -Xms2048M -Xmx2048M -Djava.util.logging.config.file= ...
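If Tomcat is started via catalina.sh, a common way to pass these settings (a sketch; the heap values are illustrative) is to export CATALINA_OPTS in bin/setenv.sh, which catalina.sh picks up at startup:
# bin/setenv.sh
export CATALINA_OPTS="-Xms2048M -Xmx2048M"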
