I am running a Spring Boot application in the Alpine OpenJDK image and facing OutOfMemory issues. The max heap is being capped at 256 MB. I tried updating the MaxRAMFraction setting to 1, but did not see it reflected in the Java process. I have the option to increase the container memory limit to 3000m, but would prefer to rely on the cgroup memory limit with MaxRAMFraction=1. Any thoughts?
Java version:
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
bash-5.0$ java -XX:+PrintFlagsFinal -version | grep -Ei "maxheapsize|MaxRAMFraction"
uintx DefaultMaxRAMFraction = 4 {product}
uintx MaxHeapSize := 262144000 {product}
uintx MaxRAMFraction = 4 {product}
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
Container resource limits:
ports:
- containerPort: 8080
  name: 8080tcp02
  protocol: TCP
resources:
  limits:
    cpu: 350m
    memory: 1000Mi
  requests:
    cpu: 50m
    memory: 1000Mi
securityContext:
  capabilities: {}
Container JAVA_OPTS screenshot
Max heap is being capped at 256MB.
You mean via -m in docker? If so, that is not the Java heap you are specifying, but the total memory of the container.
I tried updating the MaxRAMFraction setting to 1
MaxRAMFraction is deprecated and unused; forget about it.
UseCGroupMemoryLimitForHeap
is deprecated and will be removed. Use UseContainerSupport, which was backported to Java 8 as well.
MaxRAM=2g
Do you know what this actually does? It sets the value for the "physical" RAM that the JVM is supposed to think you have.
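You can see the effect directly; a quick check using the 2g value from above (exact numbers will vary by JVM build):
java -XX:MaxRAM=2g -XX:+PrintFlagsFinal -version | grep -i maxheapsize
# expect MaxHeapSize ≈ 2 GB / 4 = 512 MB with the Java 8 default fraction of 1/4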
I assume that you did not set -Xms and -Xmx on purpose here, since you do not know how much memory the container will have? If so, we are in the same boat. We know that the minimum we are going to get is 1g, but I have no idea of the maximum, so I prefer not to set -Xms and -Xmx explicitly.
Instead, we do:
-XX:InitialRAMPercentage=70
-XX:MaxRAMPercentage=70
-XX:+UseContainerSupport
-XX:InitialHeapSize=0
And that's it. What does this do?
InitialRAMPercentage is used to calculate the initial heap size, BUT only when InitialHeapSize/Xms are missing. MaxRAMPercentage is used to calculate the maximum heap. Do not forget that a Java process needs more than just heap; it needs native structures too. That is why the 70(%), and not something closer to 100.
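As a quick check, you can verify what the JVM computes inside a memory-limited container (the image tag and the 1g limit are only examples):
docker run --rm -m 1g openjdk:8-jdk-alpine java -XX:+UseContainerSupport -XX:MaxRAMPercentage=70.0 -XX:+PrintFlagsFinal -version | grep -i maxheapsize
# expect MaxHeapSize to be roughly 70% of the 1 GiB limit, i.e. ~750 MB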
Related
I have a java app running in a docker with flags on OpenJDK8:
-XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0 -XX:NativeMemoryTracking=summary
and I've noticed that Code Cache memory allocation reported by Native Memory Tracking tool exceeds 240MB (default ReservedCodeCacheSize value):
jcmd 1 VM.native_memory summary | grep -i code
- Code (reserved=260013KB, committed=60465KB)
which is ~254 MB of reserved memory. Here are the printed flag and Java version:
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version | grep -i reserved
uintx ReservedCodeCacheSize = 251658240 {pd product}
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
My question is: is this expected behavior? If yes, is it possible to calculate the actual limit for the max code cache size?
thanks!
Code in the Native Memory Tracking report accounts for not just the Code Cache, but also a few other things. The report includes:
Fixed size spaces reserved with mmap:
Code Cache - 240 MB;
the map of the Code Cache segments - 1/64 of the Code Cache size = 3.75 MB.
Auxiliary VM structures malloc'ed in the native heap:
code strings, OopMaps, exception handler caches, adapter handler tables, and other structures for maintaining the generated code.
These structures are allocated dynamically; there is no dedicated limit for them, but usually they make up only a small portion of the total generated code (see the malloc= line in the Code section of the NMT report).
Note that reserved memory does not actually consume resources other than the address space. For analyzing the real memory usage, committed is more relevant.
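Putting the numbers above together (approximate; the malloc'ed remainder is inferred, not read directly from the report):
Code Cache (mmap):      245760 KB  (240 MB)
segment map (1/64):       3840 KB  (3.75 MB)
malloc'ed structures:   ~10413 KB  (the remainder)
total reserved:         260013 KB  (matches the jcmd output above)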
In a Kubernetes cluster with numerous microservices, one of them is used exclusively for a Java Virtual Machine (JVM) that runs a Java 1.8 data processing application.
Until recently, jobs running in that JVM pod consumed less than 1 GB of RAM, so the pod had been set up with 4 GB of maximum memory, without any explicit heap size settings for the JVM.
Some new data now require about 2.5 GB for the entire pod, including the JVM (as reported by the kubectl top command, after launching with an increased memory limit of 8 GB), but the pod crashes soon after starting with a limit of 4 GB.
Using a heap size range like -Xms256m -Xmx3072m with a limit of 4 GB does not solve the problem. In fact, now the pod does not even start.
Is there any way to parameterize the JVM for accommodating the 2.5 GB needed, without increasing the 4 GB maximum memory for the pod?
The default "max heap" if you don't specify -Xmx is 1/4 (25%) of the host RAM.
JDK 10 improved support for containers in that it uses the container's RAM limits instead of the underlying host's. As pointed out by @David Maze, this has been backported to JDK 8.
Assuming you have a sufficiently recent version of JDK 8, you can use -XX:MaxRAMPercentage to modify the default percentage of total RAM used for the max heap. So instead of specifying -Xmx you can say, e.g., -XX:MaxRAMPercentage=75.0. See also https://medium.com/adorsys/usecontainersupport-to-the-rescue-e77d6cfea712
Here's an example using alpine JDK docker image: https://hub.docker.com/_/openjdk (see section "Make JVM respect CPU and RAM limits" in particular).
# this is running on the host with 2 GB RAM
docker run --mount type=bind,source="$(pwd)",target=/pwd -it openjdk:8
# running with MaxRAMPercentage=50 => half of the available RAM is used as "max heap"
root@c9b0b4d9e85b:/# java -XX:+PrintFlagsFinal -XX:MaxRAMPercentage=50.0 -version | grep -i maxheap
uintx MaxHeapFreeRatio = 100 {manageable}
uintx MaxHeapSize := 1044381696 {product}
openjdk version "1.8.0_265"
OpenJDK Runtime Environment (build 1.8.0_265-b01)
OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)
# running without MaxRAMPercentage => default 25% of RAM is used
root@c9b0b4d9e85b:/# java -XX:+PrintFlagsFinal -version | grep -i maxheap
uintx MaxHeapFreeRatio = 100 {manageable}
uintx MaxHeapSize := 522190848 {product}
openjdk version "1.8.0_265"
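Applied to the 4 GB pod in the question, a sketch like the following would give a ~3 GB max heap while leaving headroom for non-heap memory (the 75.0 figure is an assumption to tune, and app.jar stands in for your application):
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar
# max heap ≈ 0.75 * 4 GiB ≈ 3 GiB; the remaining ~1 GiB covers metaspace, threads, code cache, etc.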
In my K8s setup, I am using Consul to manage the pod configuration. Here is a command to override the JVM setting on the fly. It is pretty much project-specific, but it might give you a hint if you are using Consul for configuration.
kubectl -n <namespace> exec -it consul-server -- bash -c "export CONSUL_HTTP_ADDR=https://localhost:8500 && /opt/../home/bin/bootstrap-config --token-file /opt/../config/etc/SecurityCertificateFramework/tokens/consul/default/management.token kv write config/processFlow/jvm/java_option_xmx -Xmx8192m"
I'm using an r4.8xlarge on AWS Batch to run Spark. This is already a big machine: 32 vCPUs and 244 GB of RAM. On AWS Batch the process runs inside a Docker container. From multiple sources, I saw that we should run java with the parameters:
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1
Even with these parameters, the process never went over 31 GB of resident memory and 45 GB of virtual memory.
Here are the analyses I did:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 26.67G
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-1~deb9u1-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
second test
docker run -it --rm 650967531325.dkr.ecr.eu-west-1.amazonaws.com/java8_aws java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 26.67G
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-1~deb9u1-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
third test
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=10 -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 11.38G
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-1~deb9u1-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
The system is built with Native Packager as a standalone application. A SparkSession is built as follows, with Cores equal to 31 (32 - 1):
SparkSession
  .builder()
  .appName(applicationName)
  .master(s"local[$Cores]")
  .config("spark.executor.memory", "3g")
  .getOrCreate()
Answer to egorlitvinenko:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0c971993f830 ecs-marcos-BatchIntegration-DedupOrder-3-default-aab7fa93f0a6f1c86800 1946.34% 27.72GiB / 234.4GiB 11.83% 0B / 0B 72.9MB / 160kB 0
a5d6bf5522f6 ecs-agent 0.19% 19.56MiB / 240.1GiB 0.01% 0B / 0B 25.7MB / 930kB 0
More tests, now with the Oracle JDK; the memory never went over 4 GB:
$ 'spark-submit' '--class' 'integration.deduplication.DeduplicationApp' '--master' 'local[31]' '--executor-memory' '3G' '--driver-memory' '3G' '--conf' '-Xmx=150g' '/localName.jar' '--inPath' 's3a://dp-import-marcos-refined/platform-services/order/merged/*/*/*/*' '--outPath' 's3a://dp-import-marcos-refined/platform-services/order/deduplicated' '--jobName' 'DedupOrder' '--skuMappingPath' 's3a://dp-marcos-dwh/redshift/item_code_mapping'
I used the parameters -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 on my Spark job, and it is clearly not using all the memory. How can I go about fixing this issue?
tl;dr Use --driver-memory and --executor-memory when you spark-submit your Spark application, or set the proper memory settings of the JVM that hosts the Spark application.
The memory for the driver is by default 1024M which you can check out using spark-submit:
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
The memory for the executor is by default 1G which you can check out again using spark-submit:
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
With that said, it does not really matter how much memory your execution environment has in total, as a Spark application won't use more than the default 1G for the driver and executors.
Since you use the local master URL, the memory settings of the driver's JVM are already set when you execute your Spark application. It is simply too late to set the memory settings while creating a SparkSession. The single JVM of the Spark application (with the driver and a single executor all running in the same JVM) is already up, so no config can change it.
In other words, the amount of memory a Docker container has makes no difference to how much memory the Spark application uses. They are environments configured independently. Of course, the more memory a Docker container has, the more a process inside could ever have (so they are indeed interconnected).
Use --driver-memory and --executor-memory when you spark-submit your Spark application, or set the proper memory settings of the JVM that hosts the Spark application.
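As a sketch for the job above (the class, master, and jar are taken from the question; 150g mirrors the -Xmx the question attempted and is an assumption to tune):
spark-submit \
  --class integration.deduplication.DeduplicationApp \
  --master 'local[31]' \
  --driver-memory 150g \
  /localName.jar --inPath ... --outPath ... --jobName DedupOrder
# with a local master, everything runs in the driver JVM,
# so --driver-memory is the setting that actually controls the heap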
I am running Tomcat on a RHEL 7 machine with 1 GB of RAM. I have set up both Tomcat and Java to have Xmx=1G, and the statements below support that:
[root@ip-172-31-28-199 bin]# java -XX:+PrintFlagsFinal -version | grep HeapSize
Picked up _JAVA_OPTIONS: -Xmx1g
uintx ErgoHeapSizeLimit = 0 {product}
uintx HeapSizePerGCThread = 87241520 {product}
uintx InitialHeapSize := 16777216 {product}
uintx LargePageHeapSizeThreshold = 134217728 {product}
uintx MaxHeapSize := 1073741824 {product}
openjdk version "1.8.0_161"
and
tomcat 2799 1 1 02:21 ? 00:00:07 /usr/bin/java
-Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.awt.headless=true -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Xmx1024M -Dignore.endorsed.dirs= -classpath /opt/tomcat/bin/bootstrap.jar:/opt/tomcat/bin/tomcat-juli.jar
-Dcatalina.base=/opt/tomcat -Dcatalina.home=/opt/tomcat -Djava.io.tmpdir=/opt/tomcat/temp org.apache.catalina.startup.Bootstrap start
But when I get an exception, I get the following message:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 244043776 bytes for committing reserved memory.
I know Java can never claim 1 GB of memory, as that is the total memory of the machine, but why am I getting an error with this size mentioned?
Try adding -Xms1g too, so it initially allocates all the memory, and you'll find that it cannot even start Tomcat.
If you want to squeeze as much memory into Tomcat as possible (not recommended), slowly reduce both numbers (same value for mx and ms) until Tomcat starts.
That is the absolute maximum you can give Tomcat, but you shouldn't do that. Java may still need more as it runs, and the OS will need more occasionally, so you should give Tomcat less than that absolute maximum.
Now that you've found the number, you can leave -Xms undefined again, if you want to.
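For example (the 768m figure is only an assumption to start the search from; CATALINA_OPTS is the usual place for Tomcat JVM flags):
export CATALINA_OPTS="-Xms768m -Xmx768m"   # same value for both forces the full allocation at startup
/opt/tomcat/bin/startup.sh                 # if this fails, lower both values and retry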
There is an interesting post here that suggests disabling OOPS.
Are you running a physical server or a VM?
I agree a 1 GB server is undersized; you should run Xmx=512M and allow some swappiness (vm.swappiness = 60 is the default, which should be OK for a small Tomcat).
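To check or adjust it (standard sysctl usage; 60 is already the RHEL 7 default):
sysctl vm.swappiness          # print the current value
sysctl -w vm.swappiness=60    # set it for the running system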
I am seeing a JVM issue when running my application, and it reproduces when I simply run the java commands below:
C:\Users\optitest>I:\j2sdk\bin\java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
C:\Users\optitest>I:\j2sdk\bin\java -Xms4g -version
Error occurred during initialization of VM
Incompatible minimum and maximum heap sizes specified
Even setting Xms to 128M does not work:
C:\Users\optitest>I:\j2sdk\bin\java -Xms128m -version
Error occurred during initialization of VM
Incompatible minimum and maximum heap sizes specified
Works only when Xms is set to 64M or less:
C:\Users\optitest>I:\j2sdk\bin\java -Xms64m -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
The interesting thing is that if I specify Xmx, it works well.
C:\Users\optitest>I:\j2sdk\bin\java -Xms4g -Xmx4g -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
C:\Users\optitest>I:\j2sdk\bin\java -Xms4g -Xmx8g -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)
Even more interesting: all the above commands run well on another machine with the same OS (Windows Server 2008 R2 Enterprise SP1) and the same JDK version. Physical memory is 16 GB.
Any idea?
Any idea?
Well the obvious conclusion is that if you want to use -Xms to specify the initial heap size, and you want to set the size to a value that is larger than the default maximum heap size, you need to specify a maximum heap size.
The reason that you are getting different results on different machines is that the JVM computes the default heap size in different ways, depending on the version of Java and on the execution platform. In some cases it is a constant. In others, it depends on the amount of physical memory on the system.
Just set the maximum heap size explicitly and you won't have this problem.
If you want to find out what the default heap size is for a given machine, run this command:
java -XX:+PrintFlagsFinal -version
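With an explicit maximum in place, the failing commands from the question work (sizes mirror the question):
java -Xms4g -Xmx4g -version   # -Xmx >= -Xms, so VM initialization succeeds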
The heap size of your machine depends on a lot more than how much RAM you have!
The maximum heap size for a 32-bit or 64-bit JVM looks easy to determine by looking at the addressable memory space: 2^32 (4 GB) for a 32-bit JVM and 2^64 for a 64-bit JVM.
You cannot really set 4 GB as the maximum heap size for a 32-bit JVM using the -Xmx option. You will get a "Could not create the Java virtual machine: Invalid maximum heap size: -Xmx" error.
You can look here for a well explained document about the heap size.
Another important thing: you can only postpone an OutOfMemory exception by increasing the heap size. Unless you clean up your memory, you will get the exception sooner or later. Use applications like VisualVM to understand what is going on in the background. I suggest you try to optimise the code to improve performance.
I have this same issue. I'm still debugging it right now, but it appears that it might have something to do with the default MaxHeapSize being set to TOTALRAM/4 or (16GB/4 = 4GB = 2^32):
(uint32) 2^32 = 0
I get the following output from -XX:+PrintFlagsFinal:
uintx MaxHeapSize := 0 {product}
And also the output of -XX:+PrintCommandLineFlags confirms the 4G value:
-XX:InitialHeapSize=268428160 -XX:MaxHeapSize=4294850560 -XX:+PrintCommandLineFlags -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:+UseParallelGC
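A quick illustration of the suspected truncation (this is an assumption about the cause, not confirmed against the JVM source):
# 16 GB / 4 = 4 GB = 2^32, which wraps to 0 in a 32-bit unsigned integer
echo $(( (16 * 1024 * 1024 * 1024 / 4) & 0xFFFFFFFF ))
# prints 0 -- matching the MaxHeapSize := 0 seen in PrintFlagsFinal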