Configuration of the Xmx flag - java

When running a Java microservice, you have to make some configuration decisions. The configuration I'm referring to here is how to set the Xmx flag.
I assume that whatever instance type you choose, you have to leave some RAM for the system and other programs.
I have a t2.medium (4 GB) AWS Elastic Beanstalk instance, and I set the Xmx flag of the Java microservice to 3 GB.
Under load, the committed memory reaches a bit above 3.0 GB, around 3.1 GB, even though much of the heap itself is free. The problem is that AWS EB reports 97% RAM usage, which means the remaining memory is being used by the CloudWatch agent, NGINX, Linux itself, and perhaps other things.
If usage reaches or goes beyond 100%, something can be killed by the Linux kernel due to high RAM usage.
Question: what are the best practices for choosing the value of the Xmx flag based on the total available RAM of the machine? Do you decide it by experimenting?
Thank you

There is no formula for tuning the JVM. It is highly dependent on the specific machine, the application, the application server, and end users' usage patterns. It is generally recommended to set Xmx and Xms to the same value. You should try different values and profile the JVM using JProfiler or VisualVM. There is an article on Liferay JVM tuning; hopefully it is useful and gives you some ideas.
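For illustration only, a launch command following the equal-Xms/Xmx advice on the 4 GB instance from the question might look like this (the 2.5 GB value and the jar name are assumptions to be validated by profiling, not a recommendation):

java -Xms2560m -Xmx2560m -jar microservice.jar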

Related

Is -XX:MaxRAMFraction=1 safe for production in a containerized environment?

Java 8/9 brought support for -XX:+UseCGroupMemoryLimitForHeap (together with -XX:+UnlockExperimentalVMOptions). This sets -XX:MaxRAM to the cgroup memory limit. By default, the JVM allocates roughly 25% of the max RAM, because -XX:MaxRAMFraction defaults to 4.
Example:
MaxRAM = 1g
MaxRAMFraction = 4
JVM is allowed to allocate: MaxRAM / MaxRAMFraction = 1g / 4 = 256m
Using only 25% of the quota seems wasteful for a deployment which (usually) consists of a single JVM process. So people now set -XX:MaxRAMFraction=1, so that the JVM is theoretically allowed to use 100% of MaxRAM.
For the 1g example, this often results in heap sizes around 900m. That seems a bit high: there is not a lot of headroom left for the JVM itself or for other things like remote shells or out-of-process tasks.
So is this configuration (-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1) considered safe for prod or even best practice? Or should I still hand pick -Xmx, -Xms, -Xss and so on?
We did some simple testing which showed that setting -XX:MaxRAM=$QUOTA and -XX:MaxRAMFraction=1 results in killed containers under load. The JVM allocates more than 900M heap, which is way too much. -XX:MaxRAMFraction=2 seems safe(ish).
Keep in mind that you may want to leave headroom for other processes like getting a debug shell (docker exec) or diagnostics in the container.
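For illustration, the safer variant mentioned above might be launched like this for a container with a 1g quota, yielding roughly a 512m heap (the jar name is a placeholder):

java -XX:+UnlockExperimentalVMOptions \
     -XX:+UseCGroupMemoryLimitForHeap \
     -XX:MaxRAMFraction=2 \
     -jar app.jar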
Edit: we've written up what we've learned in detail in an article. Money quotes:
TL;DR:
Java memory management and configuration is still complex. Although the JVM can read cgroup memory limits and adapt memory usage accordingly since Java 9/8u131, it's not a silver bullet. You need to know what -XX:+UseCGroupMemoryLimitForHeap does, and you need to fine-tune some parameters for every deployment. Otherwise you risk wasting resources and money, or getting your containers killed at the worst possible time. -XX:MaxRAMFraction=1 is especially dangerous. Java 10+ brings a lot of improvements but still needs manual configuration. To be safe, load test your stuff.
and
The most elegant solution is to upgrade to Java 10+. Java 10 deprecates -XX:+UseCGroupMemoryLimitForHeap (11) and introduces -XX:+UseContainerSupport (12), which supersedes it. It also introduces -XX:MaxRAMPercentage (13), which takes a value between 0 and 100. This allows fine-grained control of the amount of RAM the JVM is allowed to allocate. Since +UseContainerSupport is enabled by default, everything should work out of the box.
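A minimal sketch of the Java 10+ style, assuming a containerized app.jar and an illustrative 75% cap; since +UseContainerSupport is on by default, no unlock flag is needed:

java -XX:MaxRAMPercentage=75.0 -jar app.jar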
Edit #2: we've written a little bit more about -XX:+UseContainerSupport
Java 10 introduced +UseContainerSupport (enabled by default) which makes the JVM use sane defaults in a container environment. This feature is backported to Java 8 since 8u191, potentially allowing a huge percentage of Java deployments in the wild to properly configure their memory.
The recent Oracle JDK 8 release (8u191) brings the following options to allow Docker container users to gain more fine-grained control over the amount of system memory that will be used for the Java heap:
-XX:InitialRAMPercentage
-XX:MaxRAMPercentage
-XX:MinRAMPercentage
These options replace the deprecated Fraction forms (-XX:InitialRAMFraction, -XX:MaxRAMFraction, and -XX:MinRAMFraction).
See https://www.oracle.com/technetwork/java/javase/8u191-relnotes-5032181.html
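For illustration, a hedged sketch of passing these percentage flags to a JDK 8u191+ container (the image name, memory limit, and percentage values are placeholders):

docker run -m 512m --rm my-jdk8u191-image \
    java -XX:InitialRAMPercentage=25.0 -XX:MaxRAMPercentage=75.0 -jar app.jar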

JVM Optimizations for Docker and DC/OS

I'm moving a bare-metal Java application (a JAR on JDK 8) to Docker containers and DC/OS. I am noticing an odd pattern in the containers: we set -Xmx to 32 GB and allocate a 36 GB Docker container. Every few hours or so, the application's old-gen memory allocation will spike and the GC will get stuck in a loop (maxing out the CPU) while it tries to do the heap dump.
Are there any optimizations or tools I can use to see why we are spiking so fast in that 1-5 second interval? Are there any gotchas I might need to be aware of with Docker and the JVM?
We are using the default GC.
Just for future reference:
We are using JDK 8, and it seems Oracle has just recently added some experimental flags for running under Docker. I believe the issue was that when the GC was allocating threads, it wasn't respecting the container's CPU/thread count from the cgroup. The experimental flags seem to have fixed our "off the rails" issue.
https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits
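For reference, a hedged sketch of the experimental JDK 8 flags that post discusses, combining the cgroup memory limit with an explicit GC thread count (the thread value and jar name are assumptions):

java -XX:+UnlockExperimentalVMOptions \
     -XX:+UseCGroupMemoryLimitForHeap \
     -XX:ParallelGCThreads=2 \
     -jar app.jar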
Usually you want to avoid gigantic applications with > 30 GB of memory and instead split your application into smaller parts with lower memory requirements, if you have the option of using a container platform like DC/OS.
In general, about GC and heap size: with big heap sizes, a full GC can take a long time. Personally, I have experienced full-GC freezes of up to a minute or more with heap sizes quite similar to the 30 GB you mention.
About Java in containers: the JVM actually needs more memory than you configure with -Xmx. So if you specify a memory limit of 2 GB for your DC/OS (Marathon) application, you cannot set -Xmx2G, because this memory restriction is a hard limit. If the process inside the container exceeds the limit, the container will be killed. And because the JVM temporarily reserves more memory than configured via -Xmx, this is quite likely to happen. In general, I would suggest using around 75% of the configured memory as the value for -Xmx.
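As a quick worked example of that rule of thumb: with a 2 GB Marathon memory limit, 0.75 × 2048 MB = 1536 MB, so you would start the JVM with something like java -Xmx1536m, leaving roughly 512 MB of headroom for non-heap JVM memory.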
You could have a look at newer JRE versions, which support -XX:+UseCGroupMemoryLimitForHeap. This is a JVM flag to respect cgroup container limits for memory consumption; see https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/ for more information.

Limit total memory consumption of Java process (in Cloud Foundry)

Related to these two questions:
How to set the maximum memory usage for JVM?
What would cause a java process to greatly exceed the Xmx or Xss limit?
I run a Java application on Cloud Foundry and need to make sure that the allocated memory is not exceeded. Otherwise, and this is the current issue, the process is killed by Cloud Foundry's monitoring mechanisms (Linux cgroups).
The Java Buildpack automatically sets sane values for -Xmx and -Xss. By tuning the arguments and configuring the (maximum) number of expected threads, I'm pretty sure that the memory consumed by the Java process should stay below the upper limit which I assigned to my Cloud Foundry application.
However, I still experience Cloud Foundry "out of memory" errors (NOT the Java OOM error!):
index: 3, reason: CRASHED, exit_description: out of memory, exit_status: 255
I experimented with the MALLOC_ARENA_MAX setting. Setting the value to 1 or 2 leads to slow startups. With MALLOC_ARENA_MAX=4 I still saw an error as described above, so this is not a solution to my problem.
Currently I test with very tight memory settings so that the problem is easier to reproduce. However, even with this, I have to wait about 20-25 minutes for the error to occur.
Which arguments and/or environment variables do I have to specify to ensure that my Java process never exceeds a certain memory limit? Crashing with a Java OOM Error is acceptable if the application actually needs more memory.
Further information regarding MALLOC_ARENA_MAX:
https://github.com/cloudfoundry/java-buildpack/pull/160
https://www.infobright.com/index.php/malloc_arena_max/#.VmgdprgrJaQ
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
EDIT: A possible explanation is this: http://www.evanjones.ca/java-bytebuffer-leak.html. Since I currently see the OOM issue when doing lots of outgoing HTTP/REST requests, these buffers might be to blame.
Unfortunately, there is no way to definitively enforce a memory limit on the JVM. Most of the memory regions are configurable (-Xmx, -Xss, -XX:MaxPermSize, -XX:MaxMetaspaceSize, etc.), but the one you can't control is native memory. Native memory contains a whole host of things, from memory-mapped files to native libraries to JNI code. The best you can do is profile your application, find out where the memory growth is occurring, and either fix the growth or give yourself enough breathing room to survive.
Certainly unsatisfying, but in the end not much different from other languages and runtimes that have no control over their memory footprint.
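To make the "configurable regions" point concrete, here is a hedged sketch that caps each tunable region; all values are placeholders, and native memory outside these flags remains uncapped:

java -Xmx512m \
     -Xss256k \
     -XX:MaxMetaspaceSize=128m \
     -XX:MaxDirectMemorySize=64m \
     -jar app.jar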

Can a current Java/Tomcat 6 application take advantage of a 64-bit Windows platform in terms of performance and memory usage?

We have developed an application using Java 6 on 32-bit Windows (dual core, 3 GB RAM).
If we install it on a 64-bit Windows OS, will it perform better because of the resource advantages of 64-bit (same OS, different bitness)? The 64-bit machine has a quad-core processor and more than 4 GB of RAM. Is there any difference in the JVM between 32-bit and 64-bit?
Thank you in advance for your feedback.
Extra info
I am building a Security Information and Event Management (SIEM) system for log management.
It has four important parts:
Collector - collects logs from devices/systems,
Aggregator - aggregates the syslog into metadata for reporting,
Real-Time Monitoring - displays real-time analysis reports/charts and a dashboard that must update every second,
GUI - a Struts 2 app that runs the web GUI, log analytics, backup, and other things.
So far, the most CPU and memory are used by 1. the Collector, 2. Real-Time Monitoring, 3. the Aggregator.
Right now on 32-bit, the collector can receive up to 2000 logs per second; beyond that, it crashes with a heap memory error. So we use Tanuki Software's wrapper to automatically restart the collector service: it monitors memory usage and restarts the process once a heap problem is detected.
Our objective is to increase the events per second from 2000 to the maximum possible by using the advantages of 64-bit.
For GC, we let Java handle it automatically; more important is that we can process more logs per second without problems.
Switching to a 64-bit JVM doesn't guarantee any performance difference. You will, however, see a huge difference in the amount of RAM that can be allocated: on 32-bit Windows, the maximum amount of RAM that could be allocated for the heap maxed out at around 1.6 GB.
If you see a lot of swapping with your application on the 32-bit machine, then switching to the 64-bit machine and adding sufficient RAM is likely to improve your performance. You might also be able to make design choices that favor faster, but more memory hungry algorithms where such choices exist.
As of this writing, you will probably not see a significant difference between running your app on a 32-bit JVM and a 64-bit JVM on the exact same hardware. Eventually, support for 32-bit operating systems and JVMs will probably be discontinued, but that's a different concern than performance.
I strongly recommend you start out by profiling your app first to see where your performance hot spots are.
It's a common misconception that 64-bit automatically means better performance than 32-bit. See e.g. this JVM FAQ and this MS Windows 7 FAQ.
It really depends on the nature of your application and where your performance bottlenecks are.
If you have relatively un-tuned garbage collection, and your application is latency sensitive (i.e. must respond to a user request such as an http request quickly), adding more memory can actually worsen your GC pauses.
Is your application multi-threaded, as most web servers are? If so, going from 2 to 4 cores will very likely help if you don't have significant locking / contention issues.
If you look into GC tuning, you might want to try the parallel GC on the 4-core CPU. This can significantly reduce GC pause times at the cost of some extra overhead. For a latency-sensitive app I worked on, it was definitely worth it.
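A hedged sketch of what enabling the parallel collector might look like on the quad-core machine in question (flags of that JVM generation; the thread count and jar name are assumptions):

java -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 -jar collector.jar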
Please feel free to reply with more info; we could use some context on your app, its workload, in-memory working set, etc.

How to improve the amount of memory used by JBoss?

I have a Java EE application running on jboss-5.0.0.GA. The application uses the BIRT reporting tool to generate several reports.
The server has 4 cores at 2.4 GHz and 8 GB of RAM.
The startup script uses the following options:
-Xms2g -Xmx2g -XX:MaxPermSize=512m
The application has reached some stability with this configuration; some time ago I had a lot of crashes because memory was completely full.
Right now the application is not crashing, but memory is always fully used.
Example of top output:
Mem: 7927100k total, 7874824k used, 52276k free
The java process shows a usage of 2.6g, and it is the only application running on this server.
What can I do to ensure an amount of free memory?
What can I do to try to find a memory leak?
Any other suggestion?
TIA
Based on the answer by mezzie (see below):
I found more information about this:
https://serverfault.com/questions/9442/why-does-red-hat-linux-report-less-free-memory-on-the-system-than-is-actually-ava
http://lwn.net/Articles/329458/
And indeed, half the memory is cached:
        total   used   free  shared  buffers  cached
Mem:     7741   7690     50       0      143    4469
If you are using Linux, what the kernel does with memory is different from how Windows works. On Linux, the kernel will try to use up all the memory; after it has used everything, it recycles memory for further use. This is not a memory leak. We also run JBoss/Tomcat on our Linux servers, and we researched this issue a while back.
I bet those are operating-system memory values, not Java memory values. To vastly oversimplify: Java uses all the memory up to -Xmx and then starts to garbage collect. Use JConsole to see what the real Java memory usage is.
To put it simply, the JVM's maximum memory use is equal to:
MaxPermGen (used permanently while your JVM is running; it contains the class definitions, so it should not grow with the load on your server), plus
Xmx (the maximum size of the object heap, which contains all instances of the objects currently live in the JVM), plus
Xss times the number of threads (thread stack space; the thread count can usually be capped on a server), plus
direct memory space (set by -XX:MaxDirectMemorySize=xxxx).
So do the math (a worked example follows below). If you want to be sure you have free memory left, you will have to limit MaxPermGen, Xmx, and the number of threads allowed on your server.
The risk is that if the load on your server grows, you can get an OutOfMemoryError...
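As a rough worked example of that math, using the question's settings plus an assumed thread count, stack size, and direct-memory cap (the last three values are illustrative):

Heap (-Xmx2g)                           2048 MB
Perm gen (-XX:MaxPermSize=512m)          512 MB
Stacks: 200 threads x 1 MB (-Xss1m)      200 MB
Direct (-XX:MaxDirectMemorySize=64m)      64 MB
Total                                 ~ 2824 MB upper bound for the JVM process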
