How to control the amount of memory used by JBoss?

I have a Java EE application running on jboss-5.0.0.GA. The application uses the BIRT report tool to generate several reports.
The server has 4 cores at 2.4 GHz and 8 GB of RAM.
The startup script uses the following options:
-Xms2g -Xmx2g -XX:MaxPermSize=512m
The application has reached some stability with this configuration; some time ago I had a lot of crashes because the memory was completely full.
Right now the application is not crashing, but memory is always fully used.
Example of top command:
Mem: 7927100k total, 7874824k used, 52276k free
The java process shows a use of 2.6g, and this is the only application running on this server.
What can I do to ensure a certain amount of free memory?
What can I do to try to find a memory leak?
Any other suggestion?
TIA
Based on the answer by mezzie:
"If you are using Linux, what the kernel does with memory is different from how Windows works. Linux will try to use up all the memory; after it has used everything, it will then recycle the memory for further use. This is not a memory leak. We also run JBoss/Tomcat on our Linux servers and we researched this issue a while back."
I found more information about this:
https://serverfault.com/questions/9442/why-does-red-hat-linux-report-less-free-memory-on-the-system-than-is-actually-ava
http://lwn.net/Articles/329458/
And indeed, half of the memory is cached:
             total   used   free  shared  buffers  cached
Mem:          7741   7690     50       0      143    4469
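If you want to see how much of that "used" memory is really just reclaimable page cache, the "-/+ buffers/cache" line of free is the figure to watch (on newer procps versions it is the "available" column instead). A rough sketch using the numbers above:
free -m | grep -i 'buffers/cache'
# -/+ buffers/cache:       3078       4662
# i.e. roughly 4.6 GB can be handed back to applications on demand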

If you are using Linux, what the kernel does with memory is different from how Windows works. Linux will try to use up all the memory; after it has used everything, it will then recycle the memory for further use. This is not a memory leak. We also run JBoss/Tomcat on our Linux servers and we researched this issue a while back.

I bet those are operating system memory values, not Java memory values. Java uses all the memory up to -Xmx and then starts to garbage collect, to vastly oversimplify. Use jconsole to see what the real Java memory usage is.
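If the server has no GUI, jstat gives a similar picture from the shell. A minimal sketch, where <pid> stands for the JBoss process id:
jconsole <pid>             # GUI: heap, permgen and thread usage over time
jstat -gcutil <pid> 5000   # console: per-region heap utilisation, sampled every 5 seconds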

To keep it simple, the JVM's maximum memory use is roughly equal to:
MaxPermGen (used permanently while your JVM is running; it contains the class definitions, so it should not grow with the load on your server)
+ Xmx (the maximum size of the object heap, which contains all instances of the objects currently alive in the JVM)
+ Xss × number of threads (thread stack space, which depends on the number of threads running in your JVM and can usually be capped on a server)
+ direct memory space (set by -XX:MaxDirectMemorySize=xxxx)
So do the math. If you want to be sure you have free memory left, you will have to limit MaxPermGen, Xmx and the number of threads allowed on your server.
The risk is that if the load on your server grows, you can get an OutOfMemoryError...
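As a rough worked example with the settings from the original question (the thread count and the default stack size are assumptions, not measured values):
  Xmx                          = 2048 MB
  MaxPermSize                  =  512 MB
  ~200 threads x 1 MB Xss      ≈  200 MB   (default stack on 64-bit Linux)
  JVM code + native overhead   ≈  100-200 MB
  total                        ≈  2.9 GB
Since top reports resident pages rather than everything reserved, the ~2.6g it shows for the process is in the same ballpark as this budget.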

Related

Windows memory management and java

I'm running a Windows 2016 (x64) server with 32GB RAM. According to Resource Monitor the memory map looks like this:
1MB Reserved, 17376MB In Use, 96MB Modified, 4113MB Standby, 11016MB Free. Summary:
15280MB Available,
4210 MB Cached,
32767MB Total,
32768MB Installed
I have a java (64-bit JVM) service that I want to run on 8GB of memory:
java -Xms8192m -Xmx8192m -XX:MaxMetaspaceSize=128m ...
which results in
Error occurred during initialization of VM
Could not reserve enough space for object heap
I know that a 32-bit OS and a 32-bit JVM would limit the usable heap, but I verified both are 64-bit. I read that on 32-bit Windows / JVM, the heap has to be contiguous. But here I had hoped to be able to allocate even 15GB for the heap, since over 15GB are 'Available' (available to whom / for what?).
Page file size is automatically managed, and currently at 7680MB.
I'd be thankful for an explanation of why Windows refuses to hand out the memory (or why Java cannot make use of it), and what my options are (apart from resizing the host or using only about 4GB, which works but is insufficient for the service).
I have tried rebooting the server, but when it's this service's turn to start, other services have already "worked" the memory quite a bit.
Edit: I noticed that Resource Monitor has a graph called 'Commit Charge' which is over 90%. Task Manager has a 'Committed' line which (today) lists 32.9/40.6 GB. Commit charge explains the term, and yes, I've already seen the virtual-memory popups mentioned there. It seems that, for a reason unknown to me, a very high commit charge has built up and prevents the 8GB Java process from starting. This puts even more emphasis on the question: what does '15 GB Available' memory mean, and to whom is it available, if not to a process?
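One way to look at this from the command line is to compare the commit limit (physical RAM + page file) with what is already committed; a sketch, with wmic reporting the values in KB:
wmic OS get TotalVirtualMemorySize,FreeVirtualMemory,TotalVisibleMemorySize,FreePhysicalMemory /format:list
wmic pagefileset list /format:list
With the figures from the edit (commit charge 32.9 GB against a 40.6 GB limit), less than 8 GB of commit headroom remains, so an -Xms8192m start can fail even while plenty of physical pages sit on the Standby/Free lists, because -Xms commits the whole heap up front. Growing the page file (or giving it a fixed, larger size) or lowering -Xms are the usual ways out.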

JVM Optimizations for Docker and DC/OS

I'm moving a bare-metal Java application (jar, JDK 8) to Docker containers and DC/OS. I am noticing an odd pattern on the containers: we set -Xmx to 32 GB and allocate a 36 GB Docker container. Every few hours or so the application will spike in old-gen memory allocation and the GC will get stuck in a loop (maxing the CPU) while it tries to do the heap dump.
Are there any optimizations or things I can use to see why in that 1-5 second interval we are spiking so fast? Are there any gotchas I might need to be aware of with Docker and JVM?
We are using the default GC.
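One low-cost way to see what happens in those 1-5 second windows is to turn on GC logging; a sketch using standard JDK 8 flags (the log path and jar name are placeholders):
java -Xmx32g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/app/gc.log -jar app.jar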
Just for future reference:
We are using JDK 8, and it seems Oracle has just recently added some experimental flags for running under Docker. I believe the issue was that when the GC was allocating threads, it wasn't respecting the Docker CPU/thread count from the cgroup. The experimental flags seem to have fixed our "off the rails" issue:
https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits
If you have the possibility to use a container platform like DC/OS, you usually want to avoid these gigantic applications with > 30GB of memory and instead split your application into smaller parts with lower memory requirements.
In general, about GC and heap size: with big heap sizes, a full GC can take a long time. Personally I have experienced full GC freezes of up to a minute or more with heap sizes quite similar to the 30GB you mention.
About Java in containers: the JVM actually needs more memory than you configure with -Xmx. So if you specify a memory limit of 2GB for your DC/OS (Marathon) application, you cannot set -Xmx2G, because this memory restriction is a hard limit. If the process inside the container exceeds that limit, the container will be killed. Since the JVM temporarily reserves more memory than configured in -Xmx, this is quite likely to happen. In general I would suggest using around 75% of the configured memory as the value for -Xmx.
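As a concrete sketch of that 75% rule (the image name, jar name and metaspace size are placeholders, not taken from the question):
docker run -m 2g my-app-image \
    java -Xmx1536m -XX:MaxMetaspaceSize=256m -jar app.jar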
You could also have a look at newer JRE versions, which support cgroup memory limits via the experimental -XX:+UseCGroupMemoryLimitForHeap flag. This is a JRE option that makes the JVM respect cgroup container limits for memory consumption; see https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/ for more information.
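On JDK 8u131 and later the flag is still experimental, so it has to be unlocked explicitly; a sketch (app.jar is a placeholder; on JDK 10+, and 8u191+, the equivalent behaviour is on by default via -XX:+UseContainerSupport):
java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
     -XX:MaxRAMFraction=2 -jar app.jar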

How does the JVM incur paging in/out even though I limit the maximum heap space size?

I am a newbie to Java.
I run an application on top of a distributed framework implemented in Java.
The application is a disk- and network-I/O-intensive job.
Each machine has 32 GB of memory. I run 4 workers per machine and assign 7 GB of maximum heap space to each of them, so in total 28 GB of memory is reserved for the JVMs. The remaining 4 GB is reserved for the OS (CentOS 7). There are no other heavy programs running concurrently.
Surprisingly, when I monitor the system resource usage with dstat, a significant amount of paging is occurring.
How can this be possible? I restricted the memory usage of the JVMs!
I appreciate your help, thanks.
The JVM does not page out memory. The operating system does. How and when the OS chooses which pages to evict depends on configuration.
And setting -Xmx only configures the upper limit for the managed heap within the JVM. It does not restrict file mappings, direct memory allocations, native libraries or the page caches kept in memory whenever you do IO.
So you have not really "reserved" 28GB for JVMs, because the OS knows nothing about that and the JVMs know nothing of the other JVMs.
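Two quick ways to see what is actually being swapped and how eager the kernel is to swap; a sketch, where <pid> is one of the worker processes:
grep VmSwap /proc/<pid>/status   # how much of this worker has been swapped out
cat /proc/sys/vm/swappiness      # 60 by default; lower values make the kernel prefer dropping page cache over swapping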

Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container

Scenario:
I have a JVM running in a Docker container. I did some memory analysis using two tools: 1) top 2) Java Native Memory Tracking. The numbers look confusing and I am trying to find what's causing the differences.
Question:
The RSS is reported as 1272MB for the Java process and the total Java memory is reported as 790.55MB. How can I explain where the rest of the memory, 1272 - 790.55 = 481.45 MB, went?
Why I want to keep this issue open even after looking at this question on SO:
I did see the answer and the explanation makes sense. However, after getting output from Java NMT and pmap -x, I am still not able to concretely map which Java memory addresses are actually resident and physically mapped. I need some concrete explanation (with detailed steps) to find what's causing this difference between RSS and the Java total committed memory.
(Screenshots of the top output, Java NMT output, Docker memory stats and graphs omitted.)
I have a Docker container that has been running for more than 48 hours. Now, when I look at a graph which contains:
Total memory given to the docker container = 2 GB
Java Max Heap = 1 GB
Total committed (JVM) = always less than 800 MB
Heap Used (JVM) = always less than 200 MB
Non Heap Used (JVM) = always less than 100 MB.
RSS = around 1.1 GB.
So, what's eating the memory between 1.1 GB (RSS) and 800 MB (Java total committed memory)?
You can find some clues in "Analyzing java memory usage in a Docker container" from Mikhail Krestjaninoff:
(And to be clear, in May 2019, three years later, the situation does improve with OpenJDK 8u212.)
Resident Set Size is the amount of physical memory currently allocated and used by a process (without swapped out pages). It includes the code, data and shared libraries (which are counted in every process which uses them)
Why does docker stats info differ from the ps data?
Answer for the first question is very simple - Docker has a bug (or a feature - depends on your mood): it includes file caches into the total memory usage info. So, we can just avoid this metric and use ps info about RSS.
Well, ok - but why is RSS higher than Xmx?
Theoretically, in case of a java application
RSS = Heap size + MetaSpace + OffHeap size
where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and JVM code itself.
Since JDK 1.8.40 we have Native Memory Tracker!
As you can see, I’ve already added -XX:NativeMemoryTracking=summary property to the JVM, so we can just invoke it from the command line:
docker exec my-app jcmd 1 VM.native_memory summary
(This is what the OP did)
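If you want to see where native memory grows over time rather than a single snapshot, NMT also supports a baseline/diff workflow; a sketch reusing the same container name and pid as above:
docker exec my-app jcmd 1 VM.native_memory baseline
# ...let the application run for a while, then:
docker exec my-app jcmd 1 VM.native_memory summary.diff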
Don't worry about the "Unknown" section - it seems that NMT is an immature tool and can't deal with the CMS GC (this section disappears when you use another GC).
Keep in mind that NMT displays "committed" memory, not "resident" memory (which you get through the ps command). In other words, a memory page can be committed without being counted as resident (until it is directly accessed).
That means that NMT results for non-heap areas (the heap is always preinitialized) might be bigger than RSS values.
(That is where "Why does a JVM report more committed memory than the linux process resident set size?" comes in.)
As a result, despite the fact that we set the jvm heap limit to 256m, our application consumes 367M. The “other” 164M are mostly used for storing class metadata, compiled code, threads and GC data.
The first three points are often constant for an application, so the only thing which increases with the heap size is the GC data.
This dependency is linear, but the "k" coefficient (y = kx + b) is much less than 1.
More generally, this seems to be tracked by issue 15020, which reports a similar issue since Docker 1.7:
I'm running a simple Scala (JVM) application which loads a lot of data into and out of memory.
I set the JVM to an 8G heap (-Xmx8G). I have a machine with 132G of memory, and it can't handle more than 7-8 containers because they grow well past the 8G limit I imposed on the JVM.
(docker stats was reported as misleading earlier, as it apparently includes file caches in the total memory usage info)
docker stats shows that each container itself is using much more memory than the JVM is supposed to be using. For instance:
CONTAINER   CPU %    MEM USAGE/LIMIT       MEM %    NET I/O
dave-1      3.55%    10.61 GB/135.3 GB      7.85%   7.132 MB/959.9 MB
perf-1      3.63%    16.51 GB/135.3 GB     12.21%   30.71 MB/5.115 GB
It almost seems that the JVM is asking the OS for memory, which is allocated within the container, and the JVM is freeing memory as its GC runs, but the container doesn't release the memory back to the main OS. So... memory leak.
Disclaimer: I am not an expert
I had a production incident recently when, under heavy load, pods had a big jump in RSS and Kubernetes killed the pods. There was no OutOfMemoryError exception; Linux just stopped the process in the most hardcore way.
There was a big gap between RSS and the total space reserved by the JVM. Heap memory, native memory, threads: everything looked OK, yet RSS was big.
It turned out to be due to how malloc works internally. malloc takes chunks of memory from a number of arenas; if there are a lot of cores on your machine, malloc tries to adapt and gives every core its own arena to take free memory from, to avoid resource contention. Setting export MALLOC_ARENA_MAX=2 solved the issue (a minimal sketch follows the links below). You can find more about this situation here:
Growing resident memory usage (RSS) of Java Process
https://devcenter.heroku.com/articles/tuning-glibc-memory-behavior
https://www.gnu.org/software/libc/manual/html_node/Malloc-Tunable-Parameters.html
https://github.com/jeffgriffith/native-jvm-leaks
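The sketch mentioned above: the variable has to be visible to the JVM process itself, so in a container the usual place is the entrypoint script (app.jar is a placeholder):
export MALLOC_ARENA_MAX=2
exec java -jar app.jar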
P.S. I don't know why there was a jump in RSS memory. Pods are built on Spring Boot + Kafka.

Java memory usage on Linux

I'm running a handful of Java application servers that are all running the latest versions of Tomcat 6 and Sun's Java 6 on top of CentOS 5.5 Linux. Each server runs multiple instances of Tomcat.
I'm setting the -Xmx450m -XX:MaxPermSize=192m parameters to control how large the heap and permgen will grow. These settings apply to all the Tomcat instances across all of the Java application servers, totaling about 70 Tomcat instances.
Here is the typical memory usage of one of those Tomcat instances, as reported by Psi-probe:
Eden = 13M
Survivor = 1.5M
Perm Gen = 122M
Code Cache = 19M
Old Gen = 390M
Total = 537M
CentOS however is reporting RAM usage for this particular process at 707M (according to RSS) which leaves 170M of RAM unaccounted for.
I am aware that the JVM itself and some of its dependency libraries must be loaded into memory, so I decided to fire up pmap -d to find out their memory footprint.
According to my calculations that accounts for about 17M.
Next there is the Java thread stack, which is 320k per thread on the 32 bit JVM for Linux.
Again, I use Psi-probe to count the number of threads on that particular JVM; the total is 129 threads. So 129 × 320k ≈ 41M.
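As a cross-check, the same thread count can be read without Psi-probe; a sketch, with <pid> being the Tomcat process id:
jstack <pid> | grep -c 'java.lang.Thread.State'   # Java threads in the stack dump
ps -o nlwp= -p <pid>                              # native threads as the kernel counts them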
I've read that NIO uses memory outside of the heap, but we don't use NIO in our applications.
So here I've calculated everything that comes to (my) mind. And I've only accounted for 60M of the "missing" 170M.
What am I missing?
Try using the incremental garbage collector, using the -Xincgc command line option.
It's a little more aggressive on the whole GC effort, and has a special happy little anomaly: it actually hands back some of its unused memory to the OS, unlike the default and other GC choices!
This makes the JVM consume a lot less memory, which is especially good if you're running multiple JVMs on one machine, at the expense of some performance - though you might not notice it. The incremental GC seems to be a little secret, because no one ever brings it up... it's been there for eons (the 90's even).
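With the settings from the question that would look like the following sketch (JAVA_OPTS is the standard hook that Tomcat's catalina.sh picks up; note the incremental collector was deprecated and later removed, so treat this as a Java 6-era option):
export JAVA_OPTS="-Xincgc -Xmx450m -XX:MaxPermSize=192m"
$CATALINA_HOME/bin/catalina.sh start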
Arnar, during JVM initialization the JVM allocates memory (via mmap or malloc) of the size specified by -Xmx and MaxPermSize, so the JVM will allocate 450 + 192 = 642m of heap space for the application at the start of the process. So the Java heap space for the application is not 537m but 642m. If you now redo the calculation with that number, it accounts for your missing memory. Hope it helps.
Java allocates as much virtual memory as it might need up front; however, the resident size will be how much you actually use. Note: many of the libraries and threads have their own overheads, and even though you don't use direct memory yourself, it doesn't mean none of the underlying systems do. E.g. if you use NIO, it will use some direct memory even if you use heap ByteBuffers.
Lastly, 100 MB is worth about £8. It may be that it's not worth spending too much time worrying about it.
Not a direct answer, but, have you also considered hosting multiple sites within the same Tomcat instance? This could save you some memory at the expense of some additional configuration.
Arnar, the JVM also mmap's all jar files in use, which will use NIO and will contribute to the RSS. I don't believe those are accounted for in any of your measurements above. Do you by chance have a significant number of large jar files? If so, the pages used for those could be your missing memory.
