Java high memory usage

I have a problem with a Java app. Yesterday, when I deployed it for a test run, we noticed that our machine started swapping, even though this is not a real monster app, if you know what I mean.
Anyway, I checked the output of top and saw that it eats around 100 MB of memory (RES in top). I tried to profile memory and check if there is a memory leak, but I couldn't find one. There was an unclosed PreparedStatement, which I fixed, but it didn't make much of a difference.
I tried setting the min and max heap size (some said that min heap size is not required), and it didn't make any difference.
This is how I run it now:
#!/bin/sh
$JAVA_HOME/bin/java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9025 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -XX:MaxPermSize=40m -Xmx32M -cp ./jarName.jar uk.co.app.App app.properties
Here is the result of top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16703 root 20 0 316m 109m 6048 S 0.0 20.8 0:14.80 java
What I don't understand is that I configured the max PermSize and max heap size, which add up to 72 MB. That should be enough; the app runs well. So why is it still eating 109 MB of memory, and what is using it up? That is a 37 MB difference, which is quite a high ratio (34%).
I don't think this is a memory leak, because max heap size is set and there is no out of memory error, or anything.
One interesting thing may be that I made a heap dump with VisualVM and then checked it with Eclipse MAT, and it said that there is a possible leak in a classloader.
This is what it says:
The classloader/component "sun.misc.Launcher$AppClassLoader # 0x87efa40" occupies 9,807,664 (64.90%) bytes. The memory is accumulated in one instance of "short[][]" loaded by "".
Keywords: sun.misc.Launcher$AppClassLoader # 0x87efa40
I cannot make much of this, but it may be useful.
Thanks for your help in advance.
EDIT
I found this one; maybe there is nothing I can do...
Tomcat memory consumption is more than heap + permgen space

Java's memory includes:
the heap space
the perm gen space
thread stack areas
shared libraries, including those of the JVM (will be shared)
the direct memory size
the memory mapped file size (will be shared)
There are likely to be others which are for internal use.
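As a small illustration of the non-heap items above (the class name and buffer size are made up), a direct buffer is allocated in native memory, so it never shows up in a heap dump but it does show up in RES:
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) throws InterruptedException {
        // direct buffers live in native memory, outside the Java heap,
        // so they are invisible to heap dumps but still count towards RES/RSS
        ByteBuffer direct = ByteBuffer.allocateDirect(32 * 1024 * 1024); // ~32 MB off-heap
        System.out.println("allocated " + direct.capacity() / (1024 * 1024) + " MB off-heap");
        Thread.sleep(60_000); // keep the process alive so it can be inspected with top
    }
}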
Given that 37 MB of PC memory is worth about 20 cents, I wouldn't worry about it too much. ;)

Did you try using JConsole to profile the application? http://docs.oracle.com/javase/1.5.0/docs/guide/management/jconsole.html
Otherwise you can also use the JProfiler trial version to profile the application:
http://www.ej-technologies.com/products/jprofiler/overview.html?gclid=CKbk1p-Ym7ACFQ176wodgzy4YA
However, the first step in checking high memory usage should be to check whether you are using collections of objects in your application (array, map, set, list, etc.). If so, check whether they keep references to objects even though those objects are no longer used.
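For example, a cache-like collection that is only ever added to keeps every object reachable, so the GC can never reclaim them; a minimal sketch (the class and method names are made up):
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // a static collection keeps every entry reachable for the life of the application
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // added but never removed, so memory usage only grows
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100_000; i++) {
            handleRequest();
        }
        System.out.println("entries still referenced: " + CACHE.size());
    }
}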

Related

Java process takes much more RAM than heap size

I have a Java program that has been running for days; it processes incoming messages and forwards them out.
A problem I noticed today is that the heap size I printed via Runtime.totalMemory() shows only ~200M, but the RES column in top shows it is occupying 1.2g of RAM.
The program is not using direct byte buffer.
How can I find out why the JVM is taking this much extra RAM?
Some other info:
I am using openjdk-1.8.0
I did not set any JVM options to limit the heap size, the startup command is simply: java -jar my.jar
I tried a heap dump using jcmd; the dump file size is only about 15M.
I tried pmap, but there seemed to be too much info printed and I don't know which parts of it are useful.
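For reference, the kind of heap figures mentioned above can be printed with something like the following minimal sketch (the class name is just illustrative); note that Runtime.totalMemory() reports only the heap the JVM has committed, not the whole process RSS:
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // totalMemory() is the heap currently committed by the JVM, not the process RSS
        System.out.println("heap committed: " + rt.totalMemory() / mb + " MB");
        System.out.println("heap used:      " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
        System.out.println("heap max:       " + rt.maxMemory() / mb + " MB");
    }
}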
The Java Native Memory Tracking tool is very helpful in situations like this. You enable it by starting the JVM with the flag -XX:NativeMemoryTracking=summary.
Then when your process is running you can get the stats by executing the following command:
jcmd [pid] VM.native_memory
This will produce a detailed output listing, e.g., the heap size and metaspace size, as well as directly allocated (off-heap) memory.
You can also use this tool to create a baseline to monitor allocations over time.
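For example, assuming the same [pid], you can record a baseline and later print a diff against it (both are standard VM.native_memory subcommands; the exact section names in the output vary by JDK version):
jcmd [pid] VM.native_memory baseline
jcmd [pid] VM.native_memory summary.diff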
As you will be able to see using this tool, the JVM reserves by default about 1GB for the metaspace, even though just a fraction may be used. But this may account for the RSS usage you are seeing.
One thing to check: if your heap is not taking much memory, then use a profiler tool to see how much is taken by your non-heap memory. If that amount is high and does not come down even after a GC cycle, then you should probably be looking for a (non-heap) memory leak.
If the non-heap memory is not taking much and everything looks good when you inspect the memory with profiling tools, then I guess it is the JVM which is holding on to the memory rather than releasing it.
So you had better check, using a profiling tool, whether your GC has run at all, and when a GC is forced, whether the memory comes down, expands, or stays the same.
JVM memory and heap memory have two different behaviors, and the JVM may decide that it should expand or shrink the heap after a GC cycle based on:
-XX:MinHeapFreeRatio=
-XX:MaxHeapFreeRatio=
The basic concept is that after a GC cycle the JVM measures free and used memory and expands or shrinks the heap based on the values of the above flags. By default they are set to 40 and 70, which you may be interested in tuning. This is especially critical in a containerized environment.
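As an illustration only (the values and jar name are placeholders, and whether memory is actually returned to the OS depends on the collector), a start command that tells the JVM to keep less free heap, and therefore shrink sooner after a GC cycle, could look like this:
java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar my.jar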
You can use VisualVM to monitor what is happening inside your JVM. You can also use JConsole for a quick overview; it comes with the JDK itself. If your JDK is set up in your environment variables (on the PATH), then start it from a terminal with jconsole. Then select your application and start monitoring.

Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container

Scenario:
I have a JVM running in a docker container. I did some memory analysis using two tools: 1) top 2) Java Native Memory Tracking. The numbers look confusing and I am trying to find what's causing the differences.
Question:
The RSS is reported as 1272 MB for the Java process and the total Java memory is reported as 790.55 MB. How can I explain where the rest of the memory, 1272 - 790.55 = 481.45 MB, went?
Why I want to keep this issue open even after looking at this question on SO:
I did see the answer and the explanation makes sense. However, after getting output from Java NMT and pmap -x, I am still not able to concretely map which Java memory addresses are actually resident and physically mapped. I need some concrete explanation (with detailed steps) to find what's causing this difference between RSS and Java total committed memory.
Top Output
Java NMT
Docker memory stats
Graphs
I have a docker container running for more than 48 hours. Now, when I see a graph which contains:
Total memory given to the docker container = 2 GB
Java Max Heap = 1 GB
Total committed (JVM) = always less than 800 MB
Heap Used (JVM) = always less than 200 MB
Non Heap Used (JVM) = always less than 100 MB.
RSS = around 1.1 GB.
So, whats eating the memory between 1.1 GB (RSS) and 800 MB (Java Total committed memory)?
You have some clue in "Analyzing java memory usage in a Docker container" from Mikhail Krestjaninoff:
(And to be clear, in May 2019, three years later, the situation does improve with OpenJDK 8u212.)
Resident Set Size is the amount of physical memory currently allocated and used by a process (without swapped out pages). It includes the code, data and shared libraries (which are counted in every process which uses them)
Why does docker stats info differ from the ps data?
The answer to the first question is very simple: Docker has a bug (or a feature, depending on your mood): it includes file caches in the total memory usage info. So we can just avoid this metric and use the ps info about RSS.
Well, ok - but why is RSS higher than Xmx?
Theoretically, in the case of a Java application:
RSS = Heap size + MetaSpace + OffHeap size
where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and JVM code itself.
Since JDK 1.8.40 we have Native Memory Tracker!
As you can see, I’ve already added -XX:NativeMemoryTracking=summary property to the JVM, so we can just invoke it from the command line:
docker exec my-app jcmd 1 VM.native_memory summary
(This is what the OP did)
Don’t worry about the “Unknown” section - it seems that NMT is an immature tool and can’t deal with CMS GC (this section disappears when you use another GC).
Keep in mind that NMT displays “committed” memory, not "resident" (which you get through the ps command). In other words, a memory page can be committed without being considered resident (until it is directly accessed).
That means that NMT results for non-heap areas (heap is always preinitialized) might be bigger than RSS values.
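Thread stacks are an easy way to see this in practice. In the sketch below (illustration only; the thread count is arbitrary), NMT on JDK 8 typically reports each whole stack as committed, while Linux only makes the few pages each idle thread has actually touched resident, so RSS stays well below the committed figure:
public class CommittedVsResident {
    public static void main(String[] args) throws InterruptedException {
        // each stack counts as committed as far as NMT is concerned (often 1 MB with the
        // default -Xss), but only the pages a thread actually touches become resident
        for (int i = 0; i < 200; i++) {
            new Thread(() -> {
                try {
                    Thread.sleep(300_000);
                } catch (InterruptedException ignored) {
                }
            }).start();
        }
        Thread.sleep(300_000); // compare the NMT "Thread" section with RES in top while this sleeps
    }
}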
(that is where "Why does a JVM report more committed memory than the linux process resident set size?" comes in)
As a result, despite the fact that we set the jvm heap limit to 256m, our application consumes 367M. The “other” 164M are mostly used for storing class metadata, compiled code, threads and GC data.
The first three points are often constants for an application, so the only thing which increases with the heap size is the GC data.
This dependency is linear, but the “k” coefficient (y = kx + b) is much less than 1.
More generally, this seems to be tracked in issue 15020, which reports a similar issue since docker 1.7.
I'm running a simple Scala (JVM) application which loads a lot of data into and out of memory.
I set the JVM to 8G heap (-Xmx8G). I have a machine with 132G memory, and it can't handle more than 7-8 containers because they grow well past the 8G limit I imposed on the JVM.
(docker stat was reported as misleading before, as it apparently includes file caches into the total memory usage info)
docker stat shows that each container itself is using much more memory than the JVM is supposed to be using. For instance:
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
dave-1 3.55% 10.61 GB/135.3 GB 7.85% 7.132 MB/959.9 MB
perf-1 3.63% 16.51 GB/135.3 GB 12.21% 30.71 MB/5.115 GB
It almost seems that the JVM is asking the OS for memory, which is allocated within the container, and the JVM is freeing memory as its GC runs, but the container doesn't release the memory back to the main OS. So... memory leak.
Disclaimer: I am not an expert
I had a production incident recently when under heavy load, pods had a big jump in RSS and Kubernetes killed the pods. There was no OOM error exception, but Linux stopped the process in the most hardcore way.
There was a big gap between RSS and total reserved space by JVM. Heap memory, native memory, threads, everything looked ok, however RSS was big.
It turned out to be due to how malloc works internally. There are big gaps in the memory where malloc takes chunks of memory from. If there are a lot of cores on your machine, malloc tries to adapt and give every core its own arena to take free memory from, in order to avoid resource contention. Setting export MALLOC_ARENA_MAX=2 solved the issue. You can find more about this situation here:
Growing resident memory usage (RSS) of Java Process
https://devcenter.heroku.com/articles/tuning-glibc-memory-behavior
https://www.gnu.org/software/libc/manual/html_node/Malloc-Tunable-Parameters.html
https://github.com/jeffgriffith/native-jvm-leaks
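If you want to try the same workaround, the variable can be exported in the launch script before the JVM starts; a sketch modeled on a typical startup script (the jar name is a placeholder):
#!/bin/sh
# limit the number of glibc malloc arenas to reduce native memory fragmentation
export MALLOC_ARENA_MAX=2
exec java -jar my.jar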
P.S. I don't know why there was a jump in RSS memory. Pods are built on Spring Boot + Kafka.

Swap memory continuously increasing

I have a web application running in glassfish in RHEL. For the application, these are set:
Heap Memory: 4 GB
Perm Gen: 1 GB
JConsole shows:
heap memory - 500mb
non heap memory - 350mb
threads =378
Top shows:
PID User PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17948 root 20 0 12.8g 1.9g 22m S 1.5 16.0 14:09.11 java
From the start itself, the process is consuming 12.8 GB of virtual memory.
Top also shows:
Mem: 12251392k total, 11915584k used, 335808k free, 47104k buffers
Swap: 8322944k total, 6747456k used, 1575488k free, 177088k cached
The problem is that swap space is continuously increasing. When no swap space is left, the web application stops responding.
Killing the process does not reduce the used swap space; it only goes down after a computer reboot. Why?
Why is the process consuming 12.8 GB of virtual space when started?
How to approach to resolve this issue?
Update:
The JConsole output (recorded for 24 hours) shows that heap memory and non-heap memory didn't increase much, even though the swap space reduced by 1.5 GB in the same period:
Jconsole output
You could have a look at these answers to get an impression of the meaning of top's output. You can use a script to roughly report what uses your swap space.
To my knowledge, Linux's swap system is not that straightforward. The kernel first swaps out inactive memory, probably other applications' memory, to give GlassFish enough resources. This will not be instantly swapped back in when GlassFish is terminated. You could try swapoff -a to force Linux to swap things back in, but remember to re-enable it via swapon -a.
The VIRT space, according to top's manpage:
The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out and pages that have been mapped but not used.
I doubt that the OS's reports on memory usage are good enough to debug your Java application. You should have a look into your JVM's memory with tools like JVisualVM (part of Oracle's JDK). Observe the progress of memory usage over a relevant period of time.
Further, you can try to analyze a heap dump with a tool like the Eclipse Memory Analyzer (MAT). MAT has some nice reports that can help you find memory leaks. If your application's memory usage constantly grows, it seems to have a leak; otherwise it simply does not have enough memory available.
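A heap dump for MAT can be taken from the running JVM with jmap, for example (the pid and output path are placeholders):
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>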

Java "Out of memory error" - heap/system - where to look?

In my Tomcat application I am eventually getting "Out of memory" and "Cannot allocate memory" errors. I suppose it has nothing to do with the heap, as it completely fills up the system memory and I am hardly able to run bash commands.
How is this problem connected to the heap? How can I correctly set up the heap size so that the application has enough memory and does not consume too much of the system's resources?
The strange thing is that the "top" command keeps saying that Tomcat consumes only 20% of mem and there is still free memory once the problem happens.
Thanks.
EDIT:
Follow-up:
BufferedImage leaks - are there any alternatives?
Problems with running bash scripts may indicate I/O issues, and this might be the case if your JVM is doing full GCs all the time (which happens if your heap is almost full).
The first thing to do is to increase the heap with -Xmx. This may solve the problem; or, if you have a memory leak, it won't, and you will eventually get an OutOfMemoryError again.
In this case, you need to analyze memory dumps. See my answer in this thread for some instructions.
Also, it might be useful to enable garbage collection logs (using -Xloggc:/path/to/log.file -XX:+PrintGCDetails) and then analyze them with GCViewer or HPJmeter.
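For example, a launch command combining these flags could look like the following (paths and jar name are placeholders; these flags apply to JDK 8 and earlier, before unified GC logging):
java -Xmx1024m -Xloggc:/var/log/myapp/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar myapp.jar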
You can set the JVM heap size by specifying the option
-Xmx1024m //for 1024 MB
Refer to this for setting the option for Tomcat.
If you have 4 GB of RAM then you can allocate 3 GB to the heap:
-Xmx3g
You can also change the available perm gen size by using the following options:
-XX:PermSize=128m
-XX:MaxPermSize=256m
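For Tomcat specifically, these options are usually placed in CATALINA_OPTS, e.g. in bin/setenv.sh (the values here are only examples, not recommendations):
export CATALINA_OPTS="-Xms512m -Xmx3g -XX:PermSize=128m -XX:MaxPermSize=256m"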

Estimating maximum safe JVM heap size in 64-bit Java

In the course of profiling a 64-bit Java app that's having some issues, I notice that the profiler itself (YourKit) is using truly colossal amounts of memory. What I've got in the YourKit launch script is:
JAVA_HEAP_LIMIT="-Xmx3072m -XX:PermSize=256m -XX:MaxPermSize=768m"
Naively, assuming some overhead, this would lead me to guess that YourKit is going to use a max of something maybe a bit over four GB. However, what I actually see in ps is:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
dmoles 31379 4.4 68.2 14440032 8321396 ? Sl 11:47 10:42 java -Xmx3072m -XX:PermSize=256m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -Dyjp.probe.table.length.limit=20000 -Xbootclasspath/a:/home/dmoles/Applications/yjp-9.5.6/bin/../lib/tools.jar -jar /home/dmoles/Applications/yjp-9.5.6/bin/../lib/yjp.jar
That's a virtual size of nearly 14 GB and a resident size of nearly 8 GB -- nearly 3x the Java heap.
Now, I've got enough memory on my dev box to run this, but going back to the original memory problem I'm trying to diagnose: How do I know how much Java heap I have to play with?
Clearly, if the customer has, say, 16 GB physical RAM, it's not a great idea for me to tell them to set -Xmx to 16 GB.
So what is a reasonable number? 12 GB? 8 GB?
And how do I estimate it?
Clearly, if the customer has, say, 16 GB physical RAM, it's not a great idea for me to tell them to set -Xmx to 16 GB.
If the customer was running nothing else significant on his/her machine, then setting the heap size to 16G isn't necessarily a bad idea. It depends on what the application is doing.
So what is a reasonable number? 12 GB? 8 GB?
The ideal number would be to have "JVM max heap + JVM non-heap overheads + OS + other active applications' working sets + buffer cache working set" add up to the amount of physical memory. But the problem is that none of those components (apart from the max heap size) can be pinned down without detailed measurements on the customer's machine ... while the application is running on the actual problem.
And how do I estimate it?
The bottom line is that you can't. The best you can do is to guess ... and be conservative.
An alternative approach is to estimate how much heap the application actually needs for the problem it is trying to solve. Then add an extra 50 or 100 percent to give the GC room to work efficiently. (And then tune ...)
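For example, if profiling showed the live data set peaking at around 4 GB (a purely illustrative figure), a heap of roughly 6 to 8 GB (-Xmx6g to -Xmx8g) would provide that 50 to 100 percent of GC headroom, while still leaving room for the JVM's non-heap overheads, the OS and other processes on a 16 GB machine.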
