JVM Memory usage pattern issue - java

I recently configured a JBoss application with an application monitoring tool (StatsD) that captures the JVM utilization of the application. Even with no users using the application, the memory usage climbs to around 90-95% (850-970 MB) of the allocated memory (1024 MB).
A minor GC runs every time the memory reaches 90-95%. Please see the screenshot below.
Could you help me understand what the reason(s) for such a memory pattern might be?
*No batch jobs or background processes are running.

This just looks like normal behavior to me. The heap space used rises gradually to a point where a GC runs. Then the GC reclaims a lot of free space and the heap space used drops steeply. Then repeat.
It looks like you have stats from two separate JVMs in the same graph, but I guess you knew that. (You have obscured the labels on the graph that could explain that.)
The only other thing I can glean from this is that the memory allocation rate must be on the high side to cause the GC to run that frequently. It may be advisable to do some GC tuning, but I would only advise that if application-level performance were actually suffering. (And it may well be that the real problem is application efficiency rather than GC performance.)
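If you want to put a rough number on the allocation rate before deciding whether tuning is worth it, a small sketch along the following lines can help. It assumes a HotSpot JVM (it uses the HotSpot-specific com.sun.management.ThreadMXBean), and the class name and 10-second sample window are just illustrative:

    import com.sun.management.ThreadMXBean;
    import java.lang.management.ManagementFactory;

    // Rough allocation-rate estimate: sum the bytes allocated by all live threads
    // between two samples. Allocations by threads that die in between are missed.
    public class AllocationRate {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean threads = (ThreadMXBean) ManagementFactory.getThreadMXBean();
            long before = totalAllocated(threads);
            Thread.sleep(10_000);                                    // arbitrary 10 s window
            long after = totalAllocated(threads);
            System.out.printf("~%d MB/s allocated%n", ((after - before) / 10) >> 20);
        }

        private static long totalAllocated(ThreadMXBean threads) {
            long sum = 0;
            for (long id : threads.getAllThreadIds()) {
                long bytes = threads.getThreadAllocatedBytes(id);    // -1 if the thread is gone
                if (bytes > 0) {
                    sum += bytes;
                }
            }
            return sum;
        }
    }

A sustained rate of a few hundred MB/s against a 1 GB heap would be enough to explain minor GCs every few seconds.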
Then:
But I have run into an issue here too: the heap dump file is said to be ~1 GB, but when I load it into Eclipse MAT, it only shows a dump of 11 MB. Most of the heavy objects appear under the "unreachable objects" section of MAT. Please let me know why a 1 GB dump shows only 11 MB in MAT, if you have any idea or have used MAT for analysis.
That is also easy to explain. The "unreachable objects" are garbage. You must have run the heap dump tool at an instant when the heap usage was close to one of the peaks, so most of the heap was garbage that had not yet been collected; by default MAT discards unreachable objects when it parses the dump, which leaves only the small live set.
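If you want only the live objects in the dump in the first place, you can ask HotSpot for a "live" dump, which forces a full GC before the file is written. A minimal sketch using the HotSpotDiagnosticMXBean (the file name is just an example, and the target file must not already exist):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class LiveHeapDump {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live = true: unreachable (garbage) objects are dropped from the dump.
            diag.dumpHeap("live-heap.hprof", true);
        }
    }

jmap -dump:live,... does the same thing from the command line.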
Stepping back, it is not clear to me what you are actually looking for here:
If you are just curious to understand what the monitoring shows, this is what a normally behaving JVM looks like.
If you are trying to investigate a performance problem (GC pauses, etc) you need to look at the other evidence.
If you are looking for evidence of a memory leak, you are looking in the wrong place. These graphs won't help with that. You need to look at the JVM's behavior over the long term. Look for things like long-term trends in the "sawtooth", such as the level of the bottoms of the troughs trending upwards. And to investigate a suspected memory leak you need to compare MAT analyses for dumps taken over time.
But bear in mind, increasing memory usage over time is not necessarily a memory leak. It could be the application or a library caching things. A properly implemented cache will release objects if the JVM starts running out of memory.
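As an illustration of what "properly implemented" can mean here, a minimal (hypothetical) cache sketch that holds its values through SoftReferences, so the GC is free to reclaim entries when memory gets tight:

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

        public void put(K key, V value) {
            map.put(key, new SoftReference<>(value));
        }

        public V get(K key) {
            SoftReference<V> ref = map.get(key);
            if (ref == null) {
                return null;
            }
            V value = ref.get();
            if (value == null) {
                map.remove(key, ref);   // entry was cleared by the GC; drop the stale reference
            }
            return value;
        }
    }

Real caching libraries are more sophisticated than this, but the effect is the same: rising heap usage from such a cache flattens out or reverses under memory pressure, unlike a genuine leak.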

Related

Make ZGC run often

ZGC does not run often enough. GC logs show that it runs once every 2-3 minutes for my application, and because of this my memory usage goes high between GC cycles (as high as 90%). After a GC, it drops to as low as 20%.
How can I increase the GC frequency so it runs more often?
-XX:ZCollectionInterval=N - sets the maximum gap between collections to N seconds.
-XX:ZUncommitDelay=M - sets the delay until unused memory is returned to the OS to M seconds.
Before tuning the GC, I would recommend investigating why this is happening. You might have some issue/bug in your application.
[Some notes about GC]
-XX:ZUncommitDelay=M (check whether uncommit is supported by your Linux kernel)
-XX:+ZProactive: Enables proactive GC cycles when using ZGC. By default, this option is enabled. ZGC will start a proactive GC cycle if doing so is expected to have minimal impact on the running application. This is useful if the application is mostly idle or allocates very few objects, but you still want to keep the heap size down and allow reference processing to happen even when there is a lot of free space on the heap.
More details about ZGC configuration options can be found at:
ZGC Home Page.
Oracle Documentation
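Before tuning, it can also help to confirm from inside the JVM how often ZGC actually cycles. A minimal sketch using the standard GarbageCollectorMXBean API (the ZGC bean names, such as "ZGC Cycles" and "ZGC Pauses", vary a little between JDK versions):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcFrequency {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(60_000);   // print once a minute
            }
        }
    }

If the collection count only increases every 2-3 minutes while the heap climbs towards 90%, that matches the behaviour described in the question, and the flags above are the right knobs to experiment with.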
Presently (as of JDK 17), ZGC's primary strategy is to wait until the last possible moment before the heap fills up and then do a collection. Its goals are:
Avoid unnecessary CPU load by running GC only when it's necessary.
Start the GC early enough so that it will finish before the heap actually fills up (since the heap filling up would be bad, leading to a temporary application stall).
It does this by measuring how fast your app is allocating memory and how long the GC takes to run, and then predicting at what point it should start the GC. You can find the exact algorithm in the source code.
ZGC also exposes some knobs for running GC more often (i.e., proactively), but honestly I don't find them terribly effective. You can find more info in my other answer. G1 does a better job of being proactive, but whether that's good or not depends on your use case. (It sounds like you care more about throughput than memory usage, so I think you should prefer ZGC's behavior.)
However, if you find that ZGC is making mistakes in predicting when the heap will fill up and that your application really is hitting stalls, please share that info here or on the ZGC mailing list.

JConsole heap dump much smaller than memory usage

We have a few containers running Java processes with Docker. One thing we've been noticing is the huge amount of memory that is taken up just by running a simple Spring Boot app, without even including our own code (just to try and get some kind of memory profile independent of any issues we might introduce).
What I saw was that the memory consumed by Docker/the JVM was hovering around 2.5 GB. We did have a decent number of extra dependencies included (Camel, Hibernate, some Spring Boot deps), but that wasn't what really threw me off. What threw me off was that, despite Docker saying the app consumed 2.5 GB of memory, running JConsole against it showed it consuming up to 1 GB (down to ~200 MB after a GC and slowly climbing). The memory footprint reported by Docker remained where it was after the GC as well (2.5 GB).
Furthermore, when I dumped the heap to see what kinds of objects are taking up that space, it looked like the heap was only 33 MB after I loaded the .hprof file into MAT. None of this makes much sense to me. Currently, I'm looking at the non-heap space in JConsole reported at 115 MB while the heap space is at 331 MB.
I've already read a ton (on SO and other sites) about JVM memory regions, and some posts specifically mention that heap dumps can be smaller than the in-use heap, but none of them were this far off as far as I could tell. Beyond that, many of the things suggested to watch out for were that a GC is run whenever a heap dump is taken, and that MAT has a setting to show or hide unreachable objects. All of this was taken into account before posting here, and I still feel like something else is at play that I can't capture myself and haven't found online.
I fully expect that the numbers might be a little off, but it seems extreme that they're off by a factor of 10 in the best-case scenario and by nearly a factor of 100 when looking at the Docker-reported memory usage.
Does anyone know what I might be missing here?
EDIT: This is also an app running with Java 8, not yet running with Java 11. It's on the JIRA board to do but not yet planned for.
EDIT2: Adding screenshots. The spike in the JConsole screenshot is from running a GC.
JConsole gives you the amount of committed memory: 3311616 KiB ~= 3GiB
This is how much memory your java process consumes, as seen by the OS.
It is unrelated to how much heap is currently in use to hold Java objects, also reported by JConsole as 130237 kbyte ~= 130 MiB.
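You can read the same figures from inside the process with the standard MemoryMXBean; a minimal sketch (the 5-second interval is arbitrary):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapWatch {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // used <= committed <= max; the OS sees roughly the committed size
                // (plus metaspace, thread stacks and other non-heap areas).
                System.out.printf("used=%d MiB committed=%d MiB max=%d MiB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(5_000);
            }
        }
    }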
This is also unrelated to how many objects are actually alive: by default, MAT will remove unreachable objects when you load the heap dump. You can enable the option to keep them under Preferences -> Memory Analyzer -> Keep Unreachable Objects (see the MAT documentation). So if you have a lot of short-lived objects, the difference can be quite massive.
I see that it also reports a max heap of about 9 GiB. That means you have set the -Xmx parameter to a large value.
HotSpot GCs are not very good at giving unused memory back. They tend to use all the space available to them (the max heap size, set by -Xmx) and then never decommit the heap, effectively keeping it reserved for the Java process instead of releasing it to the OS.
If you want to minimize the memory footprint of your process from the OS perspective, I recommend that you set a lower -Xmx, maybe -Xmx1g, so as not to allow Java to grow too much (of course, -Xmx also needs to be high enough to accommodate your application's workload!).
If you really want an adaptive heap, you can also switch to G1 (-XX:+UseG1GC) and a more recent Java version, as the HotSpot team has delivered some improvements in this area recently.
Dave
OS monitoring tools will show you the amount of memory allocated by a process. So this:
means that your Java process has 2.664 GB of memory allocated (Java heap + metaspace).
JConsole shows you the memory that your code is "consuming" (ignoring the metaspace).
I see 2 possible explanations:
You have set -Xms with a huge value
You have a lot of static code (or other content) loaded in your metaspace (see the sketch below for a quick way to check metaspace usage).
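To check the second point, you can read the metaspace pool directly from the running JVM. A minimal sketch using the standard MemoryPoolMXBean API (the exact pool names can differ slightly between JVM versions):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class MetaspaceUsage {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Metaspace")) {
                    System.out.printf("%s: used=%d KiB, committed=%d KiB%n",
                            pool.getName(),
                            pool.getUsage().getUsed() >> 10,
                            pool.getUsage().getCommitted() >> 10);
                }
            }
        }
    }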

Confusing Zookeeper Memory usage

I have an instance of zookeeper that has been running for some time... (Java 1.7.0_131, ZK 3.5.1-1), with -Xmx10G -XX:+UseParallelGC.
Recently there was a leadership change, and the memory usage on most instances in the quorum went from ~200 MB to 2 GB+. I took a jmap dump, and what I found interesting was that there was a lot of byte[] serialization data (>1 GB) that had no GC root but hadn't been collected.
(This is in ByteArrayOutputStream, DataOutputStream, org.apache.jute.BinaryOutputArchive, and HeapByteBuffer objects.)
Looking at the GC log, shortly before the leadership change the full GC was running every 4-5 minutes. After the election, the tenuring threshold increased from 1 to 15 (the max) and the full GC ran less and less often; eventually it didn't even run on some days.
After several days, suddenly and mysteriously to me, something changes, and the memory plummets back to ~200 MB with the full GC again running every 4-5 minutes.
What I'm confused about here is how so much memory can have no GC root and not get collected by a full GC. I even tried triggering GC.run from jcmd a few times.
I wondered if something in ZK native land was holding onto this memory, or leaking this memory... which could explain it.
I'm looking for any debugging suggestions; I'm planning on upgrading to Java 1.8, and maybe ZK 3.5.4, but would really like to root-cause this before moving on.
So far I've used visualvm, GCviewer and Eclipse MAT.
(Solid vertical black lines are full GC. Yellow is young generation).
I am not an expert on ZK. However, I have been tuning JVMs on WebLogic for a while, and based on this information I feel that your configuration is causing the heap to expand and shrink (-Xmx10G -XX:+UseParallelGC). So perhaps you should try using -Xms10G and -Xmx10G to avoid this resizing. Importantly, each time the heap is resized a full GC is executed, so avoiding this is a good way to minimize the number of full garbage collections.
Please read this
"When a Hotspot JVM starts, the heap, the young generation and the perm generation space are
allocated to their initial sizes determined by the -Xms, -XX:NewSize, and -XX:PermSize parameters
respectively, and increment as-needed to the maximum reserved size, which are -Xmx, -
XX:MaxNewSize, and -XX:MaxPermSize. The JVM may also shrink the real size at runtime if the
memory is not needed as much as originally specified. However, each resizing activity triggers a
Full Garbage Collection (GC), and therefore impacts performance. As a best practice, we
recommend that you make the initial and maximum sizes identical"
Source: http://www.oracle.com/us/products/applications/aia-11g-performance-tuning-1915233.pdf
If you could provide your gc.log, it would be useful to analyse this case thoroughly.
Best regards,
RCC

JVM Garbage Collector suddenly consumes 100% CPU after running for several hours

I've got a strange problem in my Clojure app.
I'm using http-kit to write a websocket based chat application.
Clients are rendered using React as a single-page app; the first thing they do when they navigate to the home page (after signing in) is create a websocket to receive things like real-time updates and any chat messages. You can see the site here: www.csgoteamfinder.com
The problem I have is that after some indeterminate amount of time, which might be 30 minutes after a restart or even 48 hours, the JVM running the chat server suddenly starts consuming all the CPU. When I inspect it with NR (New Relic) I can see that all that time is being used by the garbage collector; at this stage I have no idea what it's doing.
I've taken a number of screenshots where you can see the effect.
You can see a number of spikes; those spikes correspond to large increases in CPU usage because of the garbage collector. To free up the CPU I usually have to restart the JVM. I have been relying on receiving a CPU alert from NR in my Slack account to make sure I jump on these quickly... but I really need to get to the root of the problem.
My initial thought was that I was possibly holding onto the socket reference when the client closed it at their end, but this is not the case. I've been looking at socket count periodically and it is fairly stable.
Any ideas of where to start?
Kind regards, Jason.
It's hard to imagine what could have caused such an issue, but the first thing I would do is take a heap dump at the time of the crash. This can be enabled with the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path_to_your_heap_dump> JVM args. As a general practice, don't increase the heap size beyond the physical memory available on your server machine. In some rare cases the JVM is unable to dump the heap because the process is doomed; in such cases you can use gcore (if you're on Linux; I'm not sure about Windows).
Once you grab the heap dump, analyse it with MAT. I have debugged such applications, and this worked perfectly to pin down memory-related issues. MAT allows you to dissect the heap dump in depth, so you're sure to find the cause of your memory issue unless you have simply allocated too small a heap.
If your program is spending a lot of CPU time in garbage collection, that means that your heap is getting full. Usually this means one of two things:
You need to allocate more heap to your program (via -Xmx).
Your program is leaking memory.
Try the former first. Allocate an insane amount of memory to your program (16GB or more, in your case, based on the graphs I'm looking at). See if you still have the same symptoms.
If the symptoms go away, then your program just needed more memory. Otherwise, you have a memory leak. In this case, you need to do some memory profiling. In the JVM, the way this is usually done is to use jmap to generate a heap dump, then use a heap dump analyser (such as jhat or VisualVM) to look at it.
(Fair disclosure: I'm the creator of a jhat fork called fasthat.)
Most likely your tenured space is filling up, triggering a full collection. At that point the GC uses all the CPUs, sometimes for seconds at a time.
To diagnose why this is happening, you need to look at your rate of promotion (how much data is moving from the young generation to the tenured space).
I would look at increasing the young generation size to decrease the rate of promotion. You could also look at using CMS, as it has shorter pause times (though it uses more CPU).
Things to try in order:
Reduce the heap size
Count the number of objects of each class, and see if the numbers makes sense
Do you have big byte[] that lives past generation 1?
Change or tune GC algorithm
Use high-availability, i.e. more than one JVM
Switch to Erlang
You have triggered a global GC. GC time grows faster than linearly with the amount of memory, so reducing the heap space will actually trigger the global GC more often but make each run faster.
You can also experiment with changing the GC algorithm. We had a system where the global GC went down from 200 s (happening 1-2 times per 24 hours) to 12 s. Yes, the system was at a complete standstill for 3 minutes, and no, the users were not happy :-) You could try -XX:+UseConcMarkSweepGC.
http://www.fasterj.com/articles/oraclecollectors1.shtml
You will always have stops like this with the JVM and similar runtimes; it is more about how often you get them and how fast the global GC is. You should take a heap dump and get the count of objects of each class. Most likely, you will see that you have millions of instances of one of them: somehow, you are keeping pointers to them unnecessarily in an ever-growing cache, in sessions, or similar.
http://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks001.html#CIHCAEIH
You can also start using a high-availability solution with at least 2 nodes, so that when one node is busy doing GC, the other node will have to handle the total load for a time. Hopefully, you will not get the global GC on both systems at the same time.
Big binary objects like byte[] and similar are a real problem. Do you have those?
At some point these need to be compacted by the global GC, and this is a slow operation. Many JVM-based data-processing solutions actually avoid storing all data as plain POJOs on the heap and manage that memory themselves in order to overcome this problem.
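For illustration only: the usual building block for keeping large binary data out of the collector's reach is a direct ByteBuffer, which lives outside the Java heap. A minimal sketch (the 64 MB size is arbitrary):

    import java.nio.ByteBuffer;

    public class OffHeapBuffer {
        public static void main(String[] args) {
            // Allocated in native memory, so the global GC never has to copy or compact its contents.
            ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            buffer.putLong(0, 42L);                  // write at an absolute offset
            System.out.println(buffer.getLong(0));   // read it back
        }
    }

The trade-off is that you then have to manage the layout and lifetime of that memory yourself, which is exactly what the data-processing frameworks mentioned above do.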
Another solution is to switch from the JVM to Erlang. Erlang is near real-time, and it gets by without the concept of a global GC of the whole heap; Erlang has many small heaps instead. You can read a little about it at
https://hamidreza-s.github.io/erlang%20garbage%20collection%20memory%20layout%20soft%20realtime/2015/08/24/erlang-garbage-collection-details-and-why-it-matters.html
Erlang is slower than the JVM, since it copies data, but its performance is much more predictable. It is difficult to have both. I have a websocket Erlang-based solution, and it really works well.
So you have run into a problem that is expected and normal for the JVM, the Microsoft CLR, and similar runtimes. It will get worse and more common over the next couple of years as heap sizes grow.

Analysing Memory Issues

We have an issue where memory usage on the JVMs increases gradually and then impacts CPU performance: CPU time increases.
We are trying to take a heap dump to analyse the issue, but wanted to understand the typical procedure: does looking at the GC log and the heap dump provide the required information?
What are the other things which one needs to watch out for?
Please look at the heap dump and the GC log and try to analyse whether any major task is going on at the time GC activity reaches its peak. For example, cron jobs scheduled at that time on your server, etc.
You will get an idea as to what exactly is affecting your application.
The approach I take is to use a memory profiler to reduce both the amount of memory retained after a GC and the amount of garbage discarded. Reducing the number of objects discarded can improve performance and reduce noise in the profiling results.
Here is how we analyzed the problem:
We looked at the recent code changes to identify the areas changed
Took a heap dump.
Looked at the thread dump (not very useful for our current analysis). The heap dump told us which kinds of objects were taking up the most memory; these were cache objects.
But the recent code changes helped explain our current situation. The interesting thing was that we had increased the timeouts for some of the threads we were forking. This caused a lot of concurrency internally, and the result was increased memory usage, since some objects could now be held longer before becoming eligible for GC. Our use cases made such contention plausible.
Overall, what I learned was that the heap dumps, thread dumps and recent changes, all looked at together, helped us understand the issue. No single one of them made sense on its own.
