Used heap grows from 150 MB to 2 GB in half a second - Java

Hi guys,
as you can see, I have a strange spike of used heap that never returns to the normal level. In the image, in 0.27 seconds the used heap grows from 100 MB to 706 MB. We tried to give more memory to Tomcat, but the problem remains the same, except that now the used heap grows from 150 MB to 1.7 GB.
We are monitoring the situation in every way we know, but neither the memory monitor nor the various logs gives us a solution.
Do you have a hint?
Thank you,
Marco

Are you getting out-of-memory errors? If not, this is a non-issue. Many things use caching and will take up as much memory as you give them in order to improve performance. So unless you are getting OutOfMemoryError or GC thrashing, this is not something to be concerned about.
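If you want to be sure, one way to check (a minimal sketch using the standard management beans, nothing specific to your setup; the class name is made up) is to poll the JVM's own GC counters and watch whether collection counts and times climb steadily:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcWatch {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Print cumulative collection counts and times for each collector.
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: count=%d, time=%d ms%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(5000);
            }
        }
    }

If the counts and times barely move while the used heap jumps around, the spike is just normal allocation, not GC trouble.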

Related

JVM memory usage pattern issue

I recently configured a JBoss application with an application monitoring tool (StatsD) that captures the JVM utilization of the application. Even with no users on the application, the memory usage sits at around 90-95% (850-970 MB) of the allocated heap (1024 MB).
A minor GC runs every time the memory reaches 90-95%. Please see the screenshot below.
I would like to understand what could be the reason(s) for such a memory pattern.
*No batch jobs or background processes are running.
This just looks like normal behavior to me. The heap space used rises gradually to a point where a GC runs. Then the GC reclaims a lot of free space and the heap space used drops steeply. Then repeat.
It looks like you have stats from two separate JVMs in the same graph, but I guess you knew that. (You have obscured the labels on the graph that could explain that.)
The only other thing I can glean from this is that the memory allocation rate is on the high side to be causing the GC to run that frequently. It may be advisable to do some GC tuning. But I would only advise that if application-level performance was suffering. (And it may well be that the real problem was application efficiency rather than GC performance.)
The asker then followed up:
But I have an issue here too: the heap dump file is about 1 GB, but when I load it into Eclipse MAT it only shows 11 MB. Most of the heavy objects appear under the "unreachable objects" section of MAT. Please let me know why a 1 GB dump shows only 11 MB in MAT, if you have any idea or have used MAT for analysis.
That is also easy to explain. The "unreachable objects" are garbage. You must have run the heap dump tool at an instant when the heap usage was close to one of the peaks.
Stepping back, it is not clear to me what you are actually looking for here:
If you are just curious to understand what the monitoring looks like, this is what a JVM normally looks like.
If you are trying to investigate a performance problem (GC pauses, etc) you need to look at the other evidence.
If you are looking for evidence of a memory leak, you are looking in the wrong place. These graphs won't help with that. You need to look at the JVM's behavior over the long term. Look for things like long term trends in the "saw tooth" such as the level of the bottom of troughs trending upwards. And to investigate a suspected memory leak you need to compare MAT analyses for dumps taken over time.
But bear in mind, increasing memory usage over time is not necessarily a memory leak. It could be the application or a library caching things. A properly implemented cache will release objects if the JVM starts running out of memory.
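To illustrate that last point, here is a minimal sketch (not any particular library) of a cache built on soft references; the collector is allowed to drop the values when the heap gets tight, so rising usage from such a cache is not a leak:

    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Values are held through SoftReferences, so the GC may clear them
    // under memory pressure instead of the JVM throwing OutOfMemoryError.
    class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

        void put(K key, V value) {
            map.put(key, new SoftReference<>(value));
        }

        V get(K key) {
            SoftReference<V> ref = map.get(key);
            return (ref == null) ? null : ref.get(); // null if never cached or already collected
        }
    }

A real cache would also remove the stale map entries, but the point is the same: memory that looks "used" on a graph may be perfectly reclaimable.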

What's the appropriate strategy for handling maximum memory in Java in a desktop application?

I'm getting a few errors about running out of memory from my desktop application, which is wrapped using launch4j. Specifically:
OutOfMemoryError: Java heap space
Since I don't know how much RAM those computers have, what's the appropriate strategy to minimize this sort of error?
Are there any dangers in passing a humongous -Xmx, such as -Xmx64g? I understand my application might run out of actual physical RAM, but that's a problem the user can improve by adding more RAM, whereas having a limited maximum heap is not something they can do anything about.
But that makes me wonder: why isn't -Xmx essentially infinite by default, leaving it up to the OS and the user to kill the application if it tries to use more RAM than is available?
-Xmx is an important memory tuning parameter. Generally, more heap space is better, but it's a very situational setting, so it's up to the user to decide how much is appropriate. Obviously there are problems with trying to use a larger heap than the system has memory for, as you will run into swapping. If unspecified, the JVM will use up to 1/4 of the system RAM by default.
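If you are unsure what limit a given machine actually ends up with, a trivial check (a sketch, nothing launch4j-specific; the class name is made up) is to print the ceiling the JVM picked at startup:

    public class HeapLimit {
        public static void main(String[] args) {
            // Roughly the -Xmx value; without -Xmx it reflects the JVM's default,
            // typically about 1/4 of physical RAM on recent HotSpot releases.
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
        }
    }

Logging this at startup from your application is a cheap way to see what your users' machines are really giving you.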
Java will keep claiming memory up to the maximum, so you need to tell it where to stop. If there were no upper limit, the heap would just keep getting bigger and bigger. The JVM doesn't clear unneeded objects from memory until the heap gets full, so an "unlimited" heap would mean the heap never gets full: it would just keep growing forever and unneeded memory would never be released.
While bigger is typically better for heap, this isn't a hard rule, and it will take testing and tuning to find the best amount. A bigger heap helps throughput, but it can hurt latency, since the bigger the heap, the longer GC pauses can be, as there is more memory to process.
Another factor is that once you go past 32 GB of heap, you should give at least around 40-42 GB. Something in between, like 36 GB, will actually hurt performance and give you less usable memory. This is because for heaps under roughly 32 GB the JVM can use compressed object pointers (compressed oops), but it can't do that for larger heaps, so every reference doubles in size.
Note that just adding more heap isn't necessarily the solution to an out-of-memory error. It is just as likely that the program can be improved to use less memory, and if so, that is typically the preferred solution. Especially if your program is leaking memory somehow, more heap will just make it take longer before you run out.

JVM Garbage Collector suddenly consumes 100% CPU after running for several hours

I've got a strange problem in my Clojure app.
I'm using http-kit to write a websocket based chat application.
Clients are rendered using React as a single-page app; the first thing they do when they navigate to the home page (after signing in) is create a websocket to receive things like real-time updates and any chat messages. You can see the site here: www.csgoteamfinder.com
The problem I have is after some indeterminate amount of time, it might be 30 minutes after a restart or even 48 hours, the JVM running the chat server suddenly starts consuming all the CPU. When I inspect it with NR (New Relic) I can see that all that time is being used by the garbage collector -- at this stage I have no idea what it's doing.
I've taken a number of screenshots where you can see the effect.
You can see a number of spikes; those spikes correspond to large increases in CPU usage because of the garbage collector. To free up CPU I usually have to restart the JVM. I have been relying on receiving a CPU alert from NR in my Slack account to make sure I jump on these quickly... but I really need to get to the root of the problem.
My initial thought was that I was possibly holding onto the socket reference when the client closed it at their end, but this is not the case. I've been looking at socket count periodically and it is fairly stable.
Any ideas of where to start?
Kind regards, Jason.
It's hard to imagine what could have caused such an issue, but the first thing I would do is take a heap dump at the time of the problem. This can be enabled with the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path_to_your_heap_dump> JVM arguments. As a general practice, don't increase the heap size beyond the physical memory available on your server machine. In some rare cases the JVM is unable to dump the heap because the process is doomed; in such cases you can use gcore (if you're on Linux; not sure about Windows).
Once you grab the heap dump, analyse it with MAT (Eclipse Memory Analyzer). I have debugged such applications, and this worked perfectly to pin down memory-related issues. MAT lets you dissect the heap dump in depth, so you're sure to find the cause of your memory issue, unless the real problem is simply that you have allocated too small a heap.
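If it helps, on HotSpot JVMs you can also trigger a dump from inside the application rather than waiting for an OutOfMemoryError (a sketch assuming the com.sun.management API is available, which it is on standard OpenJDK/Oracle builds but not necessarily on every JVM; the file name is made up):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class DumpHeap {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live=true dumps only reachable objects, which keeps the .hprof file smaller.
            bean.dumpHeap("chat-server.hprof", true);
        }
    }

Taking a dump like this while the CPU is pegged, and another during normal operation, gives MAT two snapshots to compare.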
If your program is spending a lot of CPU time in garbage collection, that means that your heap is getting full. Usually this means one of two things:
You need to allocate more heap to your program (via -Xmx).
Your program is leaking memory.
Try the former first. Allocate an insane amount of memory to your program (16GB or more, in your case, based on the graphs I'm looking at). See if you still have the same symptoms.
If the symptoms go away, then your program just needed more memory. Otherwise, you have a memory leak. In this case, you need to do some memory profiling. In the JVM, the way this is usually done is to use jmap to generate a heap dump, then use a heap dump analyser (such as jhat or VisualVM) to look at it.
(Fair disclosure: I'm the creator of a jhat fork called fasthat.)
Most likely your tenured space is filling up, triggering a full collection. During a full collection the GC can use all the CPUs for several seconds at a time.
To diagnose why this is happening you need to look at your rate of promotion (how much data is moving from the young generation to the tenured space).
I would look at increasing the young generation size to decrease the rate of promotion. You could also look at using CMS, as it has shorter pause times (though it uses more CPU).
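One rough way to watch the old generation from inside the JVM (a sketch; pool names differ between collectors, e.g. "PS Old Gen" for the parallel collector, "CMS Old Gen" for CMS, "G1 Old Gen" for G1) is to sample its usage and see how quickly it climbs between full collections:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class PromotionWatch {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    // Match the tenured/old generation pool, whatever the collector calls it.
                    if (pool.getName().contains("Old Gen") || pool.getName().contains("Tenured")) {
                        long usedMb = pool.getUsage().getUsed() / (1024 * 1024);
                        System.out.printf("%s used: %d MB%n", pool.getName(), usedMb);
                    }
                }
                Thread.sleep(10_000);
            }
        }
    }

A steep, steady climb between full GCs means a high promotion rate; a bigger young generation (or less garbage produced per request) should flatten it.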
Things to try in order:
Reduce the heap size
Count the number of objects of each class, and see if the numbers make sense
Do you have big byte[] arrays that live past the young generation?
Change or tune GC algorithm
Use high-availability, i.e. more than one JVM
Switch to Erlang
You have triggered a global (full) GC. GC time grows faster than linearly with the amount of memory, so reducing the heap space will actually trigger the global GC more often and make each one faster.
You can also experiment with changing the GC algorithm. We had a system where the global GC went down from 200 s (it happened 1-2 times per 24 hours) to 12 s. Yes, the system was at a complete standstill for 3 minutes, and no, the users were not happy :-) You could try -XX:+UseConcMarkSweepGC
http://www.fasterj.com/articles/oraclecollectors1.shtml
You will always have pauses like this with the JVM and similar runtimes; it is more a question of how often you get them and how fast the global GC is. You should take a heap dump and get the count of instances of each class. Most likely you will see that you have millions of instances of one class: somewhere you are keeping references to them unnecessarily, in an ever-growing cache, in sessions, or similar.
http://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks001.html#CIHCAEIH
You can also start using a high-availability solution with at least 2 nodes, so that when one node is busy doing GC, the other node will have to handle the total load for a time. Hopefully, you will not get the global GC on both systems at the same time.
Big binary objects like byte[] and similar are a real problem. Do you have those?
At some point these need to be compacted by the global GC, and that is a slow operation. Many data-processing JVM-based solutions actually avoid storing all data as plain POJOs on the heap and instead manage their own storage in order to overcome this problem.
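As a concrete illustration of keeping big binary blobs off the regular heap (a sketch, not what any particular product does), large buffers can be allocated as direct buffers so the compacting collector never has to move them:

    import java.nio.ByteBuffer;

    public class OffHeapBuffer {
        public static void main(String[] args) {
            // Direct buffers live in native memory outside the Java heap, so a
            // 256 MB blob does not become a huge byte[] the full GC must compact.
            ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 1024 * 1024);
            buffer.putLong(0, 42L);
            System.out.println("Capacity: " + buffer.capacity());
        }
    }

The trade-off is that you manage lifetimes and serialization yourself, which is exactly the extra work those data-processing frameworks take on.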
Another solution is to switch from the JVM to Erlang. Erlang is near real-time, and it gets there by not having the concept of a global GC of the whole heap; Erlang has many small per-process heaps. You can read a little about it at
https://hamidreza-s.github.io/erlang%20garbage%20collection%20memory%20layout%20soft%20realtime/2015/08/24/erlang-garbage-collection-details-and-why-it-matters.html
Erlang is slower than the JVM, since it copies data, but the performance is much more predictable. It is difficult to have both. I have an Erlang-based websocket solution, and it really works well.
So you have run into a problem that is expected and normal for the JVM, the Microsoft CLR, and similar runtimes. It will get worse and more common over the next couple of years as heap sizes grow.

Does the JVM store memory in the system? If so, how do I clear it?

I am running an application using NetBeans, and in the project properties I have set the maximum JVM heap space to 1 GiB.
But still, the application crashes with an out-of-memory error.
Does the JVM keep memory stored in the system? If so, how do I clear that memory?
You'll want to analyse your code with a profiler - NetBeans has a good one. This will show you where the memory is tied up in your application, and should give you an idea as to where the problem lies.
The JVM will garbage collect objects as much as it can before it runs out of memory, so chances are you're holding onto references long after you no longer need them. Either that, or your application is genuinely one that requires a lot of memory - but I'd say it's far more likely to be a bug in your code, especially if the issue only crops up after running the application for a long period of time.
I do not fully understand all details of your question, but I guess the important part is understandable.
The OutOfMemoryError (not an exception) is thrown if the memory allocated to your JVM does not suffice for the objects created in your program. In your case it might help to increase the available heap space to more than 1 GiB. If you think 1 GiB should be enough, you may have a memory leak (which, in Java, most likely means that you are holding references to objects that you no longer need - maybe in some sort of cache?).
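A typical example of that kind of accidental cache (a deliberately contrived sketch, not taken from your code): entries go into a static map and are never removed, so every value ever cached stays strongly reachable and the heap can only grow.

    import java.util.HashMap;
    import java.util.Map;

    class ReportCache {
        // Nothing is ever evicted, so this map keeps every value alive forever.
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        static byte[] render(String id) {
            return CACHE.computeIfAbsent(id, key -> new byte[1024 * 1024]);
        }
    }

Bounding the map, evicting old entries, or holding the values through soft references would let the collector reclaim that memory.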
Java reserves virtual memory for its maximum heap size on startup. As the program uses this memory, more main memory is allocated to it by the OS. Under UNIX this appears as resident memory. While Java programs can be swapped to disk, garbage collection performs extremely badly if part of the heap is swapped out, and it can result in the whole machine locking up or having to be rebooted. If your program is not doing this, you can be sure it is entirely in main memory.
Depending on what your application does it might need 1 GB, 10 GB or 100 GB or more. If you cannot increase the maximum memory size, you can use a memory profiler to help you find ways to reduce consumption. Start with VisualVM as it is built in and free and does a decent job. If this is not enough, try a commercial profiler such as YourKit for which you can get a free evaluation license (usually works long enough to fix your problem ;)
The garbage collector automatically cleans out memory as required, and it may be doing this every few seconds, or even more than once per second. If it is, this could be slowing down your application, so you should consider increasing the maximum heap size or reducing memory consumption.
As mentioned by @camobap, the reason for the OutOfMemoryError was that the PermGen size was set very low. The issue is now resolved.
Thank you all for the answers and comments.
The JVM doesn't allocate the full 1 GiB up front in the way I think you are imagining. Java allocates memory dynamically and garbage collects it too; every time it allocates memory it checks whether there is enough heap space to do so, and if there isn't (even after a garbage collection) it throws an OutOfMemoryError. I am guessing that somewhere in your code - because it would be nearly impossible to write code that declares that many individual variables - you have an array or ArrayList that takes up all the memory. In the case of an array, you probably have a variable controlling its size and some calculation made it far too large. In the case of an ArrayList, you might have a loop that runs too many iterations adding elements to it.
Check your code for the above errors and you should be good.
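For example, a loop like the following (a deliberately broken sketch) will exhaust a 1 GiB heap no matter what the garbage collector does, because every element stays reachable from the list:

    import java.util.ArrayList;
    import java.util.List;

    public class EndlessList {
        public static void main(String[] args) {
            List<int[]> chunks = new ArrayList<>();
            // The bound is effectively unbounded, so the list keeps growing
            // until the JVM throws OutOfMemoryError: Java heap space.
            for (long i = 0; i < Long.MAX_VALUE; i++) {
                chunks.add(new int[10_000]);
            }
        }
    }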

WeakReference and memory leaks

I'm profiling my application using VisualVM and I see that the heap size increased by about 7 MB in about 3 days. When I use the memory sampler, I also see that java.lang.ref.WeakReference is in the top five by instance count. The number of WeakReferences keeps increasing and GC has almost no effect on it.
Any idea?
You do not have a memory leak.
Java's GC only runs when the heap is full (actually it is a bit more complicated, since the heap itself is divided into generations, but anyway), so unless you are filling the heap (which is very unlikely, since 7 MB is far too little for any heap) you can't tell whether you have a leak or not.
WeakReferences are small wrappers that actually help prevent memory leaks by marking the object they reference as eligible for GC. My guess is that you're including some kind of cache library that creates a bunch of these, and since the heap still has plenty of room there's no need to garbage collect them.
Again, unless you see that the GC runs often and your heap size still increases, I wouldn't worry about memory issues.
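If you want to convince yourself that the references really are collectable, a small experiment (a sketch; System.gc() is only a hint, so the outcome is not guaranteed on every JVM) looks like this:

    import java.lang.ref.WeakReference;

    public class WeakRefDemo {
        public static void main(String[] args) {
            Object payload = new Object();
            WeakReference<Object> ref = new WeakReference<>(payload);

            payload = null;   // drop the only strong reference
            System.gc();      // request a collection (a hint, not a guarantee)

            // After a collection, the weak reference is usually cleared.
            System.out.println("Referent after GC: " + ref.get());
        }
    }

In your application the references only pile up because the collector has had no reason to run; once it does, they will be cleared.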
Here's a great article on this matter
WeakReferences are among the first to get collected when the JVM runs a full GC; however, the referent must not be strongly or softly reachable (no strong or soft reference may still be holding it). I am usually least worried about WeakReferences; they do get GC-ed eventually. You should check your GC cycles (jstat) and see whether even a GC is failing to reclaim these references. Also, please do not extrapolate the leak; your application may not necessarily grow its memory consumption over the next few days. I would suggest running a long (48 hours?) performance test with a significant load in a non-production environment and seeing whether you run into memory issues.
VisualVM itself uses resources in the system it monitors. This is one of its weaknesses compared with commercial profilers, so small differences cannot easily be seen with VisualVM because it creates its own noise.
Let's say you have a leak of 7 MB over 3 days (which I doubt). How much time is it worth spending to fix it? 16 GB of memory costs about $100, so 7 MB is worth about 5 cents, or about 3 seconds of your time. I would worry about it more if it were larger, much larger.
