We have really strange behaviour with one of our apps running on Tomcat 7 (with Java 1.6).
The app runs really well for some days, then we see a peak in garbage collector time, the CPU usage is more than 10 times the normal load, and the memory isn't freed anymore:
The last drop was a restart of the app, after which performance improved again. As you can see in the graph, the amount of space freed by the GC gets lower at each run, and at the end it fails to free any memory at all, so the app's performance drops badly.
How can this behaviour be improved?
This looks like a memory leak: if the GC can't free the memory any more, it is most probably because some code is retaining references to objects it no longer uses. You should try to track the objects remaining in memory (your monitoring tool should have some way to peek into the heap regions and report on the objects created) and make sure you clear every reference to unused objects so the GC can free them.
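As a minimal, hypothetical sketch of the kind of retention bug to look for (the CACHE field and handleRequest method are invented for illustration, not taken from the app in question):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // A static collection like this is a classic leak source: everything
    // added here stays strongly reachable forever, so the GC can never
    // reclaim it, even after the rest of the app is done with the data.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB retained per call, never released
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) handleRequest();
        System.out.println("Retained entries: " + CACHE.size());

        // The fix: drop the references once the data is no longer needed,
        // e.g. by clearing the collection or using a bounded/weak cache.
        CACHE.clear();
        System.out.println("After clear: " + CACHE.size());
    }
}
```

Heap-dump tools will show the retaining path (here, `LeakSketch.CACHE`) for exactly this kind of pattern.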
Related
We are having frequent outages in our app: the heap grows over time to the point where the GC takes a lot of CPU time and executes for several minutes, degrading the app's performance drastically. The app uses JSF on a Tomcat server.
In the meantime, we:
Increased the heap size from 15 GB to 26 GB (-Xms27917287424 -Xmx27917287424)
Took several heap dumps (we are trying to determine the problem from these)
Activated GC logs
With the larger heap the GC does not run for as long, but it still takes a lot of CPU and freezes the app.
So the question is:
Is this normal? When the GC executes it does free memory, so I think this probably isn't a memory leak (am I right?)
Is there a way to optimize the GC, or is this behaviour just a symptom of something wrong in the app itself?
How can I monitor and analyze this without taking a heap dump?
UPDATE:
I upgraded JSF from 2.2 to 2.3 because some heap dumps indicated that JSF was using a lot of memory.
That didn't work out, and yesterday we had an outage again, but this time a little different (from my point of view). This time we also had to restart Tomcat because the app stopped working after a while.
In this case, the garbage collector runs even when the old-gen heap is not full, and the new-generation GC runs all the time.
What could be the cause of this?
As has been said in the comments, the behaviour of the application does not look unreasonable. Your code is continually allocating objects that leads to heap space filling up, causing the GC to run. There does not appear to be a memory leak since GC reclaims a lot of space and the overall used space is not continually increasing.
What does appear to be an issue is that a significant number of objects are being promoted to the old-gen before being collected. Major GC cycles are more expensive in terms of CPU due to the relocation and remapping of objects (assuming you're using a compacting algorithm).
To reduce this, you could try increasing the size of the young generation. It will have grown when you increased the overall heap size, but evidently not by enough. Ideally, you want the majority of objects to be collected during a minor GC cycle, since this is effectively free (the GC does nothing to the dead objects in Eden space as they are collected). You can do this with the -XX:NewRatio= or -XX:NewSize= flags. You could also try changing the survivor space sizes, again to increase the number of objects collected before tenuring (use the -XX:SurvivorRatio= flag for this).
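Those flags can be combined in a launch command like this (the sizes and ratios are illustrative only, and myapp.jar is a placeholder; tune against your own GC logs):

```shell
# Illustrative sizing for a 26 GB heap: -XX:NewRatio=2 means old:young = 2:1,
# i.e. the young generation gets one third of the heap; -XX:SurvivorRatio=6
# means Eden is 6x the size of each survivor space.
java -Xms26g -Xmx26g -XX:NewRatio=2 -XX:SurvivorRatio=6 -jar myapp.jar
```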
For monitoring, I find Flight Recorder and Mission Control very useful as you can drill down into details of how many objects of specific types are allocated. It's also easy to connect to a running JVM or take dumps for later analysis.
I'm monitoring a Java application running on JVM 6.
Here is a screenshot of the JVisualVM panel.
I notice that when the heap size is small (before 12:39 in the picture) the garbage collector runs frequently.
Then I ran a memory-expensive task a couple of times (from 12:39 to 12:41) and the heap space grew. Why does the garbage collector run less frequently from that point on?
After an hour or more, if I avoid executing the expensive tasks, the heap space slowly decreases.
Why does the used heap space take so long to decrease?
Is there something I can do to avoid this behaviour?
Does the new Java 8 VM behave differently?
Is there something I can do to avoid this behaviour?
Set -XX:MaxHeapFreeRatio=30 -XX:MinHeapFreeRatio=15; that will shrink the heap more aggressively. Note that not all GC implementations yield the memory they don't use back to the OS. At least G1 does, but that's not available on Java 6.
The behaviour looks normal.
Up until 12:39 on your attached profile snapshot there isn't a lot of GC going on.
Then you run your tasks, and as objects that are no longer reachable become eligible for GC, the sweep marks them and they get removed.
You do not necessarily need to worry about the size of the heap unless you are maxing out and crashing frequently due to some memory leak. The GC will take care of removing eligible objects from the heap and you are limited in terms of how you can impact GC (unless of course you switch GC implementation).
Each major release of the platform includes some JVM and GC changes and improvements, but the behaviour of the application will be very similar on HotSpot 7/8. Try it.
Modern JVMs have highly optimized garbage collectors and you shouldn't need to worry about how/when it reclaims memory, but more about making sure you release objects so that they become eligible for collection. How often after startup do you experience out of memory issues?
If you are getting crashes due to running out of memory, configure the JVM to take a heap dump when an OutOfMemoryError is thrown:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=date.hprof
I'm profiling my application with VisualVM and I see that the heap size increased by about 7 MB in about 3 days. When I use the memory sampler, I also see that java.lang.ref.WeakReference is in the top five by instance count. The number of WeakReferences keeps increasing, and GC has almost no effect on it.
Any idea?
You do not have a memory leak.
Java's GC only runs when the heap is full (actually it's a bit more complicated, since the heap itself is divided into generations, but anyway), so unless you are filling the heap (which is very unlikely, since 7 MB is far too little for any heap) you can't tell whether you have a leak or not.
WeakReferences are small wrappers that actually help prevent memory leaks: they do not stop the object they reference from becoming eligible for GC. My guess is that you're using some kind of cache library that creates a bunch of them, and since the heap still has plenty of room there's no need to garbage-collect them.
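A small self-contained demonstration of that behaviour. On HotSpot, an explicit System.gc() will normally clear a weak reference whose referent has no strong references left, though the language specification does not strictly guarantee when that happens:

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object referent = new Object();
        WeakReference<Object> weak = new WeakReference<>(referent);

        // While a strong reference exists, the weak reference still resolves.
        System.out.println(weak.get() == referent);   // true

        referent = null;  // drop the only strong reference
        System.gc();      // a hint; HotSpot typically clears the weak ref here
        System.out.println(weak.get());               // usually null by now
    }
}
```

So a large population of WeakReference instances is not itself a leak; what matters is whether their referents are still strongly reachable elsewhere.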
Again, unless you see that the GC runs often and your heap size still increases I wouldn't worry about memory issues.
Here's a great article on this matter
WeakReferences are among the first to be collected when the JVM runs a full GC; however, the referent must not be strongly or softly reachable (no strong or soft reference may still point to it). I am usually least worried about WeakReferences; they do get GC'd eventually. You should check your GC cycles (jstat) and see whether even GC is failing to reclaim these references. Also, please do not extrapolate the leak: your application may not necessarily grow its memory consumption over the next few days. I would suggest running a long (48-hour?) performance test with a significant load in a non-production environment and seeing whether you run into memory issues.
VisualVM itself uses resources in the system; this is one of its weaknesses compared with commercial profilers. Differences as small as this cannot easily be seen with VisualVM because it creates its own noise.
Let's say you have a leak of 7 MB in 3 days (which I doubt). How much time is it worth spending to fix it? 16 GB of memory costs about $100, so 7 MB is worth about 5 cents, or about 3 seconds of your time. I would worry about it more if it were larger, much larger.
I have a Java application that uses a lot of memory when in use, but when the program is idle, the memory usage doesn't go down.
Is there a way to force Java to release this memory? The memory is not needed at that time. I can understand reserving a small amount, but Java keeps all the memory it has ever used. It does reuse that memory later, but there must be a way to force Java to release it when it's not needed.
System.gc() is not working.
As pointed out in the comments, it is not certain that the garbage collector, even while disposing of objects, gives memory back to the operating system.
Perhaps Tuning Garbage Collection Outline provides the solution to your problem:
By default the JVM grows or shrinks the heap at each GC to keep the ratio of free space to live objects at each collection within a specified range.
-XX:MinHeapFreeRatio - when the percentage of free space in a generation falls below this value, the generation is expanded to meet this percentage. Default is 40.
-XX:MaxHeapFreeRatio - when the percentage of free space in a generation exceeds this value, the generation shrinks to meet it. Default is 70.
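Put together in a launch command, that looks like this (the percentages and myapp.jar are illustrative placeholders, not recommendations):

```shell
# Shrink the heap more eagerly: expand when free space drops below 20%,
# shrink when free space rises above 40%.
java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar myapp.jar
```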
Otherwise, if you suspect that you're leaking references, you can figure out how, what, and where objects are leaked by monitoring the heap in JVisualVM (a tool bundled with the standard JDK). Through this program you can perform a heap dump and get a histogram of object memory consumption:
What memory do you mean? If it is RAM (as opposed to the amount of used heap space in the Java VM itself), then this might be normal. Allocating memory from the OS is a relatively expensive operation, so once the JVM has obtained some it is quite reluctant to give it back, even if it is not needed at the time.
Have you considered using a memory profiler? If you don't have access to one, you can start by capturing a series of jmap -histo <pid> snapshots and writing a script to diff them.
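A rough sketch of that approach (file names are placeholders and <pid> stays whatever your JVM's process id is):

```shell
# Capture class histograms over time, then diff the snapshots to see
# which classes' instance counts keep growing.
jmap -histo <pid> > histo-1.txt
sleep 600
jmap -histo <pid> > histo-2.txt
diff histo-1.txt histo-2.txt | head -40
```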
System.gc() offers no guarantee that any memory will be freed when it runs. See Why is it bad practice to call System.gc()?
Try tweaking the -Xmx JVM arg down if it is set to a large value, and take a look in JConsole to see what's going on with memory usage and GC activity. Normally you'd see a sawtooth pattern.
You might also want to use a profiler to see where the memory is being used and to identify any leaks.
One of two things is happening:
1) Your application is leaking references. Are you sure you aren't holding on to objects after you no longer need them? If you are, Java must keep them in memory.
2) Java's working just fine. You get no benefit from memory that you aren't using.
I have a server application that, in rare occasions, can allocate large chunks of memory.
It's not a memory leak: these chunks can be reclaimed by the garbage collector by executing a full collection. A normal (minor) garbage collection frees amounts of memory that are too small; it is not adequate in this context.
The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx.
That would be OK, were it not for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation.
Anyway, I'd prefer not having to monitor my jvm's memory allocation myself (or insert memory management into my application's logic); it would be nice if there was a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release very early the memory I'm going to need.
Long story short: I need a way (a command-line option?) to configure the JVM so it releases a good amount of memory early (i.e. performs a full GC) when memory occupation reaches a certain threshold. I don't care if this slows my application down every once in a while.
All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly).
I'd appreciate your suggestions,
Silvio
P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs some stability.
UPDATE: analyzing the app with jvisualvm, I can see that the problem is in the old generation
From here (this is a 1.4.2 page, but the same option should exist in all Sun JVMs):
assuming you're using the CMS garbage collector (which I believe the server turns on by default), the option you want is
-XX:CMSInitiatingOccupancyFraction=<percent>
where <percent> is the percentage of memory in use that will trigger a full GC.
Insert standard disclaimers here that messing with GC parameters can give you severe performance problems, varies wildly by machine, etc.
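As a concrete launch line, under the assumption that CMS is in use (the 60% figure and myapp.jar are illustrative):

```shell
# Start a CMS old-gen collection once old-gen occupancy reaches 60%.
# UseCMSInitiatingOccupancyOnly makes the JVM honour this threshold on
# every cycle instead of only the first, overriding its own heuristics.
java -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=60 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar myapp.jar
```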
When you allocate large objects that do not fit into the young generation, they are allocated directly in the tenured generation. That space is only collected when a full GC runs, which is what you are trying to force.
However I am not sure this would solve your problem. You say "JVM is not able to perform a GC quickly enough". Even if your allocations come in bursts, each allocation will cause the VM to check if it has enough space available to do it. If not - and if the object is too large for the young generation - it will cause a full GC which should "stop the world", thereby preventing new allocations from taking place in the first place. Once the GC is complete, your new object will be allocated.
If shortly after that the second large allocation is requested in your burst, it will do the same thing again. Depending on whether the initial object is still needed, it will either be able to succeed in GC'ing it, thereby making room for the next allocation, or fail if the first instance is still referenced.
You say "I need a way [...] to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold". This by definition can only succeed, if that "good amount of memory" is not referenced by anything in your application anymore.
From what I understand here, you might have a race condition that you can sometimes avoid by interspersing manual GC requests. In general you should never have to worry about these things: in my experience an OutOfMemoryError only occurs if there really are too many live objects to fit into the heap at once. In all other situations the "only" problem should be performance degradation (which might become extreme, depending on the circumstances, but that is a different problem).
I suggest you do further analysis of the exact problem to rule this out. I recommend the VisualVM tool that comes with Java 6. Start it and install the VisualGC plugin. This will allow you to see the different memory generations and their sizes. Also there is a plethora of GC related logging options, depending on which VM you use. Some options have been mentioned in other answers.
The other options for choosing which GC to use and how to tweak thresholds should not matter in your case, because they all depend on enough memory being available to contain all the objects that your application needs at any given time. These options can be helpful if you have performance problems related to heavy GC activity, but I fear they will not lead to a solution in your particular case.
Once you are more confident in what is actually happening, finding a solution will become easier.
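If you do end up needing a threshold-triggered collection, one programmatic approximation (not mentioned in the answers above, so treat it as an alternative) is the java.lang.management usage-threshold notification. In this sketch, the 80% figure and the System.gc() reaction are assumptions for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class EarlyGcWatch {
    public static void main(String[] args) {
        // Find a heap pool that supports usage thresholds (usually the old gen).
        MemoryPoolMXBean oldGen = null;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                oldGen = pool;
            }
        }
        if (oldGen == null || oldGen.getUsage().getMax() <= 0) {
            System.out.println("No suitable pool found");
            return;
        }

        // Ask to be notified when the pool is 80% full (assumed threshold).
        oldGen.setUsageThreshold((long) (oldGen.getUsage().getMax() * 0.8));

        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            // Fires on MEMORY_THRESHOLD_EXCEEDED; request a full collection early.
            System.gc();
        }, null, null);

        System.out.println("Watching pool: " + oldGen.getName());
    }
}
```

This keeps the monitoring out of your application logic, at the cost of relying on explicit GC requests with all the caveats discussed in this thread.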
Do you know which of the garbage collection pools is growing too large, i.e. eden vs. survivor space? (Try the JVM option -Xloggc:<file>, which logs GC status to a file with timestamps.) Once you know this, you should be able to tweak the size of the affected pool with one of the options mentioned here: HotSpot options for Java 1.4.
I know that page is for the 1.4 JVM; I can't seem to find the same -X options in my current 1.6 install's help output, unless setting those individual pool sizes is a non-standard feature!
The JVM is only supposed to throw an OutOfMemoryError after it has attempted to release memory via garbage collection (according to both the API docs for OutOfMemoryError and the JVM specification). Therefore your attempts to force garbage collection shouldn't make any difference. So there might be something more significant going on here - either a problem with your program not properly clearing references or, less likely, a JVM bug.
There's a very detailed explanation of how GC works here and it lists parameters to control memory available to different memory pools/generations.
Try the -server option. It enables the parallel GC, and you should see some performance increase if you are on a multi-core processor.
Have you tried playing with the G1 GC? It should be available from 1.6.0u14 onwards.