In my application I have a large collection of long-lived objects. I am using a relatively large heap size of 100 GB (-Xmx and -Xms), and once my application hits about 30 GB of used heap the garbage collector starts a really long stop-the-world pause that can take up to 15 minutes. After a while the application terminates with a "GC overhead limit exceeded" error.
I want to keep all of the objects for the whole application's lifetime, so freeing any memory is not an option.
I know that one of the solutions would be to use off-heap storage, but I'd like to avoid that at the moment.
The other would be to tune the garbage collector's parameters. I tried different algorithms and young generation sizes, but it didn't help and I don't know where to go from there.
My problem lay in the wrong order of the JVM parameters. I wasn't aware that you have to specify all of them before the -jar switch.
I ended up setting the bigger heap size correctly and using G1GC with a relatively small young generation (ratio 20). Now the GC pauses are much more manageable.
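For reference, the fix boils down to putting every JVM option before the -jar switch; anything after the jar name is passed to the application instead of the JVM. A minimal sketch of such a command line (the jar name is made up, and -XX:NewRatio=20 is my reading of "ratio 20"; the exact flags may have differed):

java -Xms100g -Xmx100g -XX:+UseG1GC -XX:NewRatio=20 -jar myapp.jar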
Performance-wise rewriting the code for off-heap storage would probably be preferable in this use case though.
So, the gist of it is: a version of an application at my company has been having some memory issues lately, and I'm not fully sure of the best way to fix it that isn't just "allocate more memory", so I wanted to get some guidance.
For the application, it looks like the eden heap fills up pretty quickly when it has concurrent users, so objects that won't be alive very long end up in the old heap. After running for a while, the old heap simply gets full and never seems to be automatically cleaned up, but manually running garbage collection in VisualVM will clear it out (so I assume this means the old heap is full of dead objects).
Is there any suggested setting I could add so garbage collection gets run on the old heap once it reaches a certain threshold? And are there any pitfalls in changing the old/eden ratio from the stock 2:1 to 1:1? For the application, the majority of objects created are what I would consider short lived (from milliseconds to a few minutes).
It looks like the eden heap fills up pretty quickly when it has concurrent users, so objects that won't be alive very long end up in the old heap.
This is called "premature promotion"
After running for a while, the old heap simply gets full,
When it fills, the GC triggers a major or even a full collection.
never seems to automatically clean up
In that case, the memory is either still in use or the old gen is not completely full. It might appear to be almost full, but a GC will only be performed when it is actually full.
but manually running the garbage collection in VisualVM will clear it out
So the old gen was almost, but not actually, full.
Is there any setting I could add so garbage collection gets run on the old heap once it reaches a certain threshold?
You can run System.gc(), but this means more work for your application and slows it down. You don't want to be doing this.
If you use the CMS collector you can change the threshold at which it kicks in but unless you need low latency you might be better off leaving your settings as they are.
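For illustration, the CMS threshold is usually set with flags like these (the 70% value is only an example, -XX:+UseCMSInitiatingOccupancyOnly stops the JVM from overriding it with its own heuristic, and yourapp.jar is a placeholder):

java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -jar yourapp.jar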
And are there any pitfalls in changing the old/eden ratio from the stock 2:1 to 1:1?
If you reduce the old gen and enlarge the young gen, you may halve the number of GCs you perform and double the amount of time an object can live without ending up in the old gen.
I work in the low latency space and usually set the young space to 24 GB and the old gen to 2 GB. I also use a lot of off heap data so I don't need much old gen. This is not an average use case, but it can work depending on your requirements.
If you are using < 32 GB, just adding a few more GB may be the simplest answer. Also, you can use something like -Xmn4g -Xmx6g to set the young space and the maximum heap and not worry about ratios.
For the application, the majority of objects created are what I would consider short lived (From milliseconds to a few minutes)
In that case, ideally you want your eden space large enough so you have a minor collection every few minutes. This way most of your objects will die in the eden space, and not be copied around.
Note: in extreme cases it is possible to have an application produce less than one GB per hour of garbage and run all day with a 24 GB Eden space without even a minor collection.
I know some of the things one should look for in gc.log, such as:
you should check how frequently "Full GC" runs; if it runs frequently, that is a sign of a problem
you should also check how much memory a "Full GC" is able to reclaim when it finishes; if it is not much, that again is a sign of a problem, as it will force another "Full GC" to run soon
you should revisit the heap space allocated to your Java process if "Full GC" runs frequently
These are some of the points I am working on. I would be interested to know what else should be taken care of when I look at GC logs.
FYI, I have already gone through following threads....
What does "GC--" in gc.log mean?
What does "GC--" mean in a java garbage collection log?
How to analyse and monitor gc.log garbage collector log files from the JVM
Is gc.log writing asynchronous? safe to put gc.log on NFS mount?
First you need to know what harm GC can do to your program. Depending on the type of collectors you use for the young and tenured generations, the contents of the GC logs may vary. But all in all, the baseline inferences we need to derive from GC logs mostly concentrate on the following:
How long are the minor collections taking?
How long are the major collections taking?
What is the frequency of minor collections?
What is the frequency of major collections?
How much does a minor collection reclaim?
How much does a major collection reclaim?
Combinations of the above
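None of these questions can be answered unless the JVM is writing detailed GC logs in the first place. On pre-Java 9 HotSpot that typically means flags along these lines (the log path and jar name are placeholders):

java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log -jar yourapp.jar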
Most programs have very frequent minor collections that reclaim about 90-95% of the young generation and pass the rest to the survivor spaces. Subsequent collections clean up the survivors by about 80% again, so in essence just 2% to 4% of the data from your minor collections makes it to the old gen, and this cycle keeps going no matter which collector you use.
Now the pain areas are when you have hundreds of small minor collections per application request or thread, and added up they take a sizable amount of time, often double-digit seconds. Since in modern collectors the minor pass and sweep are not full stop-the-world cases, this is sometimes bearable. With the old gen, the problems come when collectors run but don't reclaim anything significant. For example, normally a collector runs when the old gen is about 80-85% full. This can be a stop-the-world episode, since new data cannot be stored on the heap unless the heap has more space, which is probably the case here. So application threads are paused to let the GC threads clean up the space first. But once the collector finishes, the heap fill ratio doesn't come down as much as it should. A good sizing should reduce your heap occupancy by more than 40% in a single go. If it doesn't, that means you need more heap to hold your long-lived objects.
So in essence, GC analysis is not a "do it based on a set of predefined steps" thing. It's more of a trial-and-error analysis: an experiment where you set the initial sizes and settings, then monitor the GC activity and record your findings. Then, after say 8-10 runs, you compare notes and see what works for your app and what doesn't. It's really interesting hard work.
I have a Java client which consumes a large amount of data from a server. If the client does not keep up with the data stream at a fast enough rate, the server disconnects the socket connection. My client gets disconnected a few times per day. I ran jconsole to see the memory usage, and the heap space graph looks like a fairly well defined sawtooth pattern, oscillating between about 0.5GB and 1.8GB (2GB of heap space is allocated). But every time I get disconnected is during a full GC (but not on every full GC). I see the full GC takes a bit over 1 second on average. Depending on the time of day, full GC happens as often as every 5 minutes when busy, or up to 30 minutes can go by in between full GCs during the slow periods.
I suspect if I can reduce the full GC time, the client will be able to better keep up with the incoming data, but I do not have much experience with GC tuning. Does anyone have some insight on if this might be a good idea, and how to do it? Or is there an alternative idea which may work as well?
** UPDATE **
I used -XX:+UseConcMarkSweepGC and it improved, but I still got disconnected during the very busy moments. So I increased the heap allocation to 3GB to help weather through the busy moments and it seems to be chugging along pretty well now, but it's only been 1 day without a disconnection. Maybe if I get some time I will go through and try to reduce the amount of garbage created which I'm confident will help as well. Thanks for all the suggestions.
A full GC can take a very long time to complete, and it is not that easy to tune.
One way to (easily) tune it is to increase the heap space - generally speaking, doubling the heap space can double the interval between two GCs, but will also double the time consumed by a GC. If the program you are running has very clear usage patterns, maybe you can consider increasing the heap space to make the interval so large that you can guarantee some idle time in which the system can perform a GC. On the other hand, following this logic, if the heap is small a full garbage collection will finish in an instant, but that seems like inviting more trouble than help.
Also, -XX:+UseConcMarkSweepGC might help, since it will try to perform the GC operations concurrently (not stopping your program; see here).
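For example, assuming the 2 GB heap mentioned in the question and a placeholder jar name, the flag just gets added to the startup command:

java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -jar client.jar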
Here's a very nice talk by Gil Tene (CTO of Azul Systems, makers of a high-performance JVM, who has published several GC algorithms) about GC in the JVM in general.
It is not easy to tune away the full GC. A much better approach is to produce less garbage. Producing less garbage reduces the pressure to promote objects into the tenured space, where they are more expensive to collect.
I suggest you use a memory profiler to
reduce the amount of garbage produced. In many applications this can be reduced by a factor of 2 - 10x relatively easily.
reduce the size of the objects you are creating, e.g. use primitives and smaller data types like double instead of BigDecimal.
recycle mutable objects instead of discarding them (see the sketch at the end of this answer).
retain less data on the client if you can.
By reducing the amount of garbage you create, objects are more likely to die in the eden or survivor spaces, meaning you have far fewer full collections, which can be shorter as well.
Don't take it for granted that you have to live with lots of collections; in extreme cases you can avoid them almost completely: http://vanillajava.blogspot.ro/2011/06/how-to-avoid-garbage-collection.html
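As a minimal sketch of the "recycle mutable objects" point (class and method names are made up, not taken from the question), a per-thread StringBuilder can be cleared and reused instead of allocating a fresh builder on every call:

public class MessageFormatter {
    // One builder per thread, created once and reused for every call.
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(() -> new StringBuilder(1024));

    public static String format(String symbol, double price) {
        StringBuilder sb = BUFFER.get();
        sb.setLength(0);                          // recycle: clear instead of discarding
        sb.append(symbol).append('=').append(price);
        return sb.toString();                     // only the final String escapes the method
    }
}

The result String is still allocated, but the intermediate builder churn is gone, so far fewer objects have to be collected.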
Take out calls to Runtime.getRuntime().gc() - When garbage collection is triggered manually it either does nothing or it does a full stop-the-world garbage collection. You want incremental GC to happen.
Have you tried using the server JVM from a JDK install? It changes a bunch of the default configuration settings (including garbage collection) and is easy to try - just add -server to your java command.
java -server
What is all the garbage that gets created? Can you generate less of it? Where possible, try to use the valueOf methods. By using less memory you'll save yourself time in GC AND in memory allocation.
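As a small illustration of the valueOf point (none of this is from the original code), boxing via valueOf can reuse cached instances, while a constructor call always allocates:

public class ValueOfDemo {
    public static void main(String[] args) {
        Integer a = Integer.valueOf(42);   // may come from the shared cache (-128..127)
        Integer b = Integer.valueOf(42);   // same cached instance as 'a'
        Integer c = new Integer(42);       // always a fresh allocation (deprecated since Java 9)
        System.out.println(a == b);        // true  - same cached object
        System.out.println(a == c);        // false - a new object every time
    }
}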
I am currently running an application which requires a maximum heap size of 16GB.
Currently I use the following flags to handle garbage collection.
-XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=50, -XX:+DisableExplicitGC, -XX:+PrintGCDateStamps, -XX:+PrintGCDetails, -Xloggc:/home/user/logs/gc.log
However, I have noticed that during some garbage collections, the application locks up for a few seconds and then carries on - This is completely unacceptable as it's a game server.
An excerpt from my garbage collection logs can be found here.
Any advice on what I should change in order to reduce these long pauses would be greatly appreciated.
Any advice on what I should change in order to reduce these long pauses would be greatly appreciated.
The chances are that the CMS GC cannot keep up with the amount of garbage your system is generating. But the work that the GC has to perform is actually more closely related to the amount of NON-garbage that your system is retaining.
So ...
Try to reduce the actual memory usage of your application; e.g. by not caching so much stuff, or reducing the size of your "world".
Try to reduce the rate at which your application generates garbage.
Upgrade to a machine with more cores so that there are more cores available to run the parallel GC threads when necessary.
To Mysticial:
Yes, in hindsight it might have been better to implement the server in C++. However, we don't know anything about "the game". If it involves a complicated world model with complicated heterogeneous data structures, then implementing it in C++ could mean that you replace the "GC pause" problem with the problem that the server crashes all the time due to problems with the way it manages its data structures.
Looking at your logs, I don't see any long pauses, but young GC is very frequent. The promotion rate is very low though (most garbage is cleared by young GC, as it should be). At the same time, your old space utilization is low.
BTW, are we talking about a Minecraft server?
To reduce the frequency of young GC, you should increase its size. I would suggest starting with -XX:NewSize=8G -XX:MaxNewSize=8G
For such a large young space, you should also reduce the survivor space size: -XX:SurvivorRatio=512
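Putting those numbers together with the flags already in use, a full command line could look roughly like this (the fixed -Xms16g and the jar name are assumptions on my part, not from the question):

java -Xms16g -Xmx16g \
     -XX:NewSize=8G -XX:MaxNewSize=8G -XX:SurvivorRatio=512 \
     -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=50 \
     -XX:+DisableExplicitGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails \
     -Xloggc:/home/user/logs/gc.log -jar gameserver.jar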
GC tuning is a path of trial and error, so you may need a few more iterations and some tweaking.
You can find a couple of useful articles on my blog:
HotSpot JVM GC options cheatsheet
Understanding young GC pauses in HotSpot JVM
I'm not an expert on Java garbage collection, but it looks like you're doing the right thing by using the concurrent collector (the UseConcMarkSweepGC flag), assuming the server has multiple processors. Follow the suggestions for troubleshooting at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#cms. If you already have, let us know what happened when you tried them.
Which version of Java are you using? http://docs.oracle.com/javase/7/docs/technotes/guides/vm/G1.html
For better performance, try to minimize the use of instance variables in a class. It is better to work with local variables than instance variables; it helps performance and keeps you safe from synchronization problems. At the end of an operation, before the program exits, always reset the instance variables you used and set them again when they are required. It helps in enhancing performance. Besides, newer versions of Java implement better garbage collection policies, so it would be better to move to a new version if that is feasible.
Also, you can monitor the garbage collector pause times via VisualVM, and you can get a better idea of when it is performing more garbage collection.
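A tiny sketch of what that advice means in code (the class is invented purely for illustration): a local variable becomes unreachable as soon as the method returns, while an instance field keeps its last value alive until it is reset or the owning object dies.

public class ReportBuilder {
    private java.util.List<String> cachedRows;   // instance field: lives as long as the object does

    public int countNonEmpty(java.util.List<String> rows) {
        int count = 0;                           // local variable: gone once the method returns
        for (String row : rows) {
            if (!row.isEmpty()) {
                count++;
            }
        }
        return count;
    }

    public void done() {
        cachedRows = null;                       // release the reference when it is no longer needed
    }
}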
When I run a Java program with a starting heap size of 3G (set by the -Xms3072m VM argument), the JVM doesn't start with that size. It starts with 400m or so and then keeps acquiring more memory as required.
This is a serious problem for me. I know the JVM is going to need the said amount after some time. And when the JVM increases its memory as per the need, it slows down. During the time when the JVM acquires more memory, a considerable amount of time is spent in garbage collection. And I suppose memory acquisition is an expensive task.
How do I ensure that JVM actually respects the start heap size parameter?
Update: This application creates lots of objects, most of which die quickly. Some resulting objects are required to stay in memory (and get transferred out of the young heap). During this operation, all these objects need to be in memory. After the operation, I can see that all the objects in the young heap are reclaimed successfully. So there are no memory leaks.
The same operation runs smoothly once the heap size reaches 3G. That clearly indicates the extra time is spent acquiring memory.
This is Sun JDK 5.
If I am not mistaken, Java tries to get a reservation for the memory from the OS. So if you ask for 3 GB as Xms, Java will ask the OS whether that much is available, but it will not start with all the memory right away... it might even reserve it (not allocate it). But these are details.
Normally, the JVM runs up to the Xms size before it starts serious old-generation garbage collection. Young-generation GC runs all the time. Normally GC is only noticeable when old-gen GC is running and the VM is between Xms and Xmx, or, in case you set them to the same value, has roughly hit Xmx.
If you need a lot of memory for short-lived objects, increase that memory area by setting the young area to... let's say 1 GB with -XX:NewSize=1g, because it is costly to move the "trash" from the young "buckets" into the old gen. In case it has not turned into real trash yet, the JVM checks for garbage, does not find any, copies it between the survivor spaces, and finally moves it into the old gen. So try to suppress the check for garbage in the young gen when you know that you do not have any, and postpone it somehow...
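A rough sketch of the combined settings, assuming the 3 GB heap from the question and a placeholder jar name:

java -Xms3072m -Xmx3072m -XX:NewSize=1g -jar app.jar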
Give it a try!
I believe your problem is not coming from where you think.
It looks like what's costing you the most are the GC cycles, not the allocation of heap size, if you are indeed creating and deleting lots of objects.
You should be focusing your effort on profiling, to find out exactly what is costing you so much, and work on refactoring that.
My hunch - object creation and deletion, and GC cycles.
In any case, -Xms should set the minimum heap size (check this against your JVM's documentation if it is not Sun's). Double-check to see exactly why you think that's not the case.
I have used Sun's VM, started with the minimum set to 14 gigs, and it does start off with that.
Maybe you should try setting both the Xms and Xmx values to the same amount, i.e. try this:
-Xms3072m -Xmx3072m
Why do you think the heap allocation is not right? An operating-system tool showing only 400m does not mean the memory isn't allocated.
I don't really get what you are after. Is the 400m and above already a problem, or does your program simply need that much? If you really need to deal with that much memory, and it seems you need a lot of objects, then you can do several things:
If the memory consumption doesn't match your gut feeling about the right amount, then you are probably leaking memory. That would explain why it "slows down" over time. Maybe you forgot to remove objects from one structure, so they don't get garbage collected and slow lookups and such down.
Your memory settings may be the trouble in themselves. Garbage collection is not run per se; it is only triggered when some threshold is reached. If you give it a big heap setting and your operating system has plenty of memory, garbage collection does not run often.
The characteristics you mentioned describe a scenario where a lot of objects are created and deleted again shortly afterwards. Otherwise garbage collection wouldn't be a problem (it is generational, after all). That means you have only "young" objects. Consider using an object pool if you need objects only for a short period of time; that would eliminate the garbage collection almost entirely.
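A minimal sketch of such a pool (names and sizes are illustrative, not from the question): objects are handed back and reused instead of being left for the garbage collector.

import java.util.ArrayDeque;

public class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    public synchronized byte[] borrow() {
        byte[] buf = free.poll();
        return buf != null ? buf : new byte[bufferSize];  // allocate only when the pool is empty
    }

    public synchronized void release(byte[] buf) {
        free.push(buf);                                   // keep for reuse instead of discarding
    }
}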
If you know there are good times in your code to run GC, you can consider running it manually to see if it changes anything. This is what you would need:
Runtime r = Runtime.getRuntime();
r.gc();
This is just for debugging purposes. The GC is doing a great job most of the time, so there shouldn't be a need to invoke it on your own.