When I run a Java program with a starting heap size of 3G (set by the -Xms3072m VM argument), the JVM doesn't start with that size. It starts with around 400m and then keeps acquiring more memory as required.
This is a serious problem for me. I know the JVM is going to need that amount after some time, and when it grows the heap as needed, it slows down. While the JVM acquires more memory, a considerable amount of time is spent in garbage collection, and I suppose memory acquisition is an expensive task.
How do I ensure that the JVM actually respects the starting heap size parameter?
Update: This application creates lots of objects, most of which die quickly. Some resulting objects are required to stay in memory (these get transferred out of the young generation). During this operation, all of these objects need to be in memory. After the operation, I can see that all the objects in the young generation are reclaimed successfully, so there are no memory leaks.
The same operation runs smoothly once the heap size reaches 3G. That clearly indicates the extra time is spent acquiring memory.
This is Sun JDK 5.
If I am not mistaken, Java asks the OS to reserve the memory. So if you ask for 3 GB as Xms, Java will ask the OS whether that much is available, but it won't necessarily commit all of it right away... it may only reserve it (not allocate it). But these are details.
Normally, the JVM runs up to the Xms size before it starts serious old generation garbage collection. Young generation GC runs all the time. Normally GC is only noticeable when old gen GC is running and the VM is between Xms and Xmx or, if you set them to the same value, has roughly hit Xmx.
If you need a lot of memory for short-lived objects, increase that area by setting the young generation to, say, 1 GB with -XX:NewSize=1g, because it is costly to move the "trash" from the young "buckets" into the old gen. If an object has not actually become garbage yet, the JVM checks for garbage, finds none, copies the object between the survivor spaces, and finally promotes it into the old gen. So try to reduce how often the young gen is checked for garbage when you know there is none yet, and postpone that work.
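If you want to verify what the JVM has actually committed, a quick sketch like this prints the runtime figures (the class name and launch line are just examples mirroring the flags discussed above):

// launched with, for example: java -Xms3072m -Xmx3072m -XX:NewSize=1g HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (-Xmx)      : " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("committed heap       : " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free within committed: " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}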
Give it a try!
I believe your problem is not coming from where you think.
It looks like what's costing you the most is the GC cycles, not the heap allocation, if you are indeed creating and discarding lots of objects.
You should be focusing your effort on profiling, to find out exactly what is costing you so much, and work on refactoring that.
My hunch - object creation and deletion, and GC cycles.
In any case, -Xms should set the minimum heap size (check this with your JVM if it is not Sun's). Double-check to see exactly why you think that's not the case.
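One way to double-check is the standard java.lang.management API (available since Java 5); the "init" figure it reports reflects -Xms as seen by the JVM itself. A minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class XmsCheck {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // init reflects -Xms, committed is what is currently backed, max reflects -Xmx
        System.out.println("init     : " + heap.getInit() / (1024 * 1024) + " MB");
        System.out.println("committed: " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("max      : " + heap.getMax() / (1024 * 1024) + " MB");
    }
}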
I have used Sun's VM and started with the minimum set to 14 GB, and it does start off with that.
Maybe you should try setting both the Xms and Xmx values to the same amount, i.e. try this:
-Xms3072m -Xmx3072m
Why do you think the heap allocation is not right? If an operating system tool shows only 400m, that does not mean the memory isn't allocated.
I don't really get what you are after. Is the 400m already a problem, or is your program supposed to need that much? If you really need to deal with that much memory, and it seems you need a lot of objects, then you can do several things:
If the memory consumption doesn't match your gut feeling of what the right amount should be, then you are probably leaking memory. That would explain why it "slows down" over time. Maybe you forgot to remove objects from some structure, so they never get garbage collected and slow down lookups and the like.
Maybe your memory settings are themselves the trouble. Garbage collection does not run on a schedule; it is only triggered when some threshold is reached. If you give the JVM a big heap and your operating system has plenty of memory, garbage collection will not run often.
The characteristics you mentioned describe a scenario where a lot of objects are created and discarded again shortly afterwards; otherwise garbage collection wouldn't be a problem (it is a generational GC). That means you mostly have "young" objects. Consider using an object pool if you only need objects for a short period of time; that can largely eliminate garbage collection, as sketched below.
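A minimal sketch of the object-pool idea (PooledBuffer and its size are made up for illustration; a real pool needs bounds and a reset step appropriate to your objects):

import java.util.ArrayList;
import java.util.List;

// Hypothetical reusable object: a fixed-size byte buffer.
class PooledBuffer {
    final byte[] data = new byte[8 * 1024];
}

class BufferPool {
    private final List<PooledBuffer> free = new ArrayList<PooledBuffer>();

    synchronized PooledBuffer borrow() {
        // Reuse an old buffer if one is available, otherwise create a new one.
        return free.isEmpty() ? new PooledBuffer() : free.remove(free.size() - 1);
    }

    synchronized void release(PooledBuffer b) {
        free.add(b); // returned buffers never become garbage
    }
}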
If you know there are good points in your code for running GC, you can consider running it manually to see if it changes anything. This is what you would need:
Runtime r = Runtime.getRuntime();
r.gc();
This is just for debugging purposes. The GC does a great job most of the time, so there shouldn't be any need to invoke it on your own.
Related
So, the gist of it is: a version of an application at my company has been having some memory issues lately, and I'm not sure of the best way to fix it that isn't just "allocate more memory", so I wanted to get some guidance.
For the application, it looks like the eden space fills up pretty quickly when there are concurrent users, so objects that won't be alive very long end up in the old generation. After running for a while, the old generation simply gets full and never seems to be cleaned up automatically, but manually running garbage collection in VisualVM clears it out (so I assume the old generation is full of dead objects).
Is there any suggested setting I could add so garbage collection runs on the old generation once it reaches a certain threshold? And are there any pitfalls in changing the old/eden ratio from the stock 2:1 to 1:1? For the application, the majority of objects created are what I would consider short-lived (from milliseconds to a few minutes).
It looks like the eden space fills up pretty quickly when there are concurrent users, so objects that won't be alive very long end up in the old generation.
This is called "premature promotion"
After running for a while, the old generation simply gets full
When it fills, the GC triggers a major or even a full collection.
never seems to be cleaned up automatically
In that case, it is either still in use or it is not completely full. It may appear to be almost full, but the GC will run when it is actually full.
but manually running garbage collection in VisualVM clears it out
So the old gen was almost, but not actually, full.
Is there any suggested setting I could add so garbage collection runs on the old generation once it reaches a certain threshold?
You can run System.gc(), but this means more work for your application and will slow it down. You don't want to be doing this.
If you use the CMS collector you can change the threshold at which it kicks in but unless you need low latency you might be better off leaving your settings as they are.
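For reference, the occupancy-fraction flags are the usual way to pin down when CMS starts; the 70% and heap sizes below are just placeholders to show the knobs:

java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms4g -Xmx4g YourApp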
And are there any pitfalls in changing the old/eden ratio from the stock 2:1 to 1:1?
You reduce the old gen, but you may halve the number of GCs you perform and double the amount of time an object can live without ending up in the old gen.
I work in the low latency space and usually set the young space to 24 GB and the old gen to 2 GB. I also use a lot of off heap data so I don't need much old gen. This is not an average use case, but it can work depending on your requirements.
If you are using < 32 GB, just adding a few more GB may be the simplest answer. Also, you can use something like -Xmn4g -Xms6g to set the young space and heap size directly and not worry about ratios.
For the application, the majority of objects created are what I would consider short-lived (from milliseconds to a few minutes)
In that case, ideally you want your eden space large enough so you have a minor collection every few minutes. This way most of your objects will die in the eden space, and not be copied around.
Note: in extreme cases it is possible to have an application produce less than one GB per hour of garbage and run all day with a 24 GB Eden space without even a minor collection.
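A rough back-of-the-envelope way to size eden for that goal (the allocation rate below is a made-up figure; measure your own from GC logs or jstat before relying on it):

public class EdenSizing {
    public static void main(String[] args) {
        long allocRateBytesPerSec = 20L * 1024 * 1024; // assumed ~20 MB/s of short-lived allocation
        long targetIntervalSec = 4 * 60;               // aim for a minor GC roughly every 4 minutes
        long edenBytes = allocRateBytesPerSec * targetIntervalSec;
        System.out.println("Eden should be roughly " + edenBytes / (1024 * 1024) + " MB"); // ~4800 MB
    }
}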
I have three questions regarding garbage collection
I am trying to study garbage collection in my application, and I can see that a full GC has occurred. From the GC logs I can see that the old gen has not even used half the memory allocated to it, so why would a full GC happen? Is there some other mechanism in the JVM that releases memory even when the old gen is not fully utilized?
What can be called a good GC trend? I mean, if a full GC occurs every 10-15 minutes, can I say the application is in a good state? I want to know what ideal GC behavior looks like for an application. I know it depends considerably on the application, but there should be something one can call ideal.
I have not set the NewSize or NewRatio property. The default NewRatio on the machine seems to be 2, but I can see that my young gen is using only 1/4 of the heap and the rest is used by the tenured gen. How is this possible? All I have defined is Xmx and the perm size.
A major collection can happen for several reasons; in most cases you can see the cause by using jstat -gccause.
A few of the reasons are:
- System.gc(), if called from your app or from any other code you use that relies on this call.
- When the old space occupancy fraction has been reached.
- When a PermGen collection takes place.
- Depending on the collector you are using: CMSIncrementalMode, for instance, seems to cause major collections before the old generation limit is reached.
Most likely System.gc() is the cause of your unexpected major collections; try the flag -XX:+DisableExplicitGC and see if you still get them.
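For example (the <pid> is a placeholder for your Java process id, and 5000 is a sampling interval in milliseconds):

jstat -gccause <pid> 5000
java -XX:+DisableExplicitGC -jar yourApp.jar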
--
There is no trend that describes all usages; it should be based on your needs. Does the way your GC behaves now affect the performance of your app/service? Do you get long stop-the-world pauses that hurt your throughput? What do you want to achieve? And most important, what garbage are you generating? Try to analyze a heap dump and see if you can reduce those numbers before you go and tune the collector.
--
It depends on the flags you are using, the version of the JVM, your OS, and so on. In general, GC ergonomics, and more specifically the option -XX:+UseAdaptiveSizePolicy, is responsible for sizing your generations.
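If you would rather pin the generation sizes yourself instead of letting ergonomics resize them, something along these lines works (the sizes are placeholders; pick values that fit your heap):

java -Xms4g -Xmx4g -XX:NewSize=1g -XX:MaxNewSize=1g -XX:-UseAdaptiveSizePolicy YourApp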
When I start the JVM it reserves at least Xms memory, right? That means this memory is private to the JVM process (it is malloc'ed), yes?
When the JVM needs to increase the heap, it reserves (mallocs) more memory. But how much?
I don't believe it reserves exactly as much as it needs; there is probably a certain step (pool?) size.
How can this "step size" be configured?
And all of that happens until Xmx is reached and OOM is thrown, right?
When does the JVM start GC? Not when it reaches Xmx, but when it reaches the reserved heap size (the top of this pool)?
If so, it is much better to set Xms close to Xmx to prevent many useless GCs.
I will have one huge GC instead of many little ones, but every GC freezes my JVM, so it is better to have just one, right?
When the JVM needs to increase the heap, it reserves (mallocs) more memory. But how much?
You shouldn't really care; it just works. Many people advise using equal Xmx and Xms so that the JVM allocates all the memory at startup. This is reasonable; read on.
How can this "step size" be configured?
It can't; it is completely implementation- and probably OS-dependent.
When does the JVM start GC? Not when it reaches Xmx, but when it reaches the reserved heap size (the top of this pool)?
GC is a bit more complicated than you think. A minor GC is executed when the young generation fills up. A major GC is triggered when there is no more space left in the old generation.
And all of that happens until Xmx is reached and OOM is thrown, right?
No; when Xmx is reached, the JVM stabilizes and nothing bad happens. An OutOfMemoryError is thrown when, immediately after a GC, the JVM is unable to find enough space for a new object (this is a major simplification).
If so, it is much better to set Xms close to Xmx to prevent many useless GCs.
Once again, you must learn how GC works. Using Xmx equal to Xms is a good choice because it avoids unnecessary allocations while the application runs (everything happens at startup, with no further overhead). GC has nothing to do with that.
I will have one huge GC instead of many little ones, but every GC freezes my JVM, so it is better to have just one, right?
Nope. A minor GC usually takes tens of milliseconds and is almost invisible, unless you are working on a real-time system. A major (stop-the-world) GC might take a few seconds and is certainly noticeable to end users. In a correctly tuned JVM, major GCs should occur very rarely.
You are correct about the meaning of the switches.
The way I remember the switches is:
Xms = ends with "s", like "starting memory".
Xmx = ends with "x", like "maximum memory".
It is up to a given JVM to decide how to move from the starting memory to the maximum memory. Assuming the two are not trivially close to each other, the allocation will happen in steps on all JVMs I'm aware of.
I'm not aware of any option to control the size of the steps in any JVM. There is certainly no standard option.
Different JVMs have different GC strategies. Some JVMs allow you to choose one of several GC strategies, controlled by a command-line switch.
Just curious: if you have 4 GB of free memory and you create 10k of garbage per minute, will the GC trigger every minute? In my situation it would be preferable to delay the GC or not execute it at all. Any thoughts or ideas on the best GC to use to accomplish something like this?
No, the default garbage collector (serial GC) will not run until the memory is full (either the eden space or the old generation space).
If you want to minimize garbage collector running time in your case, try to maximize the eden space:
java -Xms2g -Xmx3g -XX:NewSize=500m -XX:MaxNewSize=1024m yourApplication
The above settings will run your application with a 3g maximum heap, and the eden space will be 1g at most, so any newly allocated object will be stored in this space, and the (default) garbage collector will not run until this space is filled with objects.
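If you want to watch the eden space fill up from inside the application, the standard memory-pool beans can be polled; the pool-name match below is a heuristic, since the exact name depends on the collector (e.g. "Eden Space" or "PS Eden Space"):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class EdenWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Eden") && pool.getUsage() != null) {
                    long usedMb = pool.getUsage().getUsed() / (1024 * 1024);
                    long maxMb = pool.getUsage().getMax() / (1024 * 1024);
                    System.out.println("Eden used: " + usedMb + " MB of " + maxMb + " MB");
                }
            }
            Thread.sleep(5000); // poll every 5 seconds
        }
    }
}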
Is there any kind of hack to turn off GC execution until free memory is approaching zero?
No, there isn't. You don't have that level of control from within an application. On the other hand ...
If you have 4 GB of free memory and you create 10k of garbage per minute, will the GC trigger every minute?
No, it won't. The GC runs when the JVM decides it is the best time to do it. The "best time" depends on what the GC is trying to optimize for, e.g. throughput or minimal pauses:
In the former case, it will be when the Eden space (where new objects are created) doesn't have space for an object you want to create.
In the latter case, it will be when the amount of free memory in (typically) the Eden space drops below a (configurable) threshold level.
But generally speaking, you don't need to worry about the JVM running the garbage collector unnecessarily. It won't.
Run your app with the -verbose:gc option, and you can see logs of all garbage collection dumped to the screen, which will give you a good picture of what is going on in the virtual machine.
You can also use Visual VM to monitor behavior.
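As a concrete starting point, a launch line like this turns on the classic HotSpot GC logging (these are the pre-JDK 9 flag names; on JDK 9+ the unified -Xlog:gc option replaces them):

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar yourApp.jar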
Currently in our testing environment the max and min JVM heap sizes are set to the same value, basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance, or would giving the JVM a range be better?
Peter's answer is correct in that -Xms is allocated at startup and the heap will grow up to -Xmx (the max heap size), but it's a little misleading in how it is worded. (Sorry Peter, I know you know this stuff cold.)
Setting ms == mx effectively turns off this behavior. While this used to be a good idea with older JVMs, it is no longer the case. Growing and shrinking the heap lets the JVM adapt to increases in memory pressure, yet reduce pause times by shrinking the heap when memory pressure drops. Sometimes this behavior doesn't give you the performance benefit you'd expect, and in those cases it's best to set mx == ms.
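For reference, HotSpot's grow/shrink decisions are steered by the free-ratio flags; the values shown are the usual defaults, included only to illustrate which knobs are involved:

java -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -Xms512m -Xmx4g YourApp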
An OOME is thrown when more than 98% of the total time is spent collecting and the collections recover less than 2% of the heap. If you are not at the max heap size, the JVM will simply grow the heap so that you stay clear of those boundaries. You cannot get an OutOfMemoryError on startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
Regarding the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
                           min_gen_size());
I could do more digging, but I'd bet I'd find code that serves the same purpose in the ParNew, PSOldGen, and CMS Tenured implementations. In fact it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF the serial collector runs, and that should include a compaction, after which the top of the heap would most likely be clean and therefore eligible to be deallocated.
The main reason to set -Xms is if you need a certain heap at startup (it prevents OutOfMemoryErrors from happening at startup). As mentioned above, if you need the startup heap to match the max heap, that is when you would match them. Otherwise you don't really need it; it just makes the application take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load testing and using your application should give you a good feel for what to set them to. But it isn't the worst thing to set them to the same value at startup. For a lot of our apps, I actually start out with something like 128, 256, or 512 for min (startup) and one gigabyte for max (this is for non-application-server applications).
I just found this question on Stack Overflow which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap reduces GC invocations to the point that they happen only a few times. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, fairly large value (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it?
(No it doesn't save electricity if you don't use a memory cell)
The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will be tenured.
Especially during server startup, the load is even higher than normal. It's brain-dead to give the server only a small amount of memory to work with at this stage.
From what I see here at http://java-monitor.com/forum/showthread.php?t=427
the JVM under test starts with the Xms setting, but it WILL deallocate memory it doesn't need, and it will grow up to the Xmx mark when it needs to.
Unless you need a chunk of memory dedicated to a big memory consumer right from the start, there's not much point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.