Edit:
I'm changing my question to make it clearer.
Here is my app, running with the default GC in Java 8:
Yes, a lot of time is spent in GC, but the committed memory closely tracks the actually used memory (I know this is not the desired behaviour).
Now, let's take a look at what happens when we switch to the G1 GC:
You can see the committed memory is much, much larger than the used memory.
You might be wondering what changed between the first and the second execution:
The first one goes with JVM_ARGS: -Xms1024m -Xmx20048m -XX:MaxGCPauseMillis=100
The second one: -XX:+UseG1GC -Xms1024m -Xmx20048m -XX:MaxGCPauseMillis=100 -XX:+UseStringDeduplication
I've been reading a lot about the G1 GC, but I can't understand why this simple change makes my memory behave so differently.
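(For reference, the used/committed/max figures from the monitoring tool can also be read from inside the app with the standard MemoryMXBean; a minimal sketch, class name made up:)
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        // Heap as the JVM sees it: live data (used) vs. memory claimed from the OS (committed) vs. the -Xmx ceiling (max).
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("used=%d MB committed=%d MB max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
    }
}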
As described in this document from Oracle, -Xmx sets the maximum available memory for the JVM and -Xms the initial memory. So with your values the JVM uses 1024 MB at start-up and can grow up to the -Xmx limit while running; this setting is more a limit than a consumption value.
Compare it to a bucket: if it can hold 10 litres, you don't have to pour 10 litres into it; you could also fill it with only 2 litres.
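A minimal sketch of that distinction, using the standard Runtime API (class name made up):
public class BucketDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                // the bucket's capacity: the -Xmx limit
        long committed = rt.totalMemory();        // how much heap the JVM has claimed so far
        long used = committed - rt.freeMemory();  // the part of that heap actually holding objects
        System.out.printf("max=%d MB committed=%d MB used=%d MB%n",
                max >> 20, committed >> 20, used >> 20);
    }
}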
Related
Why does the JVM have an -Xms option? Why do we care about a minimum heap size? Why not just 0? It's super easy to allocate RAM, so I don't see the point of forcing a minimum heap size.
In my searching, I see that it's customary to set -Xms (minimum heap size) and -Xmx (maximum heap size) to the same value.
I am having a hard time finding a clear and rational basis for this custom or for why -Xms even exists. Rather, I find a lot of communal reinforcement. On occasion, I see it justified by a flaky theory, such as the claim that the JVM is unusually slow at allocating additional RAM as it grows the heap.
While this came up as I was optimizing Solr, it seems that fussing with the heap size is a common consideration with JVMs.
As a curious data point, you'll see two memory-usage dips here:
Before dip 1: -Xms14g -Xmx14g
Between dip 1 and 2: -Xms0g -Xmx14g
After dip 2: -Xmx14g
After dip 2, Solr reported to me that it was only using a couple hundred MBs of heap space even though the JVM gobbled up many GBs of RAM.
In case it matters, I am on the current release of OpenJDK.
To summarize, is there a rational and fact-based basis for:
Setting -Xms to something other than 0.
The custom of setting -Xms and -Xmx to the same value.
Why -Xms even exists.
I think a fact-based answer will lead to a more informed approach to managing heap-size options.
You have three questions.
First question:
Is there a rational and fact-based basis for setting -Xms to something other than 0?
Looking as far back as version 8, we can see (among other things) the following for -Xms:
Sets the minimum and the initial size (in bytes) of the heap.
So, setting -Xms to 0 would mean setting the heap to 0. I'm not sure what your expectation would be, but the JVM needs at least some amount of heap to do things (like run a garbage collector).
Second question:
Is there a rational and fact-based basis for the custom of setting -Xms and -Xmx to the same value?
Yes, if you expect to use a certain amount of memory over time, but not necessarily right away. You could allocate the full amount of memory up front so that any allocation costs are out of the way.
For example, consider an app that launches needing less than 1GB of memory, but over time it grows (normally, correctly) to 4GB. You could run that app with -Xms1g -Xmx4g – so, start with 1GB and do periodic allocations over time to reach 4GB. Or you could run with -Xms4g -Xmx4g – tell the JVM to allocate 4GB now, even if it's not going to be used right away.
Allocating memory from an underlying operating system has a cost, and might be expensive enough that you'd like to do that early in the application life, instead of some later point where that cost might be more impactful.
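Concretely, the two launch styles from that example would look something like this (the jar name is made up; -XX:+AlwaysPreTouch is an optional extra that also makes the OS physically back the committed pages at start-up):
java -Xms1g -Xmx4g -jar app.jar                      # start at 1 GB, grow toward 4 GB as needed
java -Xms4g -Xmx4g -XX:+AlwaysPreTouch -jar app.jar  # commit (and touch) the full 4 GB up front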
Third question:
Is there a rational and fact-based basis for why -Xms even exists?
Yes, it allows tuning JVM behavior. Some applications don't need to do this, but others do. It's useful to be able to set values for lower, upper, or both (lower and upper together). Way beyond this, there's a whole world of garbage collector tuning, too.
A little more detail on how -Xms is used (below) could give you some initial garbage collection topics to read about (old generation, young generation):
If you do not set this option, then the initial size will be set as the sum of the sizes allocated for the old generation and the young generation.
We are basically tuning our JVM options.
-J-Xms1536M -J-Xmx1536M -J-Xss3M -J-Djruby.memory.max=1536M -J-Djruby.thread.pool.enabled=true -J-Djruby.compile.mode=FORCE -J-XX:NewRatio=3 -J-XX:NewSize=256M -J-XX:MaxNewSize=256M -J-XX:+UseParNewGC -J-XX:+CMSParallelRemarkEnabled -J-XX:+UseConcMarkSweepGC -J-XX:CMSInitiatingOccupancyFraction=75 -J-XX:+UseCMSInitiatingOccupancyOnly -J-XX:SurvivorRatio=5 -J-server -J-Xloggc:/home/deploy/gcLog/gc.log -J-XX:+PrintGCDateStamps -J-XX:+PrintGCDetails -J-XX:+PrintGCApplicationStoppedTime -J-XX:+PrintSafepointStatistics -J-XX:PrintSafepointStatisticsCount=1
We have set both -J-Xms1536M and -J-Xmx1536M to 1536M. Now:
If I understood this correctly, -J-Xmx represents the maximum size of the heap.
The system has 4 cores and 15 GB of RAM.
But when I check the RSS (using top) of my running Java process, I see it is consuming more than the -J-Xmx1536M value, around ~2 GB.
Now clearly, the JVM's memory has grown beyond the value specified by -J-Xmx.
So my questions are:
Why am I not seeing any Java out-of-memory error?
And what is an ideal setting for -J-Xmx with 4 cores and 15 GB of RAM (given that no process other than the Java application is running on the system)?
Why am I not seeing any Java out-of-memory error?
Because you did not run out of heap memory. Start VisualVM and examine the process after setting -Xmx; you'll notice there's a non-heap region called Metaspace (not capped by default unless -XX:MaxMetaspaceSize is set). Besides that, there are other ways the process can use additional memory, e.g. the code cache (JIT-compiled native code).
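If you want to see where the memory beyond the heap goes, Native Memory Tracking (available since JDK 8) can break it down. A rough sketch, with <pid> and app.jar as placeholders (when launching through JRuby, as in the question, the flag would be passed with the -J prefix):
java -XX:NativeMemoryTracking=summary -Xms1536m -Xmx1536m -jar app.jar
jcmd <pid> VM.native_memory summary   # reports heap, Metaspace, code cache, thread stacks, GC structures, etc.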
And what is an ideal setting for -J-Xmx with 4 cores and 15 GB of RAM (given that no process other than the Java application is running on the system)?
There's no "clear" answer for that; it varies from application to application, so you should monitor your memory usage under various scenarios. A first step might be to set the heap high, but if you're not using most of it and you have a memory leak, that will complicate things.
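One simple way to watch it over time is jstat from the JDK (<pid> is a placeholder; the interval accepts s or ms units):
jstat -gc <pid> 5s   # every 5 seconds, prints capacity (*C) and used (*U) columns for eden, survivors, old gen and Metaspace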
Currently in our testing environment the max and min JVM heap sizes are set to the same value, basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance, or would giving the JVM a range be better?
Peter's answer is correct in that -Xms is allocated at startup and the heap will grow up to -Xmx (the max heap size), but it's a little misleading in how he has worded it. (Sorry Peter, I know you know this stuff cold.)
Setting ms == mx effectively turns off this behavior. While this used to be a good idea in older JVMs, it is no longer the case. Growing and shrinking the heap allows the JVM to adapt to increases in pressure on memory yet reduce pause time by shrinking the heap when memory pressure is reduced. Sometimes this behavior doesn't give you the performance benefits you'd expect and in those cases it's best to set mx == ms.
An OOME is thrown when more than 98% of the time is spent collecting and the collections recover less than 2% of the heap. If you are not at the max heap size, the JVM will simply grow the heap so that you stay outside those boundaries. You cannot get an OutOfMemoryError at startup unless your heap hits the max heap size and meets the other conditions that define an OutOfMemoryError.
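For reference, that 98%/2% rule is HotSpot's GC overhead limit; as far as I know it corresponds to these flags and defaults (worth double-checking against your JVM version):
-XX:+UseGCOverheadLimit   # on by default; enables the check
-XX:GCTimeLimit=98        # trip if more than 98% of total time goes to GC...
-XX:GCHeapFreeLimit=2     # ...and less than 2% of the heap is recovered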
As for the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector:
// Desired size is clamped between the generation's minimum and maximum sizes.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
                           min_gen_size());
I could do more digging, but I'd bet I'd find code that serves the same purpose in the ParNew, PSOldGen, and CMS Tenured implementations. In fact, it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF, the serial collector will run, and that should include a compaction, after which the top of the heap would most likely be clean and therefore eligible to be deallocated.
The main reason to set -Xms is if you need a certain heap size at start-up (it prevents OutOfMemoryErrors from happening at start-up). As mentioned above, if you need the start-up heap to match the max heap, that is when you would match them. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load-testing and exercising your application should give you a good feel for what to set them to. But it isn't the worst thing to set them to the same value at start-up. For a lot of our apps, I actually start out with something like 128, 256, or 512 MB for min (start-up) and one gigabyte for max (this is for non-application-server applications).
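In flag form, the kind of start-up setting described above would be something like the following (values taken from the paragraph, jar name made up):
java -Xms256m -Xmx1g -jar app.jar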
I just found this question on Stack Overflow, which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very few times. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, pretty large value (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it?
(No it doesn't save electricity if you don't use a memory cell)
The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will be tenured.
Especially during server startup, the load is even higher than normal. It's brain-dead to give the server a small amount of memory to work with at this stage.
From what I see here at http://java-monitor.com/forum/showthread.php?t=427
the JVM under test begins with the Xms setting, but WILL deallocate memory it doesn't need, and it will take it up to the Xmx mark when it needs it.
Unless you need a chunk of memory dedicated to a big memory consumer right from the start, there's not much point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.
My question is simple. I have an application that specifies the "-Xmx 3G" command-line option. Does this mean that no garbage collection will take place in the application until all (or, say, 80%) of the 3 GB of memory is consumed? Any further reading material would be appreciated as well.
No. A minor GC can occur even before the minimum memory (-ms) has been reached. The JVM reserves the maximum memory (-mx) on startup; however, you can get full collections before this size is reached.
No. A simple test would demonstrate that!
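A minimal sketch of such a test (my own illustration): run it with -Xmx3g; it allocates roughly 1 GB of short-lived garbage, and the collection counts normally rise long before 3 GB is in use:
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcBeforeMax {
    public static void main(String[] args) {
        long sink = 0;
        for (int i = 0; i < 1_000_000; i++) {
            byte[] garbage = new byte[1024];  // ~1 GB of short-lived allocations in total
            sink += garbage[0];               // keep the allocations from being optimised away
        }
        long collections = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            collections += gc.getCollectionCount();
        }
        System.out.println("sink=" + sink + ", GC cycles so far: " + collections);
    }
}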
When I run a Java program with a starting heap size of 3 GB (set by the -Xms3072m VM argument), the JVM doesn't start with that size. It starts with 400 MB or so and then keeps acquiring more memory as required.
This is a serious problem for me. I know the JVM is going to need the said amount after some time, and when the JVM grows its memory as needed, it slows down. While the JVM is acquiring more memory, a considerable amount of time is spent in garbage collection, and I suppose memory acquisition is an expensive task.
How do I ensure that JVM actually respects the start heap size parameter?
Update: This application creates lots of objects, most of which die quickly. Some resulting objects are required to stay in memory (and get promoted out of the young generation). During this operation, all these objects need to be in memory. After the operation, I can see that all the objects in the young heap are reclaimed successfully, so there are no memory leaks.
The same operation runs smoothly once the heap size reaches 3 GB. That clearly indicates the extra time is spent acquiring memory.
This is Sun JDK 5.
If I am not mistaken, Java tries to get a reservation for the memory from the OS. So if you ask for 3 GB via Xms, Java will ask the OS whether that much is available, but it will not start with all of that memory right away... it might even only reserve it (not allocate it). But these are details.
Normally, the JVM runs up to the Xms size before it starts serious old-generation garbage collection. Young-generation GC runs all the time. Normally GC is only noticeable when old-gen GC is running and the VM is between Xms and Xmx or, if you set them to the same value, has roughly hit Xmx.
If you need a lot of memory for short-lived objects, increase that memory area by setting the young generation to, let's say, 1 GB (-XX:NewSize=1g), because it is costly to move the "trash" from the young "buckets" into the old gen. If it has not turned into real trash yet, the JVM checks for garbage, does not find any, copies it between the survivor spaces, and finally moves it into the old gen. So try to avoid checking for garbage in the young gen when you know you do not have any, and postpone it somehow...
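Put together with the question's own heap settings, that suggestion would look roughly like this (MaxNewSize added by me so the young generation stays fixed at that size):
-Xms3072m -Xmx3072m -XX:NewSize=1g -XX:MaxNewSize=1g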
Give it a try!
I believe your problem is not coming from where you think.
It looks like what's costing you the most is the GC cycles, not the growth of the heap, if you are indeed creating and discarding lots of objects.
You should be focusing your effort on profiling, to find out exactly what is costing you so much, and work on refactoring that.
My hunch: object creation and deletion, and GC cycles.
In any case, -Xms should set the minimum heap size (check this with your JVM if it is not Sun's). Double-check to see exactly why you think that's not the case.
I have used Sun's VM and started with the minimum set to 14 GB, and it does start off with that.
Maybe you should try setting both the Xms and Xmx values to the same amount, i.e. try this:
-Xms3072m -Xmx3072m
Why do you think the heap allocation is not right? The fact that an operating-system tool shows only 400 MB does not mean the memory isn't allocated.
I don't really get what you are after. Is the 400 MB and above already a problem, or is your program supposed to need that much? If you really need to deal with that much memory, and it seems you need a lot of objects, then you can do several things:
If the memory consumption doesn't match your gut feeling of the right amount, then you are probably leaking memory. That would explain why it "slows down" over time. Maybe you forgot to remove objects from some structure, so they don't get garbage collected and are slowing down lookups and such.
Your memory settings may themselves be the trouble. Garbage collection is not run continuously; it is only triggered when some threshold is reached. If you give it a big heap setting and your operating system has plenty of memory, garbage collection does not run often.
The characteristics you mentioned describe a scenario where a lot of objects are created and shortly afterwards discarded again; otherwise garbage collection wouldn't be a problem (it is a generational GC). That means you mostly have "young" objects. Consider using an object pool if you only need objects for a short period of time; that would largely eliminate the garbage collection.
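A minimal sketch of such a pool (my own illustration, written with Java 8 syntax for brevity even though the question mentions JDK 5; a real pool needs a size cap and a thread-safety strategy):
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Hands out reusable instances instead of allocating new ones, so fewer objects reach the GC.
class SimplePool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    SimplePool(Supplier<T> factory) { this.factory = factory; }

    T borrow() {
        T obj = free.poll();   // reuse a pooled instance if one is available
        return obj != null ? obj : factory.get();
    }

    void release(T obj) {
        free.push(obj);        // caller must reset the object's state before reuse
    }
}

// usage: SimplePool<StringBuilder> pool = new SimplePool<>(StringBuilder::new);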
If you know there are good points in your code for running GC, you can consider invoking it manually to see if it changes anything. This is what you would need:
Runtime r = Runtime.getRuntime();
r.gc(); // suggests a full collection; the JVM is free to ignore this hint
This is just for debugging purposes. The GC does a great job most of the time, so there shouldn't be any need to invoke it yourself.