The heap size of my Java application keeps growing until it reaches the max heap size of 1G. Why is that?
I start my application with those parameters:
java -Xmx1G -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -XX:ShenandoahUncommitDelay=5000 -XX:ShenandoahGuaranteedGCInterval=10000 -Dlog4j.configurationFile=./log4j2.xml -jar application.jar
Edit:
When I restart my application, you can see there isn't a gap between "Heap size" and "Used heap", but this gap gets bigger and bigger. Can I somehow limit that gap?
I am not particularly familiar with the way that the new Shenandoah GC normally behaves. However, there is nothing particularly alarming (to me) with that graph.
According to https://shipilev.net/talks/devoxx-Nov2017-shenandoah.pdf, the modus operandi (MO) of this GC with regard to memory utilization is a bit different from some other collectors.
"We shall take all the memory when we need it, but we shall also give it back when we don’t".
If there is significant load on the allocator (i.e. lots of objects being allocated) Shenandoah will aggressively expand the heap. This is based on the observation that a low-pause GC is most efficient (and most likely to keep up!) if it has plenty of space to work in.
But the flipside is that if your system is idle, the GC will give memory back to the OS more freely than many other GCs.
This seems to fit the memory graph in your question.
The other thing to note is that the heap size (orange) is nowhere near your max-heap limit. If you get close to that limit, the GC will stop growing the heap.
Finally, note that you can apparently encourage Shenandoah to uncommit unused heap memory sooner by using a smaller value for the -XX:ShenandoahUncommitDelay=<millis> option. However, it is recommended NOT to make it too small, because that is liable to slow down the allocator.
(Source: https://www.javacodegeeks.com/2017/11/minimize-java-memory-usage-right-garbage-collector.html)
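For illustration only (the jar name is a placeholder and 1000 ms is an arbitrary example, not a recommendation), lowering the delay from the 5000 ms used in the question should make the heap shrink back sooner during idle periods:

java -Xmx1G -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -XX:ShenandoahUncommitDelay=1000 -XX:ShenandoahGuaranteedGCInterval=10000 -jar application.jar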
Related
I have an application which reads XML responses from a server.
This works nicely until I try to read ~200,000 XML responses. When I reach that magic number, processing slows down by a factor of 10.
When I let it run, at some point the JVM says that the GC is taking 90% of CPU time. So I first tried to optimize my code: using fields instead of local variables, calling intern() on my strings (since I have a lot of duplicates), and so on.
This helped a bit, but it still slowed down after approximately 100k XML files. I then used VisualVM to see what was going on, and what I saw was:
Up until 18:02, everything works fine. Then suddenly the garbage collector goes bananas and steals CPU time, which in turn stabilizes memory consumption. I would understand this if we were hitting the maximum heap size, but I've set the max heap size to 8 GB.
There is nothing different happening at that point; it's basically a giant loop doing the same thing over and over.
What is happening and what can I do in this situation?
Your heap size is insufficient for your workload. You may have a memory leak, or it may just be specific to your application.
The normal pattern for the parallel GC algorithm (which you have enabled) is:
Young GC
Young GC
...
Full GC
However, once the old space is full (~5.6 GiB for your setup), the pattern switches to:
Full GC
Full GC
Full GC
...
A Full GC is an order of magnitude longer, so the application stays in GC pauses (with high CPU consumption) almost all the time. VisualVM charts GC CPU usage incorrectly; in reality the blue spikes are as high as the orange line on the CPU chart.
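If you want to confirm that pattern yourself, GC logging is more reliable than the VisualVM chart. A hedged example, assuming a Java 8 era HotSpot JVM (on Java 9+ the unified -Xlog:gc* flag replaces these options) and a placeholder jar name:

java -Xmx8g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar yourapp.jar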
If memory usage grows due to a memory leak, you should address that.
If it is specific to your application's design, you need to increase the old space by
either increasing the total heap size,
or reducing the young space (the -Xmn<size> option) to leave more memory for the old space (see the example below).
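For illustration only (the sizes are placeholders that you should tune against your own measurements, and the jar name is hypothetical), either of these would give the old generation more room:

java -Xmx12g -jar yourapp.jar
java -Xmx8g -Xmn1g -jar yourapp.jar

The first simply raises the total heap; the second keeps the 8 GB heap but shrinks the young generation so that more of it is left for the old generation.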
I am having trouble with Java memory consumption.
I'd like to say to Java something like this: "you have 8GB of memory, please use it, and only it. Only if you really can't put all your resources in this memory pool, then fail with OOM".
I know there are standard parameters like -Xmx, but they limit only the heap. There are also plenty of other parameters, I know. The problems with these parameters are:
They aren't relevant. I don't want to limit the heap size to 6GB (and trust that native memory won't take more than 2GB). I do want to limit all the memory (heap, native, whatever). And do that effectively, not just saying "-Xmx1GB" - to be safe.
There are too many different parameters related to memory, and I don't know how to configure all of them to achieve this goal.
So, I don't want to go there and care about heap, perm and whatever types of memory. My high-level expectation is: since there is only 8GB, and some static memory is needed - take the static memory from the 8GB, and carefully split the remaining memory between other dynamic memory entities.
Also, ulimit and similar things don't work. I don't want to kill the Java process once it consumes more memory than expected. I want Java to do its best not to reach the limit in the first place, and only if it really, really can't, kill the process.
And I'm OK with defining even 100 Java parameters, why not. :) But then I need help with the full list of needed parameters (for, say, Java 8).
Have you tried -XX:MetaspaceSize?
Is this what you need?
Please read this article: http://karunsubramanian.com/websphere/one-important-change-in-memory-management-in-java-8/
Keep in mind that this is only valid for Java 8.
AFAIK, there is no java command line parameter or set of parameters that will do that.
Your best bet (IMO) is to set the max heap size and the max metaspace size and hope that other things are going to be pretty static / predictable for your application. (It won't cover the size of the JVM binary and it probably won't cover native libraries, memory mapped files, stacks and so on.)
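As a rough sketch of that approach (the numbers and the jar name are made up; you would need to leave enough headroom for the non-heap items listed above):

java -Xmx6g -XX:MaxMetaspaceSize=512m -Xss1m -jar yourapp.jar

Here -Xmx caps the heap, -XX:MaxMetaspaceSize caps the metaspace, and -Xss caps each thread stack, but the total process footprint can still exceed the sum of those values.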
In a comment you said:
So I'm forced to have a significant amount of memory unused to be safe.
I think you are worrying about the wrong thing here. Assuming that you are not constrained by address space or swap space limitations, memory that is never used doesn't matter.
If a page of your address space is not used, the OS will (in the long term) swap it out, and give the physical RAM page to something else.
Pages in the heap won't be in that situation in a typical Java application. (Address space pages will cycle between in-use and free as the GC moves objects within and between "spaces".)
However, the flip-side is that a GC needs the total heap size to be significantly larger than the sum of the live objects. If too much of the heap is occupied with reachable objects, the interval between garbage collection runs decreases, and your GC ergonomics suffer. In the worst case, a JVM can grind to a halt as the time spent in the GC tends to 100%. Ugly. The GC overhead limit mechanism prevents this, but that just means that your JVM gets an OOME sooner.
So, in the normal heap case, a better way to think about it is that you need to keep a portion of memory "unused" so that the GC can operate efficiently.
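If you want to see how much of that headroom the GC actually has at run time, here is a minimal sketch using the standard Runtime API (note that the "used" figure is only approximate, since it includes garbage that has not been collected yet):

Runtime rt = Runtime.getRuntime();
long max = rt.maxMemory();          // roughly the -Xmx limit
long committed = rt.totalMemory();  // heap currently committed by the JVM
long used = committed - rt.freeMemory();
System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
        used >> 20, committed >> 20, max >> 20);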
When I start the JVM, it reserves at least Xms memory, right? That means this memory is private to the JVM process (it is malloc'ed), yes?
When the JVM needs to increase the heap, it reserves (mallocs) more memory. But how much?
I do not believe it reserves exactly as much as it needs; probably there is a certain step (pool?) size.
How can this "step size" be configured?
And all that happens until Xmx is reached and an OOM is thrown, right?
When does the JVM start a GC? Not when it reaches Xmx, but when it reaches the reserved heap size (the top of this pool)?
If so, it is much better to set Xms close to Xmx to prevent many useless GCs.
I will have one huge GC instead of many little ones, but every GC freezes my JVM, so it is better to have one, right?
When the JVM needs to increase the heap, it reserves (mallocs) more memory. But how much?
You shouldn't really care; it just works. Many people advise using equal Xmx and Xms so that the JVM allocates all the memory at startup. This is reasonable; read on.
How this "step size" could be configured?
It can't, it is completely implementation and probably OS dependant.
When does the JVM start a GC? Not when it reaches Xmx, but when it reaches the reserved heap size (the top of this pool)?
GC is a bit more complicated than you think. A minor GC is executed when the young generation fills up. A major GC is triggered when there is no more space left in the old generation.
And all that happens until Xmx is reached and an OOM is thrown, right?
No; when Xmx is reached, the JVM stabilizes and nothing bad happens. An OutOfMemoryError is thrown when, immediately after a GC, the JVM is unable to find enough space for a new object (this is a major simplification).
If so, it is much better to set Xms close to Xmx to prevent many useless GCs.
Once again, you should learn how GC works. Setting Xmx equal to Xms is a good choice because it avoids unnecessary allocations while the application runs (everything happens at startup, with no further overhead). GC has nothing to do with that.
instead of many little ones, but every GC freezes my JVM, so it is better to have one, right?
Nope. A minor GC usually takes tens of milliseconds and is almost invisible unless you are working on a real-time system. A major (stop-the-world) GC might take a few seconds and is certainly noticeable to end users. In a correctly tuned JVM, major GCs should occur very rarely.
You are correct about the meaning of the switches.
The way I remember the switches is
Xms = ends with "s", as in "starting memory".
Xmx = ends with "x", as in "maximum memory".
It is up to a given JVM to decide how to move from the starting memory to the maximum memory. Assuming the two are not trivially close to each other, the allocation will happen in steps on all JVMs I'm aware of.
I'm not aware of any option to control the size of the steps in any JVM. There is certainly no standard option.
Different JVMs have different GC strategies. Some JVMs let you choose among multiple GC strategies, controlled by a command-line switch.
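For example, on HotSpot the collector is selected with flags like the following (use exactly one per run; which ones are available and which is the default depends on the JVM version, and the jar name is a placeholder):

java -XX:+UseSerialGC -jar yourapp.jar
java -XX:+UseParallelGC -jar yourapp.jar
java -XX:+UseG1GC -jar yourapp.jar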
Currently in our testing environment the max and min JVM heap sizes are set to the same value, basically as much as the dedicated server machine will allow for our application. Is this the best configuration for performance, or would giving the JVM a range be better?
Peter's answer is correct in that -Xms is allocated at startup and it will grow up to -Xmx (max heap size), but it's a little misleading in how he has worded his answer. (Sorry Peter, I know you know this stuff cold.)
Setting ms == mx effectively turns off this behavior. While this used to be a good idea in older JVMs, it is no longer the case. Growing and shrinking the heap allows the JVM to adapt to increases in pressure on memory yet reduce pause time by shrinking the heap when memory pressure is reduced. Sometimes this behavior doesn't give you the performance benefits you'd expect and in those cases it's best to set mx == ms.
An OOME is thrown when more than 98% of the time is spent collecting and the collections cannot recover more than 2% of the heap. If you are not at the max heap size, the JVM will simply grow the heap so that you stay clear of those boundaries. You cannot get an OutOfMemoryError on startup unless your heap hits the max heap size and also meets the other conditions that define an OutOfMemoryError.
Regarding the comments that have come in since I posted: I don't know what the JMonitor blog entry is showing, but this is from the PSYoung collector.
size_t desired_size = MAX2(MIN2(eden_plus_survivors, gen_size_limit()),
min_gen_size());
I could do more digging, but I'd bet I'd find code that serves the same purpose in the ParNew, PSOldGen, and CMS Tenured implementations. In fact, it's unlikely that CMS would be able to return memory unless there has been a Concurrent Mode Failure. In the case of a CMF, the serial collector runs, and that should include a compaction, after which the top of the heap would most likely be clean and therefore eligible to be deallocated.
The main reason to set -Xms is if you need a certain heap at startup. (It prevents OutOfMemoryErrors from happening at startup.) As mentioned above, if you need the startup heap to match the max heap, that is when you would set them to the same value. Otherwise you don't really need it; it just asks the application to take up more memory than it may ultimately need. Watching your memory use over time (profiling) while load testing and using your application should give you a good feel for what to set them to. But it isn't the worst thing to set them to the same value at startup. For a lot of our apps, I actually start out with something like 128, 256, or 512 MB for min (startup) and one gigabyte for max (this is for non-application-server applications).
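As a concrete illustration of those numbers (the sizes are just the examples from the paragraph above and the jar name is a placeholder, not a recommendation for your workload):

java -Xms256m -Xmx1g -jar yourapp.jar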
I just found this question on Stack Overflow which may also be helpful: side-effect-for-increasing-maxpermsize-and-max-heap-size. Worth a look.
AFAIK, setting both to the same size does away with the additional step of heap resizing, which might be in your favour if you pretty much know how much heap you are going to use. Also, having a large heap size reduces GC invocations to the point that they happen very rarely. In my current project (risk analysis of trades), our risk engines have both Xmx and Xms set to the same, pretty large value (around 8 GiB). This ensures that even after an entire day of invoking the engines, almost no GC takes place.
Also, I found an interesting discussion here.
Definitely yes for a server app. What's the point of having so much memory but not using it?
(No, it doesn't save electricity if you don't use a memory cell.)
The JVM loves memory. For a given app, the more memory the JVM has, the less GC it performs. The best part is that more objects will die young and fewer will be tenured.
Especially during server startup, the load is even higher than normal. It's brain-dead to give the server a small amount of memory to work with at this stage.
From what I see here at http://java-monitor.com/forum/showthread.php?t=427
the JVM under test begins with the Xms setting, but it WILL deallocate memory it doesn't need, and it will grow up to the Xmx mark when it needs to.
Unless you need a chunk of memory dedicated to a big memory consumer initially, there's not much point in setting a high Xms=Xmx. It looks like deallocation and allocation occur even with Xms=Xmx.
When I run a Java program with a starting heap size of 3G (set by the -Xms3072m VM argument), the JVM doesn't start with that size. It starts with 400m or so and then keeps acquiring more memory as required.
This is a serious problem for me. I know the JVM is going to need that amount after some time. And when the JVM grows its memory as needed, it slows down. While the JVM acquires more memory, a considerable amount of time is spent in garbage collection. And I suppose memory acquisition is an expensive task.
How do I ensure that the JVM actually respects the start heap size parameter?
Update: This application creates lots of objects, most of which die quickly. Some resulting objects are required to stay in memory (these get transferred out of the young heap). During this operation, all these objects need to be in memory. After the operation, I can see that all the objects in the young heap are reclaimed successfully, so there are no memory leaks.
The same operation runs smoothly once the heap size reaches 3G. That clearly indicates that the extra time is spent acquiring memory.
This is Sun JDK 5.
If I am not mistaken, Java tries to get the memory reservation from the OS. So if you ask for 3 GB as Xms, Java will ask the OS whether this is available, but not start with all the memory right away... it might even reserve it (not allocate it). But these are details.
Normally, the JVM runs up to the Xms size before it starts serious old-generation garbage collection. Young-generation GC runs all the time. Normally GC is only noticeable when old-gen GC is running and the VM is between Xms and Xmx, or, in case you set them to the same value, has roughly hit Xmx.
If you need a lot of memory for short-lived objects, increase that memory area by setting the young generation to, let's say, 1 GB with -XX:NewSize=1g, because it is costly to move the "trash" from the young "buckets" into the old gen. If something has not turned into real trash yet, the JVM checks for garbage, does not find any, copies it between the survivor spaces, and finally moves it into the old gen. So try to reduce the garbage checks in the young gen when you know that you do not have any garbage yet, and postpone this somehow...
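A hedged example of that suggestion (1 GB is just the value from the paragraph above, the jar name is a placeholder, and on HotSpot you would typically pair -XX:NewSize with -XX:MaxNewSize, or use -Xmn which sets both):

java -Xms3072m -Xmx3072m -XX:NewSize=1g -XX:MaxNewSize=1g -jar yourapp.jar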
Give it a try!
I believe your problem is not coming from where you think.
It looks like what's costing you the most is the GC cycles, not the growth of the heap, if you are indeed creating and discarding lots of objects.
You should be focusing your effort on profiling, to find out exactly what is costing you so much, and work on refactoring that.
My hunch - object creation and deletion, and GC cycles.
In any case, -Xms should set the minimum heap size (check this with your JVM if it is not Sun's). Double-check to see exactly why you think that's not the case.
I have used Sun's VM and started with the minimum set to 14 gigs, and it does start off with that.
Maybe you should try setting both the Xms and Xmx values to the same amount, i.e. try this:
-Xms3072m -Xmx3072m
Why do you think the heap allocation is not right? An operating system tool showing only 400m does not mean the memory isn't allocated.
I don't really get what you are after. Is the 400m and above already a problem, or is your program supposed to need that much? If you really need to deal with that much memory, and it seems you need a lot of objects, then you can do several things:
If the memory consumption doesn't match your gut feeling of the right amount, then you probably are leaking memory. That would explain why it "slows down" over time. Maybe you forgot to remove objects from some structure, so they don't get garbage collected and are slowing down lookups and such.
Your memory settings may themselves be the problem. Garbage collection is not run continuously; it is only triggered when some threshold is reached. If you use a big heap setting and your operating system has plenty of memory, garbage collection does not run often.
The characteristics you mentioned describe a scenario where a lot of objects are created and discarded again shortly afterwards; otherwise the garbage collection wouldn't be a problem (it is a generational GC). That means you have mostly "young" objects. Consider using an object pool if you need objects only for a short period of time; that would largely eliminate the garbage collection (see the sketch below).
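Here is a minimal, hypothetical object-pool sketch, just to show the idea of reusing instances instead of letting them become garbage; the class name SimplePool is made up, a real pool would also need a size limit, and for concurrent use you would want a thread-safe queue:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Tiny single-threaded object pool: borrow() reuses a previously
// released instance when one is available, otherwise creates a new one.
class SimplePool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    T borrow() {
        T obj = free.poll();              // null if the pool is empty
        return (obj != null) ? obj : factory.get();
    }

    void release(T obj) {
        free.push(obj);                   // keep the instance for later reuse
    }
}

For example, you could create it as new SimplePool<>(StringBuilder::new), call borrow() in the loop, reset the builder with setLength(0), and release() it afterwards. Whether this actually helps depends on the collector; with a generational GC, allocating short-lived objects is already cheap, so measure before committing to a pool.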
If you know there are good points in your code for running a GC, you can consider triggering it manually to see if that changes anything. This is what you would need:
// Suggest a garbage collection; the JVM treats this only as a hint.
Runtime r = Runtime.getRuntime();
r.gc();
This is just for debugging purposes. The GC does a great job most of the time, so there shouldn't be any need to invoke it on your own.