I have a question on my mind. Let's assume that I have two parameters passed to JVM:
-Xms256m -Xmx1024m
At the beginning of the program, 256MB is allocated. Then some objects are created and the JVM needs to allocate more memory; let's say it needs 800MB. The Xmx setting allows that, but the memory currently available on the system (say Linux or Windows) is only 600MB. Is it possible that an OutOfMemoryError will be thrown? Or will the swap mechanism play a role?
My second question is related to the quality of GC algorithms. Let's say that I have jdk1.5u7 and jdk1.5u22. Is it possible that in the latter JVM the memory leaks vanish and the OutOfMemoryError does not occur? Can the quality of the GC be better in the latest version?
The quality of the GC (barring a buggy GC) does not affect memory leaks, as memory leaks are an artifact of the application -- GC can't collect what isn't actual garbage.
If a JVM needs more memory, it will take it from the system. If the system can swap, it will swap (like any other process). If the system cannot swap, your JVM will fail with a system error rather than an OOM exception, because the system cannot satisfy the request, and at that point it is effectively fatal.
As a rule, you NEVER want to have an active JVM partially swapped out. A GC event will crush you as the system thrashes, cycling pages through the virtual memory system. It's one thing to have an idle background JVM swapped out as a whole, but if your machine has 1GB of RAM and your main process wants 1.5GB, then you have a major problem.
The JVM likes room to breathe. I've seen JVMs in a GC death spiral when they didn't have enough memory, even though they didn't have memory leaks. They simply didn't have enough working set. Adding another chunk of heap transformed those JVMs from awful to happy sawtooth GC graphs.
Give a JVM the memory it needs, and both you and it will be much happier.
"Memory" and "RAM" aren't the same thing. Memory includes virtual memory (swap), so you can allocate a total of free RAM+ free swap before you get the OutOfMemoryError.
Allocation depends on the OS in use.
If you allocate too much memory, you could end up with portions of the heap paged out to swap, which is slow.
Whether your program runs faster or slower depends on how the VM handles the memory.
I would specify a heap that is not too big, to make sure it doesn't occupy all the memory and slow the VM down.
Concerning your first question:
Actually, if the machine cannot allocate the 1024 MB that you asked for as the max heap size, it will not even start the JVM.
I know this because I have often noticed it when trying to open Eclipse with a large heap size: when the OS could not allocate the larger heap space, the JVM failed to load. You could also try it out yourself to confirm. So the rest of the details are irrelevant to you. Of course, if your program uses too much swap (the same as in all languages), then the performance will be horrible.
Concerning your second question:
the memory leaks vanish
Not possible, as they are bugs that you will have to fix.
and OutOfMemoryError does not occur? Can the quality of GC be better
in the latest version?
This could happen if, for example, a different GC algorithm is used and it manages to kick in before you see the exception. But if you have a memory leak, this would probably only mask it, or you would see the error intermittently.
Also, various JVMs have different GCs that you can configure.
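For illustration (which collectors are available depends on the JVM vendor and version), HotSpot lets you pick a collector with flags such as:
-XX:+UseSerialGC
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseG1GC (newer JVMs only)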
Update:
I have to admit (after seeing #Orochi's note) that I noticed the max heap behavior on Windows. I cannot say for sure that this applies to Linux as well, but you could try it yourself.
Update 2:
In answer to the comments from #DennisCheung:
From IBM (my emphasis):
The table shows both the maximum Java heap possible and a recommended limit for the maximum Java heap size setting ... It is important to have more physical memory than is required by all of the processes on the machine combined to prevent paging or swapping. Paging reduces the performance of the system and affects the performance of the Java memory management system.
Sorry if this question already exists, but I wasn't able to find it.
Can you explain the logic of Java memory usage?
Here are my steps:
Set -Xmx4000M
Run app
Do stress test
After the stress test my app used about 1.4G of RAM. But if I set -Xmx300M and did the stress test, there was no performance degradation, and the app used only about 370M (I know that -Xmx is about the heap; the GC and other things also need RAM). Why does Java reserve RAM so aggressively, and can I prevent Java from doing this while keeping a high heap size?
Update:
I'm using Java 16 OpenJDK with all default settings except -Xmx.
PC spec:
i7 10700
16 GB Ram
Why does Java reserve RAM so aggressively?
GC ergonomics.
The JVM will spend less time on garbage collection if the GC has plenty of free space, so when resizing the heap it tends to choose a heap size based on an optimal ratio of used to free space.
The JVM is typically reluctant to give memory back to the operating system. It usually only does this after a major GC, and only then if a few major GCs in a row have found that there is too much free space. (Why? Because each heap resize (up or down) entails a full GC, and that is expensive.)
It is not unusual for the heap to grow much larger than the initial size, even though it looks like the heap is bigger than it needs to be. And we also have the fact that an application's startup behavior is typically rather different to its steady state behavior. (There are lots of JVM and application "warm up" effects.)
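If you want to see how much heap the JVM has actually reserved versus the -Xmx ceiling, a minimal sketch using the standard Runtime API looks like this (the class name is just an example; the printed numbers are whatever your JVM reports at that moment):

public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();         // upper bound the heap may grow to (roughly -Xmx)
        long committed = rt.totalMemory(); // heap memory currently reserved from the OS
        long free = rt.freeMemory();       // unused part of the committed heap
        System.out.printf("max=%dM committed=%dM used=%dM%n",
                max / (1024 * 1024),
                committed / (1024 * 1024),
                (committed - free) / (1024 * 1024));
    }
}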
Can I prevent Java from doing this while keeping a high heap size?
There are some things that you could tweak.
There are GC options that will make the GC more willing to give memory back to the OS (see the sketch after this list).
There are (I think) GC options that will make the GC less eager to ask the OS for more memory.
But these things typically impact throughput; i.e. they cause your application to spend more time (more CPU cycles) running the garbage collector.
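For illustration only (exact behavior depends on the collector and JDK version, and the numbers here are made up), flags along these lines influence how eagerly the heap grows, shrinks and is returned to the OS:
-XX:MinHeapFreeRatio=10 (expand the heap only when free space after a GC falls below 10%)
-XX:MaxHeapFreeRatio=30 (shrink the heap when free space after a GC exceeds 30%)
-XX:G1PeriodicGCInterval=60000 (G1 on JDK 12 and later: run periodic GCs that can hand idle memory back to the OS)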
My advice would be to start by reading the Oracle GC Tuning Guide. After reading it, review what your performance goals are, decide on the most appropriate collector, and try the "behavior based tuning" approach. (The idea is to tell the JVM what the goals are, and let it set the low level GC tuning parameters itself to attempt to achieve them.)
Bear in mind that GC tuning is about balancing the costs versus benefits, AND that the optimal settings will vary depending on the application and its (actual) workload.
In your circumstances, I think it is probably NOT worthwhile to tune the GC. You say you have 16GB of RAM, and the app is using only 1.6GB of it. And this is stress testing, not normal operation for your application.
Now maybe you are optimizing for a production environment with less RAM. But if that is the case, you would be advised to optimize on the production platform itself ... or as close to it as you can get.
Also note that optimizing to stop the JVM allocating memory aggressively (up to the -Xmx limit), will probably reduce throughput when your application is under stress.
We are running a process that has a cache that consumes a lot of memory.
But the number of objects in that cache stays stable during execution, while memory usage grows without limit.
We have run Java Flight Recorder in order to try to guess what is happening.
In that report, we can see that UsedHeap is about half of UsedSize, and I cannot find any explanation for that.
The JVM exits and dumps an OutOfMemoryError report that you can find here:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/hs_err_pid26210.log
Here is the whole Java Flight Recorder report:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/test.7z
Does anybody know why this OutOfMemoryError is arising?
Maybe I would have to change the question ... and ask: Why are there almost 10 GB of used memory that is not used in the heap?
The log file says this:
# Native memory allocation (mmap) failed to map 520093696 bytes
for committing reserved memory.
So what has happened is that the JVM has requested a ~500MB chunk of memory from the OS via an mmap system call and the OS has refused.
Looking further at the log file, it is clear that G1GC itself is requesting more memory, and it looks like it is doing so while trying to expand the heap [1].
I can think of a couple of possible reasons for the mmap failure:
The OS may be out of swap space to back the memory allocation.
Your JVM may have hit the per-process memory limit. (On UNIX / Linux this is implemented as a ulimit.)
If your JVM is running in a Docker (or similar) container, you may have exceeded the container's memory limit.
This is not a "normal" OOME. It is actually a mismatch between the memory demands of the JVM and what is available from the OS.
It can be addressed at the OS level; i.e. by removing or increasing the limit, or adding more swap space (or possibly more RAM).
It could also be addressed by reducing the JVM's maximum heap size. This will stop the GC from trying to expand the heap to an unsustainable size [2]. Doing this may also result in the GC running more often, but that is better than the application dying prematurely from an avoidable OOME.
[1] Someone with more experience in G1GC diagnosis may be able to discern more from the crash dump, but it looks like normal heap expansion behavior to me. There is no obvious sign of a "huge" object being created.
[2] Working out what the sustainable size actually is would involve analyzing the memory usage of the entire system, and looking at the available RAM, swap resources and limits. That is a system administration problem, not a programming problem.
Maybe I would have to change the question ... and ask: Why are there almost 10 GB of used memory that is not used in the heap?
What you are seeing is the difference between the memory that is currently allocated to the heap and the heap limit that you have set. The JVM doesn't actually request all of the heap memory from the OS up front. Instead, it requests more memory incrementally ... if required ... at the end of a major GC run.
So while the total heap size appears to be ~24GB, the actual memory allocated is substantially less than that.
Normally, that is fine. The GC asks the OS for more memory and adds it to the relevant pools for the memory allocators to use. But in this case, the OS cannot oblige, and G1GC pulls the plug.
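To see this distinction in your own process, here is a minimal sketch using the standard java.lang.management API that reports used versus committed versus max heap (the class name is just an example):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSizes {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // used      = bytes occupied by objects (live plus not-yet-collected)
        // committed = memory the OS has actually handed to the JVM for the heap
        // max       = the ceiling the heap may grow towards (roughly -Xmx)
        System.out.printf("used=%dM committed=%dM max=%dM%n",
                heap.getUsed() / (1024 * 1024),
                heap.getCommitted() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}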
I was posting an answer to this question, and I realized that I'm confused over something. I asked a few co-workers comfortable in Java and we're all a little stumped.
What happens in this scenario:
Start JVM with start-up size of 512MB and maximum size of 2GB.
Your underlying OS has 1GB of memory left, so the JVM is allowed to start up.
Your program uses over 1GB of memory.
At this point, you've used all available JVM memory, but you haven't breached the JVM memory limit you set up on launch. The OS is constraining your ability to get more resources, not the JVM itself.
I would have thought that this would result in an OutOfMemoryError, as you would get if you overran the 2GB JVM size limit. I have to admit though, in practice when we're running too many processes on our servers, I tend to see a slow-down and no memory exceptions.
Please note that I'm aware of paging and that processes get killed off in Linux when memory is exhausted. I was interested in knowing if any additional mechanisms are in place at the JVM level that could cause more of a blocking effect since that's what the person in the other question was asking in his comment. I realize that the answer may simply be "No, there are no additional mechanisms in place."
Follow-Up Questions/Comments
Is this because memory exceptions are not thrown unless you hit the actual JVM memory limit? If so, what happens when the OS cannot give the JVM more memory when it hasn't reached its limit? Is there some kind of blocking or a similar mechanism until memory is available in the OS?
All modern operating systems use page caching, where data is swapped between memory and disk. While it's a slight oversimplification, each process is constrained by the number of addresses available (typically 2^32 or 2^64), not the amount of physical memory.
One of the reasons your server starts to run slowly under load is because it's having to swap data more frequently, which involves comparatively slow disk reads.
From the JavaDoc for OutOfMemoryError:
Thrown when the Java Virtual Machine cannot allocate an object because
it is out of memory, and no more memory could be made available by the
garbage collector.
What happens from an operating system perspective should a process exceed the memory limits is specific to a particular operating system, but typically the process is terminated.
After reading How many characters can a Java String have? I started to wonder:
Why is there a max heap setting in current JVMs? Why not just request more memory from the operating system when heap memory runs out and the garbage collector is unable to free the needed memory? Does anybody know the rationale behind it?
I believe that it helps sandbox Java programs, i.e. it stops them from taking all the memory on the physical machine. Also, memory leaks can still happen in Java even with garbage collection, and they can be even more subtle than in C/C++ at times.
If you really need a bigger memory allowance for your Java software, you can tweak the max VM size in the config files.
Because you don't want the JVM taking over every possible resource on your OS (well, in some cases you do, but in that case set the max heap size to the maximum your JVM/OS combination can handle).
The JVM is a virtual machine. When you create a virtual machine, you limit its resources, since typically you want more than one virtual machine on an actual machine.
Because too much heap memory can actually be detrimental - you could have a situation where a relatively small application uses all of a large heap so that when GC does kick in it brings your app to a halt while it reclaims oodles of memory.
A smaller heap would allow the GC to run more often, but not cripple your program each time.
Because physical memory is limited too. You cannot request more and more. Actually you can, and the OS will allocate virtual memory even if physical RAM is unavailable. The memory will be backed by disk, which may cause serious performance problems. In the worst case the whole computer gets stuck and you cannot do anything with it. It is better to fail earlier, i.e. it is better for the JVM to crash than for the physical host to get stuck.
Two things:
you don't want all your resources to be consumed
a heap size that is too high will cause problems with the garbage collector, because your program might pause unexpectedly for a few minutes while the GC does its job
Problem 2 is significant enough in situations where hundreds of GBs are allocated to big Java processes that alternative solutions like Terracotta's BigMemory have been developed.
Maybe you don't want your whole memory to be used by only one application.
Note that memory may not be released as long as there is still some free system memory. That means that if you have an application that runs perfectly well with -Xmx200m and you run it with no explicit max heap limit, it could end up taking the whole memory.
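For example (the value and jar name here are just placeholders), explicitly capping the heap keeps such an application within a predictable footprint:
java -Xmx200m -jar myapp.jar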
Does the Sun JVM slow down when more memory is available and used via -Xmx? (Assumption: The machine has enough physical memory so that virtual memory swapping is not a problem.)
I ask because my production servers are to receive a memory upgrade. I'd like to bump up the -Xmx value to something decadent. The idea is to prevent any heap space exhaustion failures due to my own programming errors that occur from time to time. Rare events, but they could be avoided with my rapidly evolving webapp if I had an obscene -Xmx value, like 2048m or higher. The application is heavily monitored, so unusual spikes in JVM memory consumption would be noticed and any flaws fixed.
Possible important details:
Java 6 (running in 64-bit mode)
4-core Xeon
RHEL4 64-bit
Spring, Hibernate
High disk and network IO
EDIT: I tried to avoid posting the configuration of my JVM, but clearly that makes the question ridiculously open ended. So, here we go with relevant configuration parameters:
-Xms256m
-Xmx1024m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:MaxGCPauseMillis=1000
-XX:MaxGCMinorPauseMillis=1000
-XX:+PrintGCTimeStamps
-XX:+HeapDumpOnOutOfMemoryError
By adding more memory, it will take longer for the heap to fill up; consequently, garbage collections will be less frequent. However, depending on how short-lived your objects are, you may find that each individual GC takes longer.
The primary factor for how long a GC takes is how many live objects there are. Thus, if virtually all of your objects die young and once you get established, none of them escape the young heap, you may not notice much of a change in how long it takes to do a GC. However, whenever you have to cycle the tenured heap, you may find everything halting for an unreasonable amount of time since most of these objects will still be around. Tune the sizes accordingly.
If you just throw more memory at the problem, you will get better throughput in your application, but your responsiveness can go down if you're not on a multi-core system using the CMS garbage collector. This is because fewer GCs will occur, but they will have more work to do. The upside is that more memory will be freed up by each GC, so allocation will continue to be very cheap, hence the higher throughput.
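If you want to measure this rather than guess, you can enable GC logging and compare runs with different -Xmx values; a sketch of the relevant flags (the first line is for Java 6/7/8-era JVMs such as yours, the second for JDK 9 and later):
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-Xlog:gc*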
You seem to be confusing -Xmx and -Xms, by the way. -Xms just sets the initial heap size, whereas -Xmx is your max heap size.
More memory usually gives you better performance in garbage collected environments, at least as long as this does not lead to virtual memory usage / swapping.
The GC only tracks references, not memory per se. In the end, the VM will allocate the same number of (mostly short-lived, temporary) objects, but the garbage collector needs to be invoked less often - the total amount of garbage collector work will therefore not be more - even less, since this can also help with caching mechanisms which use weak references.
I'm not sure if there is still a server and a client VM for 64 bit (there is for 32 bit), so you may want to investigate that also.
In my experience, it does not slow down, BUT the JVM tries to cut back to Xms all the time and tries to stay at or close to the lower boundary. So if you can afford it, bump Xms as well. Sun recommends setting both to the same size. Add something like -XX:NewSize=512m (just a made-up number) to avoid the costly pile-up of old data in the old generation, which leads to longer/heavier GCs along the way. We are running our web app with a 700 MB NewSize because most data is short-lived.
So, bottom line: I do not expect a slow-down, but do put more of your memory to work. Set a larger new-generation size and set Xms to Xmx to lower the stress on the GC, because it does not need to keep trying to cut back to the Xms limit...
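As a sketch of what that could look like on the command line (the sizes are made up and should be tuned for your own workload):
-Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m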
Increasing -Xmx typically will not help your performance/throughput.
Theoretically there could be longer "stop the world" phases, but in practice with CMS that's not a real problem.
Of course you should not set -Xmx to some insane value like 300GB unless you really need it :)