Java OutOfMemoryError Exception

I'm confused on what OutOfMemoryError really is.
The question I have asks what the OutOfMemoryError signifies:
1. The process ran out of heap space.
2. The process ran out of physical memory.
3. The operating system ran out of virtual memory.
I have looked around online and there are many different answers for both options 1 and 3. I'm leaning towards option 1, but I'm not so sure.

The answer is: potentially all of them, more or less.
If you fill up the regular Java heap (and the GC is unable to reclaim enough space) you will get an OutOfMemoryError. This is the most common case.
A process does not run out of physical memory per se. Physical memory is the operating system's concern. A process only sees virtual memory.
However, there are a few cases where you can get failures related to running out of physical RAM or virtual memory address space. For example:
When a 32-bit process fills all of the 32-bit address space (or at least all of it that the OS can map).
When a process runs against virtual address space limits set from the outside; e.g. using ulimit on Linux.
When the OS runs out of swap space for new virtual memory allocations.
When swapping is disabled, and the OS runs out of physical RAM.
Each of these scenarios ... and others ... can trigger unusual OutOfMemoryErrors at the point where the JVM requests a memory-related resource from the OS; e.g. to create a thread stack, allocate a direct buffer, grow the heap, and so on.
You can typically distinguish the different causes by looking carefully at the exception messages. But doing it programmatically would be awkward.
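As an illustration, here is a minimal sketch (the class name `OomMessage` and helper `tryAllocate` are hypothetical) that provokes an OutOfMemoryError with an oversized allocation and inspects its message; the exact message strings are HotSpot-specific:

```java
public class OomMessage {
    // Attempts to allocate an array and reports what happened.
    static String tryAllocate(int longs) {
        try {
            long[] block = new long[longs];
            return "allocated " + block.length + " longs";
        } catch (OutOfMemoryError e) {
            // Typical HotSpot messages include "Java heap space",
            // "Requested array size exceeds VM limit", "Metaspace",
            // and "unable to create native thread".
            return "OutOfMemoryError: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryAllocate(1024));              // small request: succeeds
        System.out.println(tryAllocate(Integer.MAX_VALUE)); // ~16 GB request: fails
    }
}
```

Parsing those message strings is about the only way to tell the causes apart in code, which is why doing it programmatically is awkward.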
And there is another scenario. If the operating system has overcommitted RAM, and it notices that the virtual memory system is doing excessive paging / swapping I/O, the OSes "oomkiller" may decide to kill a process. If the "victim" is a JVM, the Java application won't get an OutOfMemoryError at all, and it won't even get to execute shutdown hooks. It will be "kill -9"'d.

The most common case for OutOfMemoryError => It can happen when the heap is getting overrun and the GC (generally the full GC) is running all the time trying to reclaim heap space, but is unable to make any meaningful progress. This is the most common case, as others have also pointed out.
Process ran out of physical memory => Modern OSes do not deal with physical memory exclusively; they deal with virtual memory. When the process approaches the threshold of "running out" of physical memory AND paging goes over a certain threshold (or swapping is disabled), the OS will act and kill the process and/or other processes. It could also be that the JVM throws an OutOfMemoryError before that happens, since its request to the OS for more memory to grow the heap might fail. It might also be that the JVM throws not one but many OutOfMemoryErrors in succession, and then the OS takes over. See below for the Linux OOM Killer.
OS ran out of virtual memory => the JVM process can signal an OutOfMemoryError, or the OS can kill it as described above.
In addition to what has already been answered, it needs to be pointed out that in certain cases the only evidence of the JVM process being killed is in the OS logs. You would not find any trace of the JVM throwing an OutOfMemoryError in your application log files (or in the JVM logs, traces, etc.). The only evidence would be in the OS logs.
Note that the JVM's OOM error (OutOfMemoryError) is thrown on a best-effort basis, as there might be situations in which the JVM is unable to proceed far enough even to throw that error. It can also happen when the OS does a "kill -9" of processes, which could include the JVM. For more info, read about the "Linux OOM Killer", which steps in when memory requirements exceed a certain threshold on the system.
In all, the issue of OS memory management is complex. Once a memory-related error occurs, the specific reason will be detailed either in the JVM or application logs, or in the OS logs.
Also note: even if there is only one OOM error in the application or JVM logs, you might find the application still running. However, this does not mean that everything has recovered and is hunky-dory. The JVM at this point could be in a corrupt state, and it is best to kill and restart it, with appropriate testing.

Related

Doesn't the JVM argument -Xms mean the process won't start if the specified memory is missing?

I had set -Xms to 32g and -Xmx to 32g.
The program started, but when around 25GB of data was loaded into memory, the process was killed by Linux, citing memory issues.
If 32g was already assigned to the process because -Xms was 32g, why did it run out of memory?
Doesn't -Xms mean "allocate this memory at the beginning, and if you cannot, don't start the process"?
Can someone please explain why the program fails?
"Allocating memory" really means "allocating virtual address space". Modern operating systems separate the address space used by a process from physical memory.
So, with -Xms32G, you've got 32G of address space.
Actual memory is allocated in pages and on demand, which generally means that something has to actually 'touch' a page before memory is 'committed' to the page.
Thus in reality, memory is only being committed as needed. And if the OS decides it is under real-memory pressure at the time, you're likely to get killed.
You can force the JVM to touch each page at startup by using the JVM option -XX:+AlwaysPreTouch. The likely effect of this will be that the JVM process starts and gets killed during its initialization, before your program is entered at main(). You still can't get the memory, you just find out sooner.
The OOM killer being what it is, it is also possible that the pretouch will go ok, your code will run, but at some later time due to other system activity, the kernel will decide it's critically low on available resources, and since you're probably the largest process around, there's a target on your back.
First of all, -Xms and -Xmx control the heap of your Java process. A Java process needs memory for other things too, like off-heap allocations, GC structures, etc. This means that when you give your application a heap of 1GB, your Java process is going to need more memory than that.
The second point is that -Xms controls the committed memory in the virtual address space. Virtual address space on a 64-bit CPU is huge. Committed memory becomes resident in a lazy fashion ("resident" meaning actually in RAM). So when you set -Xmx and -Xms, you get a portion of virtual memory that is made resident only as needed.
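You can see the difference between the -Xmx ceiling and the heap the JVM has actually claimed from inside the process, using the standard java.lang.Runtime API (a minimal sketch; the class name `HeapFigures` is made up):

```java
public class HeapFigures {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();         // the -Xmx ceiling (reserved, not resident)
        long committed = rt.totalMemory(); // heap the JVM has claimed so far
        long free = rt.freeMemory();       // claimed but not yet occupied by objects
        System.out.printf("max=%d MB, committed=%d MB, free=%d MB%n",
                max >> 20, committed >> 20, free >> 20);
    }
}
```

Unless -Xms equals -Xmx, the committed figure typically starts well below the maximum and grows as the application allocates.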
You should carefully read the logs on this part:
giving the reason of memory issues
you were probably killed by the OOM Killer. As noted above, the process needs more memory than just the heap.
If you want to allocate the memory and make it resident up front, add the -XX:+AlwaysPreTouch flag. This will make your process start a lot slower, though, and in general it is not a great idea, although it does have its benefits in some applications.

OutOfMemoryException in Java process, but Used Heap is about half of Used Size

We are running a process that has a cache that consumes a lot of memory.
But the number of objects in that cache stays stable during execution, while memory usage keeps growing without limit.
We have run Java Flight Recorder in order to try to guess what is happening.
In that report, we can see that UsedHeap is about half of UsedSize, and I cannot find any explanation for that.
The JVM exits and dumps an OutOfMemory report that you can find here:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/hs_err_pid26210.log
Here is the whole Java Flight Recorder report:
https://frojasg1.com/stackOverflow/20210423.outOfMemory/test.7z
Does anybody know why this outOfMemory is arising?
Maybe I should change the question and ask: why are there almost 10 GB of used memory that is not used in the heap?
The log file says this:
# Native memory allocation (mmap) failed to map 520093696 bytes for committing reserved memory.
So what has happened is that the JVM has requested a ~500MB chunk of memory from the OS via an mmap system call and the OS has refused.
When I looked at more of the log file, it became clear that G1GC itself was requesting more memory, and it looks like it was doing so while trying to expand the heap [1].
I can think of a couple of possible reasons for the mmap failure:
The OS may be out of swap space to back the memory allocation.
Your JVM may have hit the per-process memory limit. (On UNIX / Linux this is implemented as a ulimit.)
If your JVM is running in a Docker (or similar) container, you may have exceeded the container's memory limit.
This is not a "normal" OOME. It is actually a mismatch between the memory demands of the JVM and what is available from the OS.
It can be addressed at the OS level; i.e. by removing or increasing the limit, or adding more swap space (or possibly more RAM).
It could also be addressed by reducing the JVM's maximum heap size. This will stop the GC from trying to expand the heap to an unsustainable size [2]. Doing this may also make the GC run more often, but that is better than the application dying prematurely from an avoidable OOME.
[1] Someone with more experience in G1GC diagnosis may be able to discern more from the crash dump, but it looks like normal heap-expansion behavior to me. There is no obvious sign of a "huge" object being created.
[2] Working out what the sustainable size actually is would involve analyzing the memory usage of the entire system, and looking at the available RAM and swap resources and the limits. That is a system administration problem, not a programming problem.
Maybe I should change the question and ask: why are there almost 10 GB of used memory that is not used in the heap?
What you are seeing is the difference between the memory that is currently allocated to the heap and the heap limit that you have set. The JVM doesn't actually request all of the heap memory from the OS up front. Instead, it requests more memory incrementally, if required, at the end of a major GC run.
So while the total heap size appears to be ~24GB, the actual memory allocated is substantially less than that.
Normally, that is fine. The GC asks the OS for more memory and adds it to the relevant pools for the memory allocators to use. But in this case, the OS cannot oblige, and G1GC pulls the plug.
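The gap between committed and maximum heap is directly observable through the standard MemoryMXBean (a minimal sketch; the figures vary by collector and flags, and the class name `CommittedVsMax` is made up):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class CommittedVsMax {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // init      ~ the initial request (-Xms)
        // committed ~ memory actually obtained from the OS for the heap
        // max       ~ the ceiling the GC may grow toward (-Xmx), or -1 if undefined
        System.out.printf("init=%d committed=%d max=%d (bytes)%n",
                heap.getInit(), heap.getCommitted(), heap.getMax());
    }
}
```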

What happens to java process if the physical memory is very low on system

I have a Java process running, doing some tasks. After a couple of hours, multiple other applications are opened on the system, leaving very little physical memory available.
So, if the system has no physical memory (or very little) left, how will my Java process respond to such a situation? Will it throw an OutOfMemoryError?
When RAM is exhausted the OS will usually use swap or pagefile to provide virtual memory:
RAM is a limited resource, whereas for most practical purposes, virtual memory is unlimited. There can be many processes, and each process has its own 2 GB of private virtual address space. When the memory being used by all the existing processes exceeds the available RAM, the operating system moves pages (4-KB pieces) of one or more virtual address spaces to the computer’s hard disk. This frees that RAM frame for other uses. In Windows systems, these “paged out” pages are stored in one or more files (Pagefile.sys files) in the root of a partition.
Paging usually results in a severe performance penalty because even modern SSD storage is not as fast as RAM. If the memory scarcity continues, the system may start thrashing.
Assuming that the JVM does not require more memory (e.g. it is already constrained by -Xmx and has allocated all the memory it is allowed), it will continue to run. When memory is exhausted, the OS will usually not allow new processes to start; e.g. attempting to start a new JVM process will result in the following error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
At the end of the day, it depends on the OS configuration. Usually this situation is not something you want to spend much time investigating, because RAM is much cheaper than a developer's time.

When do out of memory exceptions actually occur in Java?

I was posting an answer to this question, and I realized that I'm confused over something. I asked a few co-workers comfortable in Java and we're all a little stumped.
What happens in this scenario:
1. Start the JVM with a start-up size of 512MB and a maximum size of 2GB.
2. Your underlying OS has 1GB of memory left, so the JVM is allowed to start up.
3. Your program uses over 1GB of memory.
At this point, you've used all available JVM memory, but you haven't breached the JVM memory limit you set up on launch. The OS is constraining your ability to get more resources, not the JVM itself.
I would have thought that this would result in an OutOfMemoryError, as you would get if you overran the 2GB JVM size limit. I have to admit, though, that in practice, when we're running too many processes on our servers, I tend to see a slow-down and no memory exceptions.
Please note that I'm aware of paging and that processes get killed off in Linux when memory is exhausted. I was interested in knowing if any additional mechanisms are in place at the JVM level that could cause more of a blocking effect since that's what the person in the other question was asking in his comment. I realize that the answer may simply be "No, there are no additional mechanisms in place."
Follow-Up Questions/Comments
Is this because memory exceptions are not thrown unless you hit the actual JVM memory limit? If so, what happens when the OS cannot give the JVM more memory when it hasn't reached its limit? Is there some kind of blocking or a similar mechanism until memory is available in the OS?
All modern operating systems use page caching, where data is swapped between memory and disk. While it's a slight oversimplification, each process is constrained by the number of addresses available (typically 2^32 or 2^64), not by the amount of physical memory.
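A quick way to see which address-space regime a JVM is running in (note: the `sun.arch.data.model` property is HotSpot-specific and may be absent on other JVMs, hence the fallback):

```java
public class AddressSpace {
    public static void main(String[] args) {
        // "32" or "64" on HotSpot; fall back to "unknown" elsewhere.
        String bits = System.getProperty("sun.arch.data.model", "unknown");
        System.out.println("data model: " + bits + " bits");
        // A 32-bit process can address at most 2^32 bytes = 4 GiB,
        // regardless of how much physical RAM the machine has.
        System.out.println("2^32 bytes = " + (1L << 32));
    }
}
```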
One of the reasons your server starts to run slowly under load is because it's having to swap data more frequently, which involves comparatively slow disk reads.
From the JavaDoc for OutOfMemoryError:
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
What happens from an operating-system perspective when a process exceeds its memory limits is specific to the particular operating system, but typically the process is terminated.

Why is there a max heap setting in Java?

After reading How many characters can a Java String have? I started to wonder:
Why is there a max heap setting in current JVMs? Why not just request more memory from the operating system when heap memory runs out and the garbage collector is unable to free the needed memory? Does anybody know the rationale behind it?
I believe that it helps sandbox Java programs, i.e. stops them from taking all the memory on the physical machine. Also, memory leaks can still happen in Java even with garbage collection, and they can be even more subtle than in C/C++ at times.
If you really need a bigger memory allowance for your Java software, you can tweak the max VM size in the config files.
Because you don't want the JVM taking over every possible resource on your OS (well, in some cases you do, but in that case set the max heap size to the maximum your JVM/OS combination can handle).
The JVM is a virtual machine. When you create a virtual machine, you limit its resources, since typically you want more than one virtual machine on an actual machine.
Because too much heap memory can actually be detrimental: you could have a situation where a relatively small application uses all of a large heap, so that when GC does kick in, it brings your app to a halt while it reclaims oodles of memory.
A smaller heap would allow GC to run more often, without crippling your program each time.
Because physical memory is limited too. You cannot request more and more. Actually, you can, and the OS will allocate virtual memory even if physical RAM is unavailable; the memory will then be backed by disk, which may cause serious performance problems. In the worst case, the whole computer gets stuck and you cannot do anything with it. It is better to fail earlier, i.e. better that the JVM crashes than that the physical host grinds to a halt.
Two things:
1. you don't want all your resources to be consumed
2. a too-high heap size will cause problems for the garbage collector, because your program might pause for a few minutes unexpectedly while the GC does its job
Problem 2 is significant enough that, for situations where hundreds of GBs are allocated to big Java processes, alternative solutions such as Terracotta's BigMemory have been developed.
Maybe you don't want your whole memory to be used by only one application.
Note that memory may not be released back to the OS while there is still some free heap. This means that an application that runs perfectly well with -Xmx200m could, if run with no max heap limit, end up taking all available memory.
